
Using the Pivotal Tracker API to automatically deliver stories

Like many agile shops, our company uses Pivotal Tracker as our project management system. Our QA deployment process is almost completely automated, but we still had to log in to Pivotal manually to “deliver” the stories after everything was successfully pushed to QA. Not a difficult task, to be sure, but the build server typically runs after hours and QA is offshore on the other side of the world, so there was always a disconnect between what was marked as delivered in Pivotal and what had actually been delivered to the QA environments.

Fortunately, Pivotal has an API that is very easy to work with. To get an API token, simply log in to Pivotal, go to your profile page, and scroll down. The token is passed in an X-TrackerToken header on each request. For my integration I chose to write a simple PowerShell script, since we already use PowerShell extensively on the build server:

# call the Pivotal API to search for all finished stories labeled with the tag for this release
$webClient = New-Object System.Net.WebClient
$webClient.Headers['X-TrackerToken'] = $pivotalToken
$url = "http://www.pivotaltracker.com/services/v3/projects/{0}/stories?filter=label%3A%22{1}%22%20state%3Afinished" -f $pivotalProjectId, $tag
$data = $webClient.DownloadString($url)

# parse the XML response
[xml]$storiesXml = $data

# read all of the stories and save their ids into an array
$finishedStories = @()
foreach ($story in $storiesXml.stories.story)
{
    $finishedStories += $story.SelectSingleNode('id').InnerXml
}

# PUT an updated current_state to each story to mark it delivered
foreach ($storyId in $finishedStories)
{
    $url = "http://www.pivotaltracker.com/services/v3/projects/{0}/stories/{1}" -f $pivotalProjectId, $storyId
    # WebClient does not reliably keep custom headers between requests, so set them on every call
    $webClient.Headers['X-TrackerToken'] = $pivotalToken
    $webClient.Headers['Content-Type'] = 'application/xml'
    $webClient.UploadString($url, 'PUT', '<story><current_state>delivered</current_state></story>')
}

Any scripting language or a simple console app would do. Pivotal also has an API method to deliver all stories that are marked as finished, but we only needed to deliver stories matching a certain tag, hence the initial search and the loop over those stories.
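If you don't need the label filter, the bulk call is even simpler. Going from memory of the v3 docs, the deliver-all-finished endpoint looks something like this, so verify the path against the current documentation before relying on it:

# deliver every finished story in the project, no label filter (endpoint path per the v3 docs)
$webClient = New-Object System.Net.WebClient
$webClient.Headers['X-TrackerToken'] = $pivotalToken
$url = "http://www.pivotaltracker.com/services/v3/projects/{0}/stories/deliver_all_finished" -f $pivotalProjectId
$webClient.UploadString($url, 'PUT', '')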

You can read more about the Pivotal API from their web site:

https://www.pivotaltracker.com/help/api?version=v3


Automating SQL reports with sqlcmd and sendemail

The Windows platform certainly has its advantages, but one of the things I miss about working in a *nix environment is the power of the command line. There’s something satisfying about automating ridiculously complex tasks by editing one line in a text file on your system, then forgetting about it.

Case in point: I have a data set that we are building, and every morning I log in, open up SQL Server Management Studio, execute a query to check on its progress, then email someone else with that report. The SQL Server in question is on a hosted box with no reporting or business intelligence tooling installed, and I don’t even have admin access to it (the shame!). On a Linux box I’d just set up a cron job to pipe the MySQL query results to sendmail and be done with it, but how do you do this on a Windows box?

Windows doesn’t ship with a built-in command-line mailer, but there are a few open-source options. There is a version of the classic sendmail available for Windows, but that involves some setup and editing of INI files, and I don’t have the patience for that. There’s also a delightful little program called “SendEmail.exe”, with no external requirements. Simply download the exe, add it to your PATH, and you can send email from the command line with ease. In my case, I can now take the output of that daily query, run it via sqlcmd (you should already have sqlcmd if you’re doing any SQL development), and pipe it to sendemail.exe. You will need to supply the “from” and “to” addresses and your SMTP server:

sqlcmd -U your_sql_user -P your_sql_pass -S your_sql_server -d your_sql_db -Q "exec your_sql_sproc" | sendemail -f auto@home -t your_email_address -s your_smtp_server
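To complete the cron analogy, save that one-liner in a batch file and register it with Task Scheduler from the command line. The task name, path, and schedule below are just placeholders for illustration:

schtasks /Create /SC DAILY /ST 07:00 /TN "DailySqlReport" /TR "C:\scripts\daily-report.bat"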

There is one major flaw with sendemail.exe: messages get truncated at around a page or two (32,000 characters, perhaps?), but for quick little queries it gets the job done.

You can download SendEmail here:

http://caspian.dotconf.net/menu/Software/SendEmail/#tls

Free Automated Backups For Your Windows PC

Windows Backup does an adequate job, but unfortunately in Windows 7 you need the Professional or Ultimate edition to back up to a network share. If you want to save your backups to a network device, or even just to sync files between two computers on your home network, Windows 7 Home won’t do, and I’m not about to pay for the upgrade to Pro just to sync files between computers. Fortunately there is a free solution: SyncToy.

SyncToy is a free “PowerToy” offered by Microsoft that lets you set up folder pairs to be synchronized. Out of the box it does not include any built-in scheduling, but it does offer a command line interface making it easy to roll your own scheduling.

First, download SyncToy and set up the folder pairs you want to synchronize. Each folder pair has a name: remember this for later. You can synchronize folders between machines across the network, or to an external USB drive. Once you have your folder pairs set up you can create a simple batch file (yes, remember batch files!) and call the SyncToy command line interface, like so:

"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R FOLDER_PAIR_NAME

If your folder pair name has spaces in it you may find the command line interface temperamental – just rename it to something without spaces.

Once your batch file is created, simply set up a schedule for it in Task Scheduler.
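You can also register the task from the command line to keep everything scriptable; the task name, time, and batch file path here are placeholders:

schtasks /Create /SC DAILY /ST 22:00 /TN "SyncToyNightly" /TR "C:\scripts\synctoy-backup.bat"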

If you are synchronizing to a network share, you may want to check that the share exists before running SyncToy. This isn’t strictly necessary, but it does keep SyncToy from searching for the share and burning resources when the share isn’t available:

IF NOT EXIST "\\PATH\TO\NETWORK\SHARE\." GOTO EXIT
"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R FOLDER_PAIR_NAME
:EXIT

Reducing your Amazon S3 costs… with a catch

Amazon just recently announced a “Reduced Redundancy Storage” option for S3 objects. In short, you can slash the costs of S3 storage by 33% by accepting a slightly greater chance of losing your data. So ask yourself…

Do I feel lucky? Well, do ya, punk?

In truth, the odds of any data loss in Amazon S3 are minuscule, under both the traditional model and under RRS. If you use S3, I highly recommend starting with Werner Vogels’ article on RRS and durability:

The same goes for durability; core to the design of S3 is that we go to great lengths to never, ever lose a single bit. We use several techniques to ensure the durability of the data our customers trust us with, and some of those (e.g. replication across multiple devices and facilities) overlap with those we use for providing high-availability. One of the things that S3 is really good at is deciding what action to take when failure happens, how to re-replicate and re-distribute such that we can continue to provide the availability and durability the customers of the service have come to expect. These techniques allow us to design our service for 99.999999999% durability.

Under RRS, instead of 99.999999999% durability, your object is stored with only enough redundancy to survive the loss of data in a single facility, or 99.99% durability:

We can now offer these customers the option to use Amazon S3 Reduced Redundancy Storage (RRS), which provides 99.99% durability at significantly lower cost. This durability is still much better than that of a typical storage system as we still use some forms of replication and other techniques to maintain a level of redundancy. Amazon S3 is designed to sustain the concurrent loss of data in two facilities, while the RRS storage option is designed to sustain the loss of data in a single facility. Because RRS is redundant across facilities, it is highly available and backed by the Amazon S3 Service Level Agreement.

Yes, it’s still covered by the SLA! Finally, to summarize the real risk in terms your manager can understand, take this from the RRS announcement on the AWS blog:

The new REDUCED_REDUNDANCY storage class activates a new feature known as Reduced Redundancy Storage, or RRS. Objects stored using RRS have a durability of 99.99%, or four 9’s. If you store 10,000 objects with us, on average we may lose one of them every year. RRS is designed to sustain the loss of data in a single facility.

I suspect that for most business applications 99.99% durability is “good enough” and a 33% cost savings is a great trade-off.

Finally, for my fellow .NET developers… Amazon did update their .NET SDK with this announcement. Be sure to download the latest version.
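For what it’s worth, here is a minimal sketch of uploading an object with the RRS storage class from PowerShell via the .NET SDK. The assembly path, bucket, key, and credential variables are placeholders, and the exact property and enum names (StorageClass, S3StorageClass.ReducedRedundancy) may vary between SDK versions, so check the SDK documentation before using it:

# a sketch only -- verify property/enum names against your version of the AWS SDK for .NET
Add-Type -Path 'C:\path\to\AWSSDK.dll'

$s3 = [Amazon.AWSClientFactory]::CreateAmazonS3Client($awsAccessKey, $awsSecretKey)

$request = New-Object Amazon.S3.Model.PutObjectRequest
$request.BucketName   = 'my-bucket'              # placeholder bucket
$request.Key          = 'reports/daily.zip'      # placeholder key
$request.FilePath     = 'C:\reports\daily.zip'   # placeholder local file
$request.StorageClass = [Amazon.S3.Model.S3StorageClass]::ReducedRedundancy

$s3.PutObject($request) | Out-Null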

6 Amazing Techniques for Staying Happy During a Stressful Project

Another good post from Zen Habits. Here’s an excerpt:

2. Break The Project Down Into Tiny Chunks
The length of my list on any given day would scare even Warren Buffett. I expect too much and never feel satisfied.

Instead of getting a whole bunch of stuff done, my brain often shuts down in response to my overwhelming list. I need to figure out a way to reduce those negative thoughts by listening to them and reasoning with them. I have tried being a big bully and forcing myself to do work, but this technique always lacked results.

I’ve created a routine that allows me to handle my work load. When I realize that I have overextended my “to do” wish list, I stand up, breathe deep, and let out the air as I take a moment to refill my glass with water.

I then break down my first big task into twenty minute chunks. When the project seems more manageable, I pick an easier 20 minute chunk and accomplish it. By doing this I boost my confidence and get my emotions back into a positive state.