
Azure SQL Database Migration Fail: The system cannot find the file specified error

As a preface, I must say that I'm quite impressed with Microsoft's offering in the cloud space. While not quite as mature as AWS, their platform-as-a-service, integrated with Visual Studio and paired with the excellent Azure web management portal, is incredibly enticing if you're developing on the Microsoft stack.

I lead with that preface to be clear that this isn't a gripe… I just ran into an error that was ultimately trivial to solve, but troublesome to debug. When you're deploying to bare metal or even a virtual machine, you can typically just remote into the web server to view the configuration files or error logs. Not so with a cloud service. It pays to read all the documentation and tutorials beforehand to ensure you understand the platform.

In this case, I had an ASP.NET MVC application that had been working fine locally, and working fine on an Amazon AWS instance with both IIS and SQL Server installed. Conversion to an Azure cloud service was a snap – after installing the Azure toolkit, conversion is as simple as right-clicking on the project file and choosing the convert-to-Azure-service option.

I imported the SQL Express database to Azure using the open source migration toolkit, which was a breeze. At first I couldn't connect to the SQL Database in Azure when testing locally, but the error message was very helpful, directing me to open the firewall to my local machine. Once that was done, I could run the project locally without any problem.

However, after uploading the web project to the Azure cloud service, it could not find the database:

System.Data.ProviderIncompatibleException: An error occurred while getting provider information from the database. This can be caused by Entity Framework using an incorrect connection string. Check the inner exceptions for details and ensure that the connection string is correct. ---> System.Data.ProviderIncompatibleException: The provider did not return a ProviderManifestToken string. ---> System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) ---> System.ComponentModel.Win32Exception: The system cannot find the file specified

I confirmed the SQL database firewall was configured to allow access to other services. I confirmed the connection strings were correct in the web.config file. Still no luck.

Finally, it was a *doh* moment. While the connection string was correct in the base web.config file, it was incorrect in the Web.Release.config file. There, it was configured to use LOCALHOST – which happened to be set up on my development machine. Works fine on my machine… as we say.
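For reference, the fix is just making the Release transform point at the Azure SQL Database instead of LOCALHOST. Here's a simplified sketch of what the Web.Release.config entry should look like – the connection name, server, and credentials are placeholders, and the xdt namespace is declared on the root element of the transform file:

<connectionStrings>
  <!-- Overwrites the matching entry in the base web.config on a Release build/deploy -->
  <add name="DefaultConnection"
       connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;"
       xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
</connectionStrings>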

Of course, if this little project had been deployed to a multi-machine architecture before, we would have found the problem then. Initially, I wanted to blame Azure, but the error would have been the same even if we had deployed to a VPC in Azure with separate web servers and database instances.

Facebook, iPads, and the dangers of ASP.NET browser detection

ASP.NET webforms automatically tries to detect the capabilities of the user's web browser to render appropriate HTML and JavaScript to the client. It does so by parsing the user-agent string and running a series of regular expression matches to identify the browser in its local database of browser capabilities.

Yes, user-agents and regular expressions, just like it’s 1999.

Obviously, this can go horribly wrong.

Recently we had an issue in production where users on iPads were being presented with a truly wretched user experience. JavaScript was completely disabled, layout was skewed, design elements were misplaced… We finally tracked it down to the browser definition files on an older web server, combined with the unique user-agent string that gets sent when a user browses from within the Facebook iPad application. If you're on an iPad using Safari, your user agent string will typically look something like this:

Mozilla/5.0 (iPad; U; CPU iPhone OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Version/4.0.4 Mobile/7B314 Safari/531.21.10

In an older (ASP.NET 4.0) version of the browser definition files, this won't be recognized as an iPad, because iPads didn't exist yet. But the "Safari" and "Version/###" tokens in the user agent string will be picked up by the safari.browser file, and you'll at least get JavaScript enabled.

However, if you’re browsing from within the Facebook app your user agent string will be:

Mozilla/5.0 (iPad; CPU OS 6_0_1 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Mobile/10A523 [FBAN/FBIOS;FBAV/5.3;FBBV/89182;FBDV/iPad2,5;FBMD/iPad;FBSN/iPhone OS;FBSV/6.0.1;FBSS/1; FBCR/;FBID/tablet;FBLC/en_US]

Notice there is no "Safari" in the user agent string. In our case ASP.NET couldn't recognize the browser at all, so it downgraded the experience to no JavaScript and a downlevel, table-based layout.

This issue can be fixed with custom definitions in your App_Browsers folder, or by installing .NET 4.5 with the updated browser definition files on your server.
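If you go the App_Browsers route, a custom .browser definition along these lines does the trick. This is only a sketch – the match pattern and capability values are assumptions you should tune and test against your own traffic – but it tells ASP.NET to treat anything with "iPad" in the user agent as an uplevel, JavaScript-capable browser:

<browsers>
  <!-- App_Browsers/iPad.browser: match any user agent containing "iPad" -->
  <browser id="iPad" parentID="Mozilla">
    <identification>
      <userAgent match="iPad" />
    </identification>
    <capabilities>
      <capability name="browser" value="iPad" />
      <capability name="javascript" value="true" />
      <capability name="ecmascriptversion" value="3.0" />
      <capability name="w3cdomversion" value="1.0" />
      <capability name="cookies" value="true" />
      <capability name="tables" value="true" />
    </capabilities>
  </browser>
</browsers>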

Or better yet, ditch webforms entirely and use a simple layout with progressive enhancement. Even with patched servers, this ancient method of parsing user agent strings is not a good long-term strategy.

 

SVN Merge, Reintegrate, and Release Practices

As the unofficial CM / release manager at my current position, I've learned quite a bit about SVN and release management – typically by making mistakes. I won't pretend that this is the one best practice to rule them all, since any practice should be tailored to your company, but this has worked out well for us.

We have a fairly typical agile/scrum based environment, with the exception that we release to production monthly instead of having weekly releases. If you’re releasing weekly then your feature sets and branches are typically quite small, and you can get away with hardly paying any attention to your branching strategy.

Furthermore – we branch often. Very often. Most of our development work is integrations with other companies, so it's not uncommon to make a branch and spin up an integration for every partner's project. We often have more branches than we have developers, with most branches being used for integration testing – release date unknown.

The strategy we have adopted is to have one main release line, with a branch for each monthly release. The majority of our development happens in this line, and all releases happen from this line. We do not release from trunk. trunk represents an exact copy of the code that is in production, and is reserved only for emergency support or critical bug fixes. Immediately after a release, that branch is reintegrated back into trunk, and the branch is shut down for commits. Feature branches are typically branched from trunk, since their release date is unknown:

[Diagram: branch structure before the reintegration]

 

Before subversion 1.6, this pattern of branch-merge-reintegrate was very difficult, since subversion didn't support merge tracking and you had to manually record every revision that was merged into every branch. Fortunately, this is much easier today. Here's how:

Before proceeding, make sure that:

  1. All changes from trunk have been merged into the feature branch. This is absolutely required before a reintegrate.
  2. Ensure that all changes from the feature branch have been merged to any subsequent branches. In our example, all changes from "MonthlyRelease" are merged to "NextMonthlyRelease".
  3. Ensure that all changes from trunk have been merged to any other feature branches, as needed. In our example, all changes from "trunk" are merged to "Feature Branch".

For the purposes of this example, I'll assume you are using TortoiseSVN on Windows (the same can be accomplished with the svn command line client – see the sketches below – and you can even automate it via that route).

Switch your local working copy to trunk, or check out a copy of trunk into a new folder if you don't want to switch. Right-click on the working copy of trunk, select TortoiseSVN, and then the "Merge" option.

In the dialog that pops up, select "Reintegrate a branch".

In the reintegrate merge dialog, enter the URL of the feature branch that you wish to reintegrate. In our example, that would be the URL for "MonthlyRelease".

As with any big merge, you should do a “test merge” first to identify any problems. But if the feature branch contains all merges from trunk and no merges from any other branches, the reintegrate should be easy.

After testing, click “Merge” on the merge options dialog and all changes from the feature branch will be applied to your local working copy of trunk.

Immediately commit all changes to the repository. Do not make any other changes before committing the reintegration. Use a prefix or easily identifiable commit message so you can easily find this commit later.

Trunk now contains all commits from the feature / release branch. You can safely set the MonthlyRelease branch to read-only and close it to any further commits.
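If you prefer the command line (or want to script this step), the equivalent is roughly the following – a sketch only, assuming a conventional layout with the branch at ^/branches/MonthlyRelease, run from a clean working copy of trunk:

svn merge --reintegrate ^/branches/MonthlyRelease .
svn commit -m "REINTEGRATE: merge branches/MonthlyRelease back into trunk"

The commit message prefix is just a convention so you can find the reintegration revision later for the record-only merges.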

Finally, we need to address the reintegrate in our other living branches:

[Diagram: branch structure after the reintegration]

"NextMonthlyRelease" already has all of the commits that were in "MonthlyRelease". We do not want to apply these commits to this branch again, because they have already been applied. We can avoid that with a "record only" merge, which tells subversion that a commit has been applied to a branch without actually applying any changes from that commit.

Switch to the "NextMonthlyRelease" branch, and then select "Merge" from the TortoiseSVN menu. Use trunk as the URL to merge from, and select a range of revisions by clicking the "Show Log" button. Since the feature branch has already been merged to this branch, select the reintegration commit from the log (you remembered the comment for that commit, right?).

Click “Next” and on the merge options dialog, select “only record the merge”. The one commit that includes all changes from “MonthlyRelease” will be marked as merged into this branch, and subversion won’t try to re-merge the changes later.
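The command line equivalent, again as a rough sketch – run from a working copy of NextMonthlyRelease, where REINTEGRATION_REV is the revision number of the reintegration commit on trunk:

svn merge --record-only -c REINTEGRATION_REV ^/trunk .
svn commit -m "Record-only merge of the MonthlyRelease reintegration from trunk"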

Merging trunk into our "Feature Branch" is just a standard merge – we do NOT want to do a record-only merge, because we want all the changes from MonthlyRelease to be applied to this branch.

After trunk has been merged to any and all feature branches – the reintegrate is complete.

 

Upgrading ASP.NET MVC 1.0 to 2.0 Gotchas

I know ASP.NET MVC is on version 4 now… and version 1 is soooooo old news…. but our company finally got around to upgrading some old applications from ASP.NET MVC 1, and I thought I’d share some of the troubles we ran into. Admittedly, many of the issues were due to us using poor coding practices that the ASP.NET MVC team decided to stop supporting, but poor coding practices are rampant, so you may run into a few of these issues as well.

1. JsonResult no longer supports GET requests by default. This is easy to overcome, just by setting JsonRequestBehavior to AllowGet in all of your responses (see the sketch after this list).

2. HtmlHelpers no longer return strings, but return MvcHtmlStrings. For most people, this shouldn't be a problem. But we had one application that made extensive use of HtmlHelpers to get around the lack of templates in MVC 1.0, and all of these needed to be rewritten to return MvcHtmlStrings instead. The main problem is that you can get your app to compile, but if you have any views that reference these helpers and expect them to return strings… expect errors from QA.

3. I'm murky on this part, but there's an extra model binding step that is less forgiving with properties that may throw exceptions. In one of our views we had a model with nullable properties. MVC 1 seems to only bind to the model after it is initialized, but MVC 2 will try to get the properties on the model earlier. If there are properties that must be initialized before the getter is called, expect exceptions to be thrown. GetValueOrDefault() is your friend here (again, see the sketch after this list).

4. MVC 2 is less forgiving of HTML violations in ID names. We had some fields where the id contained pipes or other invalid characters. Most browsers are quite forgiving of this, but the Html.TextBox() method in MVC2 will “helpfully” replace pipes or other invalid characters with underscores. This can be quite a surprise to any javascript referencing these fields.
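To illustrate points 1 and 3, here are a couple of hedged sketches – the action, model, and property names are made up for the example:

// Point 1: explicitly allow GET requests when returning JSON in MVC 2.
public ActionResult Status()
{
    var data = new { status = "ok" };
    return Json(data, JsonRequestBehavior.AllowGet);
}

// Point 3: guard nullable properties that the binder may read before
// the rest of the model has been populated.
public class OrderModel
{
    public int? Quantity { get; set; }
    public decimal UnitPrice { get; set; }

    // Reading Total no longer throws if Quantity hasn't been bound yet.
    public decimal Total
    {
        get { return Quantity.GetValueOrDefault() * UnitPrice; }
    }
}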

Automating SQL reports with sqlcmd and sendemail

The Windows platform certainly has its advantages, but one of the things I miss about working in a *nix environment is the power of the command line. There’s something satisfying about automating ridiculously complex tasks by editing one line in a text file on your system, then forgetting about it.

Case in point. I have a data set that we are building, and every morning I log in, open up SQL Management Studio, execute a query to check on its progress, then email someone else with that report. The SQL Server in question is hosted, doesn't have reporting or business intelligence features, and I don't even have admin access to the box (the shame!). On a Linux box I'd just set up a cron job to pipe the mysql query results to sendmail and be done with it, but how to do this on a Windows box?

There are a few options. Windows doesn't ship with a built-in command line mailer, but there are some open source options. There is a version of the classic sendmail available for Windows, but that involves some setup and editing of INI files, and I don't have the patience for that. There's also a delightful little program called "SendEmail.exe", with no external requirements. Simply download the exe, add it to your PATH, and you can send emails from the command line with ease. In my case, I can now take that daily query, run it via sqlcmd (you should already have sqlcmd if you're doing any SQL dev), and pipe the output to sendemail.exe. You will need to specify the "from" and "to" addresses, and your SMTP server:

sqlcmd -U your_sql_user -P your_sql_pass -S your_sql_server -d your_sql_db -Q your_sql_sproc | sendemail -f auto@home -t your_email_address -s your_smtp_server
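To make it fully hands-off – the Windows equivalent of that cron job – you can drop the command into a batch file and register it with Task Scheduler. A sketch, with a hypothetical path, task name, and run time:

rem daily_report.bat contains the sqlcmd | sendemail one-liner above
schtasks /create /tn "DailySqlReport" /tr "C:\scripts\daily_report.bat" /sc daily /st 07:00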

There is one major flaw to sendemail.exe: messages get truncated at around a page or two (32,000 characters perhaps?), but for quick little queries it gets the job done.

SendEmail is available here: http://caspian.dotconf.net/menu/Software/SendEmail/#tls

How to install Windows 8 with Virtual Box

So… Microsoft announced their preview for Windows 8 tonight, complete with a fully functional "developer preview" available for download.

Download the ISO from MSDN, here: http://msdn.microsoft.com/en-us/windows/home/

If you don't already have VirtualBox installed, go ahead and download and install that. At this point, Windows 8 does not work with VMware.

Once everything is downloaded and installed, go to VirtualBox and click on “New” to create a new virtual machine image.

Windows 8 isn’t an option yet (duh…), so just select “Other” as the OS type.

 

At this point I just went with the defaults. Max out the memory while staying in the "green" zone for your system. For disk size, you'll need at least 7GB; obviously go larger than that if you plan to do anything other than peek at the new OS. If you choose a "dynamically generated" disk size, you won't be able to install the OS – so pick a fixed size now.

Creating this virtual hard disk may take a few minutes. More waiting…

And finally, once the VDI is created and your virtual machine is ready, go to "Settings" and click on "Storage". Select the downloaded ISO file as your CD drive by clicking on the little CD icon all the way to the right:
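If you'd rather script the setup than click through the GUI, roughly the same thing can be done with VBoxManage. This is only a sketch – the VM name, memory, disk size, and file names are placeholders:

VBoxManage createvm --name "Windows8Preview" --ostype Other --register
VBoxManage modifyvm "Windows8Preview" --memory 2048
VBoxManage createhd --filename "Windows8Preview.vdi" --size 20480 --variant Fixed
VBoxManage storagectl "Windows8Preview" --name "SATA" --add sata
VBoxManage storageattach "Windows8Preview" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "Windows8Preview.vdi"
VBoxManage storageattach "Windows8Preview" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium "WindowsDeveloperPreview.iso"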

Believe it or not, at this point you can boot! Click on "Start" to load up the virtual OS. You'll likely get a few prompts about the VM capturing your mouse – just remember that the right Ctrl key frees your mouse. The "Custom" install worked flawlessly for me. Enjoy!


Creating Bit.ly shortened URLs from Windows Phone 7 in C#

You're no doubt familiar with URL shorteners from sites such as Twitter, where they are practically mandatory. For example, my previous post, How to Create Screenshots for Windows Phone 7 Marketplace without a Phone, weighs in at a hefty 109 characters. And that's without any tracking codes that may be mandatory for marketing or affiliate programs. But with the use of any URL shortener (Bit.ly in this case), we're down to a mere 20 characters (http://bit.ly/rc72Ac).

URL shorteners have other advantages as well. If you're working with affiliate programs, typically the only way you get paid is by placing your affiliate code in the URL itself. Affiliate directories will also typically add some tracking variables to the URLs, and you're left with a long, ugly URL that's downright unseemly to include in an email, share on a Facebook wall, or post anywhere else you cannot control the link text being presented. And there is also the chance people will use the URL and just omit your affiliate code, for whatever reason. URL shorteners can help prevent this.

In addition, most provide some excellent tracking and analytics – often in real time and for free. You can view the number of clicks, referrers, country of origin, and even get a QR code if you wish.

Any decent URL shortener will come with an API, and many of these services have C# libraries for interfacing with them directly. I chose  bit.ly for this example, but most services have a very similar API.

Signing up for an account is straightforward, and once you do so you'll automatically have an API key on your "Settings" page. There is already a CodePlex project for a bitly library, but unfortunately it does not work on Windows Phone 7. Most other examples found online use synchronous calls, which are not permitted in WP7 programming (for good reason), so we'll start from scratch.

To make any API call, you'll need to supply your username and your API key. Simply log in to your account and go to http://bitly.com/a/your_api_key. Bitly offers a standard REST-based API; the full documentation can be found by following the "API" link at the bottom of their page. Documentation for the method we'll be looking at to shorten a URL can currently be found here:

/v3/shorten

For a long URL, /v3/shorten encodes a URL and returns a short one.

Parameters

  • format (optional) indicates the requested response format. supported formats: json (default), xml, txt.
  • longUrl is a long URL to be shortened (example: http://betaworks.com/).
  • domain (optional) refers to a preferred domain; either bit.ly, j.mp, or bitly.com, for users who do NOT have a custom short domain set up with bitly. This affects the output value of url. The default for this parameter is the short domain selected by each user in his/her bitly account settings. Passing a specific domain via this parameter will override the default settings for users who do NOT have a custom short domain set up with bitly. For users who have implemented a custom short domain, bitly will always return short links according to the user’s account-level preference.
Two important points: the URL must be URL encoded – no spaces, question marks, or any other odd characters. Also, the format parameter can specify either text, XML, or JSON. Text is the simplest to work with – only the shortened URL is returned. If you're only working with one link to shorten at a time, this is an obvious choice. However, if you'll be sending multiple requests to bitly at one time, or you can't guarantee the return order of your requests, you'll want to use XML or JSON. Both of these return both the shortened URL and the original, so you can match them up if necessary.

For this example, we'll just use text, since in my app we'll never be submitting multiple requests per page. To shorten the URL in bitly, all you need to do is open a web request to the URL specified by the API:
// Build the bit.ly API request. Using format=txt means the response body
// will contain just the shortened URL. BITLY_LOGIN and BITLY_API_KEY are
// your bitly credentials.
string url = string.Format(
    "http://api.bit.ly/v3/shorten?login={0}&apiKey={1}&longUrl={2}&format=txt",
    BITLY_LOGIN, BITLY_API_KEY, HttpUtility.UrlEncode(longUrl));

// WP7 only allows asynchronous web calls, so register a callback and
// kick off the request.
WebClient wc = new WebClient();
wc.OpenReadCompleted += new OpenReadCompletedEventHandler(wc_OpenReadCompleted);
wc.OpenReadAsync(new Uri(url));

Since we specified the text format, the result will contain the URL only:

void wc_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    // With format=txt the response stream contains only the shortened URL.
    Stream stream = e.Result;
    var reader = new StreamReader(stream);
    var shortenedUrl = reader.ReadToEnd();
}

I've wrapped the above methods in a class that you can find on github. To use it, simply call the Shorten method with a callback, like so:

new WP7NetHelpers.BitlyShorten().Shorten(
    HttpUtility.UrlDecode(ProductFeed.Instance.URL),
    this.ShortenCallback);

private void ShortenCallback(string url)
{
    // do something with url
}

Enjoy!

How to Create Screenshots for Windows Phone 7 Marketplace without a Phone

One of the best and worst features of developing for Windows Phone 7, as opposed to the iPhone, is that it's possible to develop and publish a Windows Phone 7 app without ever laying hands on an actual device that can run it. I'm sure this will lead to worse apps in the marketplace, because you should *always* test your apps on an actual device before publishing. But it's nice that you don't have to.

While the Windows Phone 7 emulator is great in many ways, I was annoyed by one thing… the ugly performance status counters that show up in debugging mode:

 

Yes, I know these are terribly useful when trying to work out performance and screen refresh issues (still a problem…), but they do make getting screenshots for the Windows Phone 7 marketplace more difficult. Photoshop them out? Take a screenshot on the phone itself and email it to yourself?

The solution is so simple…

  1. When you’re debugging your app, just click on Stop Debugging (Shift+F5 in VS express for phone). The emulator is still running, with your app installed.
  2. Click the Start button in the emulator
  3. Click the right arrow for the list of apps
  4. Find your app and run it – now without the annoying performance counters.
  5. Use the built-in Snipping Tool (Start > All Programs > Accessories > Snipping Tool) or the graphics program of your choice to grab the screenshot.

 

That’s it! A little cropping or touch-up in Paint.net and you’re ready for the marketplace.


A Quick Tour of Amazon’s Mobile App Developer Program

OK, my mobile app isn't quite ready yet, but this post from the people at AWS caught my attention. One of the main difficulties in developing Android applications is that there's not one app store (not even one draconian one), but several different app stores available. Amazon hopes to fill that void by developing its own app store for any Android device, and while only time will tell if it is successful, given Amazon's track record of quality and market reach, any mobile developer needs a foothold here. If you sign up now, it's free for the first year:

If you are using the SDK to build an Android application, I would like to encourage you to join our new Appstore Developer Program and to submit your application for review. Once your application has been approved and listed, you’ll be able to sell it on Amazon.com before too long (according to the Appstore FAQ, we expect to launch later this year). If you join the program now we’ll waive the $99 annual fee for your first year in the program.

You can list both free and paid applications, and you’ll be paid 70% of the sale price or 20% of the list price, whichever is greater. You will be paid each month as long as you have a balance due of at least $10 for US developers and $100 for international developers. The Amazon Developer Portal will provide you with a number of sales and earnings reports.

The store will provide rich merchandising capabilities. Each product page will be able to display multiple images and videos along with a detailed product description.

Joining the program is simple. If you already have an Amazon.com customer or affiliate account (and who doesn't?), you can simply use that account:

After this, it's about 4-5 confirmations until you're signed up. Is this your name? Agree to the terms of service? Agree to pay us the $99 after your first year? If you charge for apps, what's your bank account info?

By the way… only a $10 minimum payout is very cool…

After that you’re in!

Of course, the rest of the site is incomplete. They do have samples of the submit-an-app page, reports, and account pages that are interesting. It looks like you'll have considerable control over your application's launch cycle – including pre-orders and limited release windows. The reports look basic but adequate for most developers. I do hope they open up an API that lets you get more information on the who/what/where of downloads… but it's a welcome and much needed addition to the Android marketplace.

ActiveReports and POCO data sources

I was recently working on a WinForms project that was pulling data from an RSS feed. Since all of the data was loaded from online and it was a relatively simple addition, there was no need for a local database. The only problem came when I was tasked with using ActiveReports to create a report of this data, since ActiveReports expects things like DataTables and DataSets – you can't have a POCO (Plain Old C# Object) act as a data source directly. Most online tutorials suggested building a fake DataTable from your objects and looping through your collection manually to add fields and rows, which seems somewhat tedious if you're working with a large collection of possible fields. Fortunately ActiveReports supports XML data sources, so if your data is serializable you can just plop it in as a data source, like so:

// Serialize the POCO collection to XML in memory and hand it to
// ActiveReports as an XML data source.
DataDynamics.ActiveReports.DataSources.XMLDataSource ds = new DataDynamics.ActiveReports.DataSources.XMLDataSource();
ds.FileURL = null;
ds.RecordsetPattern = "//Entry"; // or whatever your data is serialized as

var sw = new StringWriter();
var ser = new XmlSerializer(obj.GetType()); // obj is the POCO collection to report on
ser.Serialize(sw, obj);
ds.LoadXML(sw.ToString());
this.DataSource = ds;

Once that's done you can go about data binding your report fields to your POCO data members.
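For context, here's the sort of POCO the snippet above assumes – the Entry class, the obj variable, and the LoadEntriesFromFeed helper are hypothetical stand-ins for your own feed types. Serializing a List<Entry> with XmlSerializer yields <ArrayOfEntry><Entry>…</Entry></ArrayOfEntry>, which is exactly what the "//Entry" RecordsetPattern matches:

public class Entry
{
    public string Title { get; set; }
    public string Link { get; set; }
    public DateTime Published { get; set; }
}

// Elsewhere, before building the report:
List<Entry> obj = LoadEntriesFromFeed(); // hypothetical helper that returns the RSS items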