Tuesday, 2 June 2015

A Backgrid Cell for displaying a link to a model's Url

When creating web applications that use Backbone I've found the Backgrid library useful for rapidly creating a user interface with a lot of functionality. With just a few lines of javascript you can display a table of data that allows sorting and in-row editing, and which updates a backing Backbone collection, thereby hooking directly into the infrastructure of the rest of your web page. And, like all good javascript frameworks, it:
  • has very few dependencies - just Backbone, Underscore (which Backbone requires anyway) and jQuery (which you will almost certainly be using anyway).
  • packs down nice and small - just 25KB.
  • is very extensible...
So I'd like to share a very simple extension that I've created, a Backgrid cell that will render a hyperlink to the model that it is displaying.

/* Cell which displays a link to the model's Url. View can be extended to set:
 *   title - title of the link
 *   target - frame that the link targets, defaults to '_self'
 */
var ModelUri = Backgrid.UriCell.extend({
    render: function () {
        this.$el.empty();
        var rawValue = this.model.get(this.column.get("name"));
        var formattedValue = this.formatter.fromRaw(rawValue, this.model);
        this.$el.append($("<a>", {
            href: this.model.url(),
            title: this.title || formattedValue,
            target: this.target || '_self'
        }).text(formattedValue));
        this.delegateEvents();
        return this;
    }
});

When using this cell the 'name' of the column definition should point at the model attribute that contains the link text. The column definition doesn't need to contain any information about where to load the link target from as it uses model.url() for that. If you look at the source of Backgrid itself you'll see that I've changed very little from the render method of the UriCell cell. A quick commentary on what the code does:
  • this.column.get("name") retrieves the name of the attribute that contains the link text (the 'column' object is, itself, a model).
  • this.model.get(...) then retrieves the value for the link text from the bound model.
  • The call to this.formatter.fromRaw(...) uses standard Backgrid functionality to transform the raw value retrieved from the model using the cell's formatter. My cell uses the default formatter, which simply returns the value that is passed to it, but if you wanted to extend ModelUri further to use a different formatter then you could (see the sketch after this list).
  • The href of the link is set to the value of model.url().
  • The target attribute allows the target frame to be specified, defaulting to '_self'.
  • Finally, the link text is set to the formatted value.
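For example, here is a minimal sketch of such an extension, using a purely illustrative formatter that upper-cases the link text (any object exposing fromRaw and toRaw will do):

var UpperCaseModelUri = ModelUri.extend({
    formatter: {
        // Transform the raw model value into the display text.
        fromRaw: function (rawValue, model) {
            return String(rawValue).toUpperCase();
        },
        // The reverse transformation, required by the formatter contract.
        toRaw: function (formattedValue, model) {
            return formattedValue;
        }
    }
});
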
To use the cell you would need code that looked something like this:

var UserCol = Backbone.Collection.extend({
    model: Backbone.Model,
    url: '/_users',          // model.url() will resolve to '/_users/<id>'
    comparator: 'reference'  // keep the collection (and grid) sorted by the link text
});
var users = new UserCol();
var grid = new Backgrid.Grid({
    collection: users,
    columns: [
        { name: "reference", editable: false, cell: ModelUri, label: "Filename" },
        { name: "processingState", editable: false, cell: 'string', label: "Status" }
    ]
});
$('#gridHolder').append(grid.render().el);

If you wanted the cell to render a link that would open the model url in a new tab/window then the 'target' property of the cell needs to be set to '_blank'. This can be achieved by further extending ModelUri; one way to do this is shown below:


var grid = new Backgrid.Grid({
    collection: users,
    columns: [
        { name: "reference", editable: false, cell: ModelUri.extend({target: '_blank'}), label: "Filename" },
        { name: "processingState", editable: false, cell: 'string', label: "Status" }
    ]
});
$('#gridHolder').append(grid.render().el);


Hope you find this useful!

Monday, 1 June 2015

'add' and 'reset' events from Backbone Collections

In one of my current projects I'm using Backbone to lazy load and render data in a hierarchical structure (a treeview in this case). The data that I'm displaying is returned from the backend in database order, so I set the comparator property on the Backbone Collection to ensure a more friendly sort order. And this is where I encountered a slight 'gotcha' (it wouldn't be fair to call it a bug) which I thought worth documenting in case it trips anyone else up.

I've created a JS Bin to demonstrate the gotcha which I'll talk you through here:

1. Create a basic HTML page to hold everything


<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="description" content="JS Bin to demonstrate the fact that Backbone collections that have a comparator to ensure a certain sort order raise 'add' events in the unsorted order when multiple models are added via the 'Collection.set' method">
  <title>JS Bin</title>
  <script src="https://code.jquery.com/jquery-2.1.1.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.8.3/underscore-min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/backbone.js/1.1.2/backbone-min.js"></script>
</head>
<body>
  <h1>List appears below</h1>
  <ol>
  </ol>
</body>
</html>

2. Define a basic Collection and View


var ExampleCollection = Backbone.Collection.extend({
      comparator: 'title',
      model: Backbone.Model
    }),
    ExampleView = Backbone.View.extend({
      tagName: 'li',
      render: function() {
        this.$el.text(this.model.get('title'));
        return this;
      }
    });

I haven't defined a Model as I'm going to use the base Backbone.Model function to create my model objects. As you can see above:

  • ExampleCollection objects will sort their models by their 'title' attribute.
  • ExampleView objects will render a list item that will contain the 'title' attribute of their model. Note that the render function is not responsible for adding the view to the DOM (this is a fairly common approach in Backbone).

3. Create an ExampleCollection object and attach event handlers 

var col = new ExampleCollection(),
  $container = $('ol');

col.on('add', function(model, collection) {
  $container.append(new ExampleView({model: model, collection: collection}).render().$el);
});
col.on('reset', function(collection) {
  collection.each(function(model) {
    $container.append(new ExampleView({model: model, collection: collection}).render().$el);
  });
});

As you can see, the two event handlers attached to 'add' and 'reset' between them ensure that whenever a model is added to the collection (whether by a 'set' or by a 'reset') an ExampleView object is created and appended to the only <ol> on the page.

4. Add some models to our collection and watch what happens!


col.set([new Backbone.Model({
  id: 1, title: 'Zebra'
}), new Backbone.Model({
  id: 2, title: 'Adder'
}), new Backbone.Model({
  id: 3, title: 'Porcupine'
})], {sort: true});

As we are passing {sort: true} to the set method you might expect that we will see a sorted list of models on screen, but what we actually see is this:

List appears below

  1. Zebra
  2. Adder
  3. Porcupine

Changing the line that populates the collection from col.set to col.reset results in what we expected to see first time; the models are now correctly sorted:

List appears below

  1. Adder
  2. Porcupine
  3. Zebra

Why is this? The answer is that when multiple models are added to the collection in the first example (using col.set([model1, model2], {sort: true})) the collection itself is sorted correctly (you can verify this by debugging if you want), but the 'add' events are raised in the original ordering of the models, not the sorted ordering. When multiple models are added using a 'reset', the 'reset' event is passed the full, sorted collection, and so we add the views to the HTML in sort order.

You could grumble about this but I think such complaints are misplaced for two reasons:

  1. If you're populating an empty collection using multiple models then 'reset' is really the correct method to use rather than 'set'.
  2. The 'set' method is extremely versatile; it can be used to merge models, add models or to both merge and add models as part of a single operation; it can be called on sorted or unsorted collections; it can be used to populate an empty collection or make updates to a collection that is already populated. When 'set' is used in some of these scenarios it would not be meaningful or a worthwhile use of CPU time to raise the 'add' events in the sort order.
So that's why I think that this behaviour is a 'gotcha' rather than a bug. Something else worth noting is that this means that when populating collections by calling 'fetch' you need to use {reset: true} in the options to ensure that the 'reset' method is called and a 'reset' event raised.
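
A minimal sketch of this, assuming the collection has a url to fetch from, is simply:

col.fetch({reset: true}); // repopulates via a single 'reset' event, so views render in sorted order

Without {reset: true}, fetch populates the collection via 'set', and you are back to per-model 'add' events raised in server order.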

Wednesday, 20 May 2015

(Trying to) use ADAL from Mono

For a recent project I needed to consume a Web API secured using Azure Active Directory from a Linux file server. I had an existing Windows program that did the same thing so I decided to try running it under Mono, the Linux port of .NET.


I hadn't used Mono before and was pleasantly surprised to discover that most things just worked (except for a few quirks with HttpWebRequest which I will describe in another post).  The only problem came when I tried to call Web API methods secured with Azure Active Directory. In the Windows program I was using the excellent Active Directory Authentication Library for .NET (ADAL). This wasn't working in the Mono version and the "Break on Exception" functionality in MonoDevelop isn't great so I created a simple test app for this:


using System;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using System.Net;

namespace BearerAuthTest
{
    class MainClass
    {
        public const string DownloadTarget = "https://yourapp.com/";
        public const string Authority = "https://login.windows.net/common";
        public const string ClientId = "e74f20b9-d3b3-4b68-8a54-088ea85b55a8";
        public const string ServerApplicationIdentifier = "https://your-azure-ad.com/your-server-app";
        public const string ClientRedirectUri = "https://your-azure-ad.com/your-client-app";

        public static void Main (string[] args)
        {
            AuthenticationContext context = new AuthenticationContext (Authority);
            AuthenticationResult result = context.AcquireToken (
                ServerApplicationIdentifier,
                ClientId,
                new Uri (ClientRedirectUri)
            );
            WebClient webClient = new WebClient ();
            string header = result.CreateAuthorizationHeader ();
            Console.WriteLine ("Header={0}", header);
            webClient.Headers.Add (HttpRequestHeader.Authorization, header);
            string downloaded = webClient.DownloadString (DownloadTarget);
            Console.WriteLine ("Downloaded:\n\n {0}", downloaded);
            Console.ReadLine ();
        }
    }
}

I've edited the constants to remove the references to my own application and Azure AD directory, but I'll just explain what they are:

  1. DownloadTarget - this is the Url in our Web API that we are attempting to do a GET from.
  2. Authority - Url of the Azure AD multi tenant app login screen.
  3. ClientId - Guid of the native application; we must have set this up in Azure AD and granted it access to our Web API.
  4. ServerApplicationIdentifier - Uri identifier of the Azure AD application that the Web API uses for authentication.
  5. ClientRedirectUri - Redirect Uri of the native application.
Just to make that clear, there are two Azure AD applications involved here; the Web Application that hosts the Web API, whose configuration will look like this:


And the native application that is attempting to connect to the Web API, the configuration of which looks like this:


When I run this in MonoDevelop on my Ubuntu dev machine I get this exception when I hit the AcquireToken call:


An exception occurs when attempting to create a WindowsFormsWebAuthenticationDialog object. If I had to guess I would say this object is probably trying to raise a dialogue box to attempt an authentication operation with a web endpoint (see, if you make the type name long enough you don't need documentation). Getting out ILSpy I can see that the Type Initialiser of the base class of WindowsFormsWebAuthenticationDialog is calling a native method in IEFrame.dll - obviously that's not going to be there on a Linux machine, hence the exception.

I didn't give up hope yet though, because I'd read in a post on Vittorio Bertocci's blog that it is possible to request an Azure AD authentication token without raising the authentication dialogue, passing the username and password in the request. So I replaced the AcquireToken call in my code above with this:


AuthenticationResult result = context.AcquireToken (ServerApplicationIdentifier, ClientId,
 new UserCredential ("user1@your-azure-ad.com", "YourPassword"));

And this worked! I thought I'd cracked it, so I updated my code and deployed it, only to find that as soon as I tried to use it I got an "Operation is not Supported" error. After a bit of head scratching I worked out that the only significant difference was that I was trying to log in with a user from a different Azure AD directory: my live directory rather than my test one. And this turns out to be significant because:
  1. The native "Your Client App" app and the test user are both from the "your-azure-ad.com" directory, which means that the users from this domain do not need to grant consent to be able to use the app.
  2. The native app and the live user are from different directories, so when a token is requested using the AcquireToken method the user does need to give their consent.
  3. Re-reading Vittorio's post on the AcquireToken overload that allows you to pass a username and password in the request he says (in the Constraints and Limitations section):
"Users do not have any opportunity of providing consent if username & password are passed directly."
So I'm officially stuffed. Multi-tenant apps will always need to establish consent and that can only be done by showing the UI, which depends on native Windows components.

I'm not without hope that this might work in the future; the pre-release of ADAL 3 includes support for raising the authentication dialogue on Android and iOS platforms, so maybe this will work on other Mono platforms eventually.

In the meantime for my app I have had to pursue other options for authenticating from Mono. I am currently using Client Certificates which seems to be working OK, although as there doesn't seem to be anyone who supplies these commercially I'm generating them myself.
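
For reference, a minimal sketch of what using such a certificate looks like (the Url, certificate path and password are placeholders, not my production values):

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

HttpWebRequest request = (HttpWebRequest)WebRequest.Create ("https://yourapp.com/");
// Attach the self-generated client certificate to the outgoing request.
request.ClientCertificates.Add (new X509Certificate2 ("/etc/myapp/client.pfx", "CertPassword"));
using (WebResponse response = request.GetResponse ())
using (StreamReader reader = new StreamReader (response.GetResponseStream ()))
{
    Console.WriteLine (reader.ReadToEnd ());
}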

Monday, 11 May 2015

NuGet Package Restore and Unit Test filtering in TFS Build Online

In a previous post I blogged about how to set up a project so that the NuGet packages aren't added to source control. This post is a sort-of sequel to that one, looking at basic steps in getting the same project building in TFS Build and having its unit tests run by TFS Build. In my case I'm using TFS Build Online, but the steps should be pretty similar in an on premise installation of TFS Build.

We're aiming to end up with this:



This is what we do:

1. Create a new Build Definition using an appropriate build process file

New Build Definitions are created in Visual Studio 2013 in Team Explorer, Builds, click on the "New Build Definition" link.

You are shown a tabbed properties sheet. Most of the tabs are fairly self explanatory but the most complicated one is the "Process" tab. The "Build process file" which you select at the top, controls which "Build process parameters" you are shown at the bottom.

The key here is that we need to select a build process file that will restore NuGet packages before building as we are not checking our NuGet packages into TFS. The default build process file in my version of TFS is "DefaultTemplate.11.1.xaml"; this will not restore NuGet packages so it's no good for our purposes. In TFS Build Online there is a build process file called "TfvcTemplate.12.xaml" available which does restore NuGet packages before build, so I use this.

(In the end the build process file is simply a WF 4 workflow; if you want to you can customise it and even add WF activities that you have defined yourself)

2. Specify the solution(s) to build and unit test filters

This is where we set the Build process parameters:



In the "2.Build" section set the solutions to build as shown above.

Now, the test filters. You may not want to use test filters of course; you may want to run all your unit tests. In my case I have an Azure Cloud Services application that uses Azure table storage for most of the backing storage. I therefore have (roughly) three kinds of unit tests:

  1. Tests that test the repository layer and therefore need a running Azure Storage Emulator to work.
  2. Tests that have some kind of dependency upon the environment being configured in a certain way (in production this is achieved by using Cloud Services Startup tasks).
  3. Tests that use a mock implementation for the repository layer and do not have any dependency on the environment other than using file system directories available through the TestContext object.

Tests of types (1) and (2) will always fail if run by TFS Build; how do I tell TFS Build that I only want to run tests of type (3)?  Answer: using the Test case filter property in my build process parameters (if you're using a different build process file the property may be called something different, but it will almost certainly be there somewhere). I indicate the tests that are of type (1) or (2) by decorating them with TestCategory attributes, as shown below:


        [TestMethod, TestCategory("RequiresStorage")]
        public void CreateAndRetrieveUser()

I give tests of type (1) a TestCategory of "RequiresStorage" and tests of type (2) a TestCategory of "RequiresInfrastructure". Then I set the Test case filter to:


TestCategory!=RequiresStorage&TestCategory!=RequiresInfrastructure

as shown above.  That's all!

Tuesday, 5 May 2015

Keeping NuGet packages out of TFS

When you start working with Visual Studio 2013 and TFS, by default Visual Studio adds your NuGet packages to TFS Source Control. This works OK but it's a bit backward; the whole point of having NuGet is that it acts as the repository for these packages so your source control system doesn't have to. And for one of my projects packages\ now contains 163 MB which is a lot of lard for source control to handle. So I decided to try and find out how to exclude the packages from TFS and keep them out.

I've read a few posts on how to do this, so what follows is a step-by-step process for creating a new solution and keeping (almost) all of the packages out of TFS.

1. Create a .tfignore file

In the TFS folder that is the parent folder of all my projects I create a ".tfignore" file containing the following:


*.user
*.suo
bin
obj
packages

This instructs TFS to ignore any folder called "packages". Add the file to source control so that all your projects on all your dev machines get the benefit of this.

You would have thought that that is it, but it's not...

2. Create my new project by initially creating an empty solution file

The option to create a Blank Solution is in the "Other Project Types" section:


Close Visual Studio (if you don't do this, it won't work).

3. Add a "nuget.config" in a ".nuget" folder within the new solution

Create a ".nuget" folder within the solution folder and add a "nuget.config" file to it that contains the following:


<configuration>
    <solution>
        <add key="disableSourceControlIntegration" value="true" />
    </solution>
</configuration>

This is needed to stop NuGet.exe from trying to add packages to source control (it ignores the .tfignore file, a known issue). We want to do this before we add any projects so that NuGet uses our new settings for the packages in the initial project.

4. NOW add your projects to the solution...

In my case I added an MVC project and a Tests project.  I'm hoping that in the Pending Changes window I should see NOTHING being added from the "packages" folder even though my MVC project uses loads of packages.

And this is what I see:


Almost nothing. I can't seem to stop NuGet / TFS (whichever one of them is doing it) from adding the repositories.config to source control. Fortunately this file is only about 200 bytes, as opposed to the 163 MB I was putting in source control before.

If you use Git rather than TFS for source control then the only difference to this process would be that you would need to create a ".gitignore" instead of a ".tfignore" file.
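
For example, a .gitignore equivalent to the .tfignore from step 1 would contain:

*.user
*.suo
bin/
obj/
packages/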



Thursday, 23 April 2015

Packaging Complex Azure Cloud Services Projects

I've been working on an Azure Cloud Services Project for which the packaging process is a little complicated and thought it worth sharing how I've made this work. Although there's only a single Web Role and no Worker Roles, the Web Role:

  • Has two websites,
  • Hosts a Click Once project,
  • Hosts a downloadable zip file containing some PowerShell extensions.
The Click Once project and the zip file are rebuilt each time the package is built. I'm only going to talk in this post about how the packaging occurs, that is, the process by which I create a .cspkg and .cscfg file; the deployment can be carried out using your favoured method (Visual Studio, TFS Build, PowerShell script) once you have created the package.

The websites are developed within the single Visual Studio 2013 solution MyApp.Server.sln and the Click Once and command line tools within another solution MyApp.Client.sln. These are located in the same folder in the file system / TFS as shown below:


The MyApp.Common project contains common functionality that is shared by all the client and server projects.

Extending the Packaging Process

All the processes that I know of that deploy Azure Cloud Services packages use MsBuild.exe to build the package itself. It would therefore be best if whatever we do to ensure that all the components of our Web Role end up in the package is something that will be triggered by MsBuild. We could try and do this the hard way by creating MsBuild tasks and customising the build process, but I went for the easy way: setting Pre-Build commands on the Cloud Services project.

Deploying Two Websites to a Single Web Role

After I've created a Web  Role for the first Web Project, MyApp.WebSite1, the ServiceDefinition.csdef file for my Cloud Service looks like this:


<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyApp.CloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-06.2.4">
  <WebRole name="MyApp.WebSite1" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Main" endpointName="WebSite1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="WebSite1" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>

To add my second Web Project to this role I need to manually edit this file, adding the following:


<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyApp.CloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-06.2.4">
  <WebRole name="MyApp.WebSite1" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Main" endpointName="WebSite1" />
        </Bindings>
      </Site>
      <Site name="Web2" physicalDirectory=".\..\..\..\MyApp.WebSite2">
        <Bindings>
          <Binding name="Main" endpointName="WebSite2" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="WebSite1" protocol="http" port="80" />
      <InputEndpoint name="WebSite2" protocol="http" port="82" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>

(The \..\..\..\ in the "physicalDirectory" attribute is needed because the current directory at this point is "MyApp.CloudService\csx\Debug")

If I press F5 my two websites now fire up correctly in the Azure emulator.  But let's build a package to deploy to Azure and see what actually gets included. If I right click on the MyApp.CloudService project and select the "Package" option then the following is built:


Rename the .cspkg to a .zip, look inside it and we find:


Extract the .cssx file, rename it to a .zip, look inside it and we find (if we drill down):


You can see the problem here: although the site has been included in the package, all the source code files and other development artifacts are included instead of just the binaries, config and content. Worse still:

  1. The project is not built when we package our MyApp.CloudService project as it is not a dependency, so the build that we deploy is the previous build; this might be Debug rather than Release, and might have out of date binaries that do not include the latest changes.
  2. The web.config transforms are not applied, so the web.config that we deploy to production contains all our Debug settings.
So how can we fix the problem?  What I need is to deploy a copy of MyApp.WebSite2 that has been built using the correct configuration and then had the development artifacts removed and the web.config transforms applied.  As this is exactly what the "Publish" process does I will use a Pre-Build command on the Cloud Services project to publish MyApp.WebSite2 to a temporary directory and then point the .csdef at that temporary directory instead of the source directory. Before we can do this we need to set up a Publish Profile for MyApp.WebSite2 as shown below:



Note that we choose "Custom" not "Microsoft Azure Websites" because by "publish" here what we really mean is "build using my chosen configuration and apply the web.config transformations of my chosen configuration, copying over only the binaries, content and configuration".

Step 2 of the wizard:


At this step I enter the path to a temporary folder that is a sibling of the MyApp.WebSite2 project folder. The remaining steps of the wizard can be clicked through accepting the default settings. Next I edit the .csdef file to point to this temporary folder instead of the source folder:


<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyApp.CloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-06.2.4">
  <WebRole name="MyApp.WebSite1" vmsize="Small">
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Main" endpointName="WebSite1" />
        </Bindings>
      </Site>
      <Site name="Web2" physicalDirectory=".\..\..\..\MyApp.WebSite2.TempPublish">
        <Bindings>
          <Binding name="Main" endpointName="WebSite2" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="WebSite1" protocol="http" port="80" />
      <InputEndpoint name="WebSite2" protocol="http" port="82" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>

And add the following to the Pre-Build commands of the MyApp.CloudService project:


"%ProgramFiles(x86)%\MSBuild\12.0\Bin\msbuild.exe" "$(ProjectDir)..\MyApp.WebSite2\MyApp.WebSite2.csproj" /p:DeployOnBuild=true /p:PublishProfile="Pre Azure Publish" /p:VisualStudioVersion=12.0 /p:Configuration=$(ConfigurationName)

Most of the parameters are as you would expect, the only surprise is the "/p:VisualStudioVersion=12.0"; see this blog post on msbuild for the reason why this is needed.

Hit F5 and it starts up in the Azure emulator without any problems. Build a package, drill down into the .cspkg file and this time we find:



All the development artifacts have been removed, and if we look at the contents of the web.config file the transformations have been applied.

Building the Click Once Project

Before thinking about how we can ensure that the Click Once project is correctly deployed to Azure I need to give you some more details of how it is set up. The MyApp.ClickOnce project is configured to Publish to a folder within the MyApp.WebSite1 project using the Publish wizard as shown below:


In the next step we need to enter the Url that users will download the app from on our Cloud Services website:


In the next step I select whether the app can be launched independently of the website:


After the wizard has run and published the application I open the Properties of the MyApp.ClickOnce project and suppress the creation of the "publish.htm" file:


Finally I clear the option to auto-increment the version on each publish operation and ensure that the version number is set to 1.0.0.1:



I click "Publish Now" to carry out another publish operation with the new settings. To ensure that the app is deployed along with MyApp.WebSite1 I need to include it in the project and ensure that all the files are marked as "Content":



I also at this point ensure that these files are not source controlled by undoing the "add" operations in Team Explorer, Pending Changes. Note that Visual Studio is quite happy for files to be part of the project without being in TFS.

With this publishing set up the most recent build of MyApp.ClickOnce will be included in the Cloud Services package when it is built. To ensure that this build is up to date we need... another Pre Build command! I add the following to the Pre Build commands of MyApp.CloudService:


"%ProgramFiles(x86)%\MSBuild\12.0\Bin\msbuild.exe" "$(ProjectDir)..\MyApp.ClickOnce\MyApp.ClickOnce.csproj" /p:DeployOnBuild=true /p:VisualStudioVersion=12.0 /p:Configuration=$(ConfigurationName) /p:Platform=AnyCPU /target:publish /p:InstallUrl=http://www.mydomain.com/downloads/ClientApp/ "/p:PublishDir=$(ProjectDir)..\MyApp.WebSite1\downloads\ClientApp\\"

A few important things to note about this command:

  1. The "InstallUrl" and "PublishDir" properties need to be set via the command line even though we have already specified them when setting up publishing for MyApp.ClickOnce.
  2. The double trailing slash on the PublishDir is needed!
  3. If $(ProjectDir) contains any spaces (which will normally be the case as the default location for projects is "%USERPROFILE%\Documents\Visual Studio 2013\Projects") then this will not work; you will get an error message from msbuild that says

    MSBUILD : error MSB1008: Only one project can be specified.

    The only solution I found to this was to replace "$(ProjectDir).." in the PublishDir with the literal path using short folder names, like this:

"%ProgramFiles(x86)%\MSBuild\12.0\Bin\msbuild.exe" "$(ProjectDir)..\MyApp.ClickOnce\MyApp.ClickOnce.csproj" /p:DeployOnBuild=true /p:VisualStudioVersion=12.0 /p:Configuration=$(ConfigurationName) /p:Platform=AnyCPU /target:publish /p:InstallUrl=http://www.mydomain.com/downloads/ClientApp/ "/p:PublishDir=C:\Users\Alex\Documents\Visual~1\Projects\MyApp\MyApp.WebSite1\downloads\ClientApp\\"

It's a shame I have to do this, but at least it works! Every time I want to increment the version number of the Click Once app I have to ensure that I include the build outputs from the new version in the MyApp.WebSite1 project (and I'll probably want to exclude the outputs from the old version).

Zip File containing PowerShell Extensions

Outputs of my MyApp.PowerShell project are:


I want to end up with these outputs (less the .pdb files) in a zip file in the MyApp.WebSite1 project like this:


What's the best way to do this? You can't throw a brick in Linux without hitting a command line zip tool, but in Windows they're a bit less abundant, so I decided to use PowerShell, and added the following to the MyApp.CloudService Pre Build command line to build and zip the MyApp.PowerShell project:


"%ProgramFiles(x86)%\MSBuild\12.0\Bin\msbuild.exe" "$(ProjectDir)..\MyApp.PowerShell\MyApp.PowerShell.csproj" /p:VisualStudioVersion=12.0 /p:Configuration=$(ConfigurationName) /target:build
powershell -Command "[void] [System.Reflection.Assembly]::LoadFrom('$(ProjectDir)..\packages\SharpZipLib.0.86.0\lib\20\ICSharpCode.SharpZipLib.dll');$zip = [ICSharpCode.SharpZipLib.Zip.ZipFile]::Create('$(ProjectDir)..\MyApp.WebSite1\downloads\MyApp.PowerShell.zip');$zip.BeginUpdate();ls '$(ProjectDir)..\MyApp.PowerShell\bin\$(ConfigurationName)' -exclude *.pdb,*.xml | Foreach-Object {$zip.Add($_.FullName, $_.Name)};$zip.CommitUpdate();"

The second line isn't very readable in that format so here it is split on to multiple lines:


[void][System.Reflection.Assembly]::LoadFrom('$(ProjectDir)..\packages\SharpZipLib.0.86.0\lib\20\ICSharpCode.SharpZipLib.dll');
$zip = [ICSharpCode.SharpZipLib.Zip.ZipFile]::Create('$(ProjectDir)..\MyApp.WebSite1\downloads\MyApp.PowerShell.zip');
$zip.BeginUpdate();
ls '$(ProjectDir)..\MyApp.PowerShell\bin\$(ConfigurationName)' -exclude *.pdb,*.xml | Foreach-Object {$zip.Add($_.FullName, $_.Name)};
$zip.CommitUpdate();

For this to work you need to add the SharpZipLib package to one of the projects in the solution. You can use Microsoft's System.IO.Compression.FileSystem.dll assembly but this only allows you to zip whole folders so we would not be able to exclude the .pdb files.
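
For comparison, a sketch of what the whole-folder approach would look like (it zips everything in the output folder, .pdb files included, which is why I didn't use it):

Add-Type -AssemblyName System.IO.Compression.FileSystem;
[System.IO.Compression.ZipFile]::CreateFromDirectory('$(ProjectDir)..\MyApp.PowerShell\bin\$(ConfigurationName)', '$(ProjectDir)..\MyApp.WebSite1\downloads\MyApp.PowerShell.zip');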

Conclusion

Pre Build events are very useful for extending the Azure packaging process, or for extending any other Visual Studio publishing process.

The only final note to add is that if you wanted your project to start up slightly faster when debugging you could exclude some of the Pre Build commands from a Debug build by prefixing them with:


if /i "$(ConfigurationName)" == "Release" 

Enjoy!

Monday, 20 April 2015

How Backbone.Model.isNew works

I'm using Backbone in one of my current projects for a rich and responsive UI. When creating a new data item client side, I sometimes need to do a server call to get the data for the new model, because setting the defaults in the new model requires logic that is only found on the server. But (importantly) the model is still "new" at this point; it hasn't been persisted to the server. So two server round trips are required to create the object: one to get the defaults and a second to commit the new model.

It would be possible to create the model on the first call to the server and then update it on the second call, but that sets up a slightly different workflow in several ways:

  1. If the user wants to abandon the creation in step (3) we need to delete the model, or inform the user that they need to do this.
  2. If operations are being audited this would be audited as a create and an update rather than simply a create.
  3. This wouldn't work at all if some values can only be set at creation time.
So I have gone with the two stage "get defaults, commit" process described above; a sketch of it appears below.
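
A minimal sketch of that flow (the '/api/items/defaults' endpoint is a placeholder, and MyModel is a model class like the one defined at the end of this post):

var item = new MyModel();

// Round trip 1: ask the server to compute the defaults for a new item.
$.getJSON('/api/items/defaults').done(function (defaults) {
    item.set(item.parse(defaults));
    // ... the user edits the model through the UI ...
    // Round trip 2: commit the new model. For save() to perform a
    // 'create' (POST), isNew() must still return true at this point.
    item.save();
});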

This leads to a problem: when I parse the model returned from the server at step 2 as a Backbone Model, I find that Model.isNew() returns false. This means that when I commit the model through a Model.save() an "update" operation occurs instead of a "create". Why is this? Let's look at the definition of isNew from backbone.js:


    // A model is new if it has never been saved to the server, and lacks an id.
    isNew: function() {
      return !this.has(this.idAttribute);
    },

My model has the default value for idAttribute, "id". The id of my model on the server is a .NET Guid, so when the server model is serialised to be returned in step 2 it will be serialised as:


{id: "00000000-0000-0000-0000-000000000000", title: null, anotherAtt: "Clever server logic set this"}

As this model does have a value for "id", Model.has("id") will return true and Model.isNew() will return false.
Help is at hand though: when I define my model I can simply override the isNew() function:


var MyModel = Backbone.Model.extend({
    isNew: function() {
        return !this.has(this.idAttribute) || this.id === '00000000-0000-0000-0000-000000000000';
    }
});

Or alternatively, if I'm going to use this behaviour throughout my application, I can override isNew on the Model prototype:


    Backbone.Model.prototype.isNew = function () {
        // New if the model has no id at all, or if its id is the default
        // (empty) Guid that the server serialises for unsaved models.
        return !this.has(this.idAttribute) ||
            this.attributes[this.idAttribute] === '00000000-0000-0000-0000-000000000000';
    };

Either way, Backbone will now correctly distinguish between models that have been persisted and those that have not.
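
As a quick check, using the MyModel definition above (the attribute values here are illustrative):

var m = new MyModel({ id: '00000000-0000-0000-0000-000000000000', title: null });
console.log(m.isNew()); // true - save() will perform a 'create' (POST)

m.set('id', '9b1dce31-5fc8-4a27-8d4e-6a2e5f3b7c01');
console.log(m.isNew()); // false - save() would perform an 'update' (PUT)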