Thursday, 28 April 2016

Error WAT200: No default service configuration "ServiceConfiguration.cscfg" could be found in the project.


When deploying a cloud service / web role you may experience the following message:

2016-04-27T08:16:41.2709354Z ##[error]C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\Windows Azure Tools\2.9\Microsoft.WindowsAzure.targets(373,5): Error WAT200: No default service configuration "ServiceConfiguration.cscfg" could be found in the project.

On your VSTS build screen:

It basically means the build is looking for a default .cscfg file, as we haven't specified an alternative.

In my build I went and set the Target Profile to match the build configuration, and this fixed the issue.

So it will be looking for whichever of the files shown below matches the current BuildConfiguration - for example, ServiceConfiguration.Release.cscfg for a Release build.
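If you prefer to set it explicitly, the same thing can be passed straight to MSBuild as an argument on the build step; TargetProfile selects which ServiceConfiguration.<profile>.cscfg gets used (the .ccproj name here is hypothetical):

msbuild MyCloudService.ccproj /t:Publish /p:TargetProfile=Release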

And after those changes everything builds perfectly:


Wednesday, 27 April 2016

Moving nuget package folder to a different location (1 level higher)


Note: thanks to this post by sebastian belczyk for some help:

This morning I needed to relocate my NuGet packages folder one level higher to allow an offshore team to work in parallel with us. Basically, the original solution was cloaked and branched. In the original solution we realised we had two places for NuGet packages, so we had to merge them into one. This meant that the branched solution needed to match the original, otherwise we would have lots of reintegration issues.

Firstly, I needed to update my nuget.config file. We had our packages restoring to ../packages/, but we really needed them here:
../../packages/ - up another level.
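In nuget.config this is the repositoryPath setting, so ours ended up looking roughly like this:

<configuration>
  <config>
    <add key="repositoryPath" value="../../packages" />
  </config>
</configuration>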

Then I needed to close the solution and reopen it for this change to take effect. I found this out because changing the file without restarting meant that nothing changed and my packages were still restoring to the old ../packages/ directory.

Then right-click and choose Manage NuGet Packages for Solution ...

Then Visual Studio will detect that there is no nuget packages folder on the file system at the location specified:

It will load them when you click restore:

So in summary, I had this file structure:
-- Project
---- packages Folder

And now I have this structure:
-- packages Folder
-- Project


Thursday, 7 April 2016

Move your IIS App pools from one machine to another


Today I got a new machine and didn't want to have to set my IIS up again.

So I ran the following from a command prompt run as administrator:

%windir%\system32\inetsrv\appcmd list apppool /config /xml > c:\apppools.xml

This created an XML file with all my app pools in it. I deleted the 6 nodes that held the default IIS app pools (see the sketch after this list):

  • DefaultAppPool
  • Classic .NET AppPool
  • .NET v2.0 Classic
  • .NET v2.0
  • .NET v4.5 Classic
  • .NET v4.5
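For reference, each pool shows up as an APPPOOL node in the exported file, roughly like this (attributes trimmed, and the exact shape may vary by IIS version):

<appcmd>
  <APPPOOL APPPOOL.NAME="DefaultAppPool" PipelineMode="Integrated" RuntimeVersion="v4.0" state="Started">
    <!-- configuration for the pool -->
  </APPPOOL>
</appcmd>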
Then I went to my new machine after copying over the edited apppools.xml file and ran this command:

%windir%\system32\inetsrv\appcmd add apppool /in < c:\apppools.xml

And all my app pools were added:

To export all my sites I ran this:
%windir%\system32\inetsrv\appcmd list site /config /xml > c:\sites.xml
It exported all my sites to an xml file. I edited this and removed the default web site as it was already present on the destination machine.

Then I ran this on the destination machine:

 %windir%\system32\inetsrv\appcmd add site /in < c:\sites.xml

And it imported all my sites into IIS:

That is all,


Fix already installed nuget packages


Sometimes I create a branch and for some reason one or two projects' packages, and hence DLLs, are wrong. I even try using "Restore NuGet Packages" at the solution level from the context menu, but this doesn't always fix it.

The best fix I have found is this:
Update-Package PackageName -ProjectName MyProject -reinstall

So if you have Entity Framework 6.1.2 installed and it got broken in a branch then you could run:

Update-Package EntityFramework -ProjectName MyProject -reinstall

And it will reinstall EF 6.1.2

That is all,


Friday, 29 January 2016

Service Fabric Reliable Actors and Reliable Services


Note: This is a note to self.  


Actors are isolated, single-threaded components that encapsulate both state and behavior. They are similar to .NET objects, so they provide a natural programming model. Every actor is an instance of an actor type, similar to the way a .NET object is an instance of a .NET type. For example, an actor type may implement the functionality of a calculator, and many actors of that type could be distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.

Stateless actors

Stateless actors, which are derived from the StatelessActor base class, do not have any state that is managed by the Actors runtime. Their member variables are preserved throughout their in-memory lifetime, just as with any other .NET type. However, when they are garbage-collected after a period of inactivity, their state is lost. Similarly, the state can be lost due to failovers, which can occur during upgrades or resource-balancing operations, or as the result of failures in the actor process or its hosting node.
The following is an example of a stateless actor:
class HelloActor : StatelessActor, IHello
{
    public Task<string> SayHello(string greeting)
    {
        return Task.FromResult("You said: '" + greeting + "', I say: Hello Actors!");
    }
}
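For context, a client reaches an actor through a proxy, along these lines (the application name fabric:/HelloWorldActorApp is illustrative):

var helloActor = ActorProxy.Create<IHello>(ActorId.NewId(), "fabric:/HelloWorldActorApp");
string reply = await helloActor.SayHello("Good morning!");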

Stateful actors

Stateful actors have state that needs to be preserved across garbage collections and failovers. They derive from StatefulActor<TState>, where TState is the type of the state that needs to be preserved. The state can be accessed in the actor methods via the State property on the base class.
The following is an example of a stateful actor accessing the state:
class VoicemailBoxActor : StatefulActor<VoicemailBox>, IVoicemailBoxActor
{
    public Task<List<Voicemail>> GetMessagesAsync()
    {
        return Task.FromResult(State.MessageList);
    }
}

Actor state is preserved across garbage collections and failovers because it is persisted on disk and replicated across multiple nodes in the cluster. This means that, as with method arguments and return values, the actor state's type must be data contract serializable.

Actor state providers

The storage and retrieval of the state are provided by an actor state provider. State providers can be configured per actor, or for all actors within an assembly, by a state-provider-specific attribute. When an actor is activated, its state is loaded into memory. When an actor method finishes, the Actors runtime automatically saves the modified state by calling a method on the state provider. If a failure occurs during the save operation, the Actors runtime creates a new actor instance and loads the last consistent state from the state provider.
By default, stateful actors use the key-value store actor state provider, which is built on the distributed key-value store provided by the Service Fabric platform. For more information, see the topic on state provider choices.

Reliable Services

Reliable Services gives you a simple, powerful, top-level programming model to help you express what is important to your application. With the Reliable Services programming model, you get:

  • For stateful services, the Reliable Services programming model allows you to consistently and reliably store your state right inside your service by using Reliable Collections. This is a simple set of highly available collection classes that will be familiar to anyone who has used C# collections. Traditionally, services needed external systems for reliable state management. With Reliable Collections, you can store your state next to your compute with the same high availability and reliability you've come to expect from highly available external stores, and with the additional latency improvements that co-locating the compute and state provide (see the sketch after this list).
  • A simple model for running your own code that looks like programming models you are used to. Your code has a well-defined entry point and easily managed lifecycle.
  • A pluggable communication model. Use the transport of your choice, such as HTTP with Web API, WebSockets, custom TCP protocols, etc. Reliable Services provide some great out-of-the-box options you can use, or you can provide your own.
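To make the Reliable Collections point concrete, here is a minimal stateful service sketch, assuming the Microsoft.ServiceFabric.Services packages (the service name and the "counts" dictionary are illustrative, and the exact base-class API has shifted between SDK releases):

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

class CounterService : StatefulService
{
    public CounterService(StatefulServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // A replicated, transactional dictionary - state lives next to the compute
        var counts = await StateManager.GetOrAddAsync<IReliableDictionary<string, long>>("counts");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (var tx = StateManager.CreateTransaction())
            {
                // The update is replicated to secondaries before the commit completes
                await counts.AddOrUpdateAsync(tx, "requests", 1, (key, value) => value + 1);
                await tx.CommitAsync();
            }
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}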

What makes Reliable Services different?

Reliable Services in Service Fabric is different from services you may have written before. Service Fabric provides reliability, availability, consistency, and scalability.
  • Reliability--Your service will stay up even in unreliable environments where your machines may fail or hit network issues.
  • Availability--Your service will be reachable and responsive. (This doesn't mean that you can't have services that can't be found or reached from outside.)
  • Scalability--Services are decoupled from specific hardware, and they can grow or shrink as necessary through the addition or removal of hardware or virtual resources. Services are easily partitioned (especially in the stateful case) to ensure that independent portions of the service can scale and respond to failures independently. Finally, Service Fabric encourages services to be lightweight by allowing thousands of services to be provisioned within a single process, rather than requiring or dedicating entire OS instances to a single instance of a particular workload.
  • Consistency--Any information stored in this service can be guaranteed to be consistent (this applies only to stateful services - more on this later)

Stateless Reliable Services

A stateless service is one where there is literally no state maintained within the service, or the state that is present is entirely disposable and doesn't require synchronization, replication, persistence, or high availability.
For example, consider a calculator that has no memory and receives all terms and operations to perform at once.

Stateful Reliable Services

A stateful service is one that must have some portion of state kept consistent and present in order for the service to function. Consider a service that constantly computes a rolling average of some value based on updates it receives. To do this, it must have the current set of incoming requests it needs to process, as well as the current average. Any service that retrieves, processes, and stores information in an external store (such as an Azure blob or table store today) is stateful. It just keeps its state in the external state store.

When to use Reliable Services APIs

If any of the following characterize your application service needs, then you should consider Reliable Services APIs:
  • You need to provide application behaviour across multiple units of state (e.g., orders and order line items).
  • Your application’s state can be naturally modeled as Reliable Dictionaries and Queues.
  • Your state needs to be highly available with low latency access.
  • Your application needs to control the concurrency or granularity of transacted operations across one or more Reliable Collections.
  • You want to manage the communications or control the partitioning scheme for your service.
  • Your code needs a free-threaded runtime environment.
  • Your application needs to dynamically create or destroy Reliable Dictionaries or Queues at runtime.
  • You need to programmatically control Service Fabric-provided backup and restore features for your service’s state*.
  • Your application needs to maintain change history for its units of state*.
  • You want to develop or consume third-party-developed, custom state providers*.

Comparing the Reliable Actors API and the Reliable Services API

When to choose the Reliable Actors API:
  • Your problem space involves a large number (1000+) of small, independent units of state and logic.
  • You want to work with single-threaded objects that do not require significant external interaction.
  • You want the platform to manage communication for you.

When to choose the Reliable Services API:
  • You need to maintain logic across multiple components.
  • You want to use Reliable Collections (like .NET Reliable Dictionary and Reliable Queue) to store and manage your state.
  • You want to manage communication and control the partitioning scheme for your service.

Keep in mind that it is perfectly reasonable to use different frameworks for different services within your app. For instance, you might have a stateful service that aggregates data that is generated by a number of actors.
That's all for a high level summary.



Friday, 11 December 2015

How to install TCP / Named Pipes and some IIS bindings on a server


Thought I would share this as it is very useful for installing server features and setting up IIS for your binding needs.

Here is the PowerShell script:
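It was along these lines - a sketch assuming Windows Server 2012+ with the .NET 4.5 WCF activation features, and a site called Default Web Site with the default net.tcp/net.pipe binding information:

Import-Module ServerManager
# Install non-HTTP activation for WCF (TCP and named pipes)
Add-WindowsFeature NET-WCF-TCP-Activation45, NET-WCF-Pipe-Activation45

Import-Module WebAdministration
# Add net.tcp and net.pipe bindings to the site
New-ItemProperty 'IIS:\Sites\Default Web Site' -Name bindings -Value @{protocol='net.tcp'; bindingInformation='808:*'}
New-ItemProperty 'IIS:\Sites\Default Web Site' -Name bindings -Value @{protocol='net.pipe'; bindingInformation='*'}
# Allow the site's applications to use the new protocols
Set-ItemProperty 'IIS:\Sites\Default Web Site' -Name enabledProtocols -Value 'http,net.tcp,net.pipe'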

Here are the server features before the script runs:

IIS Before:

The script running:

The server after:

IIS After:

Named Pipes and TCP Listeners installed and running:


Thanks for listening,


Tuesday, 29 September 2015

Generate Entity Framework update scripts from migrations

This is how you generate Entity Framework update scripts from migrations.

Note: this is a very simplified post that doesn't generate a very complicated database script.

So you already have an initial database migration in your project. If you don't, go and Google how to get started.

I'll start by generating an SQL script for my initial migration.

Here is part of my initial migration in C#:
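It looked something like this (the table and column names here are just representative):

public partial class Initial : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.Users",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(),
                })
            .PrimaryKey(t => t.Id);
    }

    public override void Down()
    {
        DropTable("dbo.Users");
    }
}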

I will now generate the script for this by running this command in the Package Manager Console:
Update-Database -Script -SourceMigration: $InitialDatabase -TargetMigration: Initial

Make sure you select the correct Default Project in the dropdown shown in the above picture.

Here is the SQL script:

Now I will update my model with a new property:

I then ran the following to create my new C# migration:
Add-Migration AddedAProperty -StartUpProjectName User.DbResourceAccess

Which created this new C# file:
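Something along these lines (again, the names are representative):

public partial class AddedAProperty : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Users", "NewProperty", c => c.String());
    }

    public override void Down()
    {
        DropColumn("dbo.Users", "NewProperty");
    }
}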

Next I will run this:
Update-Database -Script -SourceMigration: $InitialDatabase -TargetMigration: AddedAProperty
Which created the following script:

You could then apply this to a production database for example.
I'm not sure you would want to insert into a __MigrationHistory table on production though.


Monday, 28 September 2015

Deployment considerations in Azure - Cloud Services

Manage Deployments in Azure

Note: this is a work in progress

The staging area is not designed to be a "QA" environment, but only a holding area before production is deployed.
You should open up a new service for a Testing environment, with its own Prod/Staging. In this case, you will want to maintain multiple configuration file sets, one set per deployment environment (Production, Testing, etc.).
Staging is a temporary deployment slot used mainly for no-downtime upgrades and the ability to roll back an upgrade.
Azure provides production and staging environments within which you can create a service deployment. When a service is deployed to either the production or staging environments, a single public IP address, known as a virtual IP address (VIP), is assigned to the service in that environment. The VIP is used for all input endpoints associated with roles in the deployment. Even if the service has no input endpoints specified in the model, the VIP is still allocated and used as the source address assigned to outbound traffic coming from each role.

What happens when a service is promoted from staging to production?

Typically a service is deployed to the staging environment to test it before deploying the service to the production environment. When it is time to promote the service in staging to the production environment, you can do so without redeploying the service. This can be done by swapping the deployments.
The deployments can be swapped by calling the Swap Deployment Service Management API or by swapping the VIPs in the portal, both of which result in the same underlying operation on the hosted service. For more information on swapping the VIPs, see How to Manage Cloud Services.
Screen shot from Azure Portal - 2015-09-28

When the service is deployed, a VIP is assigned to the environment to which it is deployed. In the case of the production environment, the service can be accessed by its URL (http://<servicename>.cloudapp.net) or by the VIP. When a service is deployed to the staging environment, a VIP is assigned to the staging environment and the service can be accessed by a URL of the form http://<deploymentid>.cloudapp.net, or by the assigned VIP. The assigned VIPs can be viewed in the portal or by calling the Get Deployment Service Management API.
When the service is promoted to production, the VIP and URL that were assigned to the production environment are assigned to the deployment that is currently in the staging environment, thus “promoting” the service to production. The VIP and URL assigned to the staging environment are assigned to the deployment that was in the production environment.
It is important to remember that neither the production public IP address nor the service URL changes during the promotion.
To examine how this works, we can illustrate a scenario in which there is a Deployment A deployed to the production environment. Additionally, there is a Deployment B deployed to the staging environment. The following table illustrates VIPs after the initial deployment of the services to production and staging:


  • Production environment: Deployment A (production VIP and URL)
  • Staging environment: Deployment B (staging VIP and URL)

Once Deployment B is promoted to production, the VIPs are as follows:

  • Production environment: Deployment B (production VIP and URL)
  • Staging environment: Deployment A (staging VIP and URL)
When the deployments are swapped, the deployment in the production environment that was associated with the production VIP and URL is now associated with the staging VIP. Likewise, the deployment in the staging environment that was associated with the staging VIP and URL is now associated with the production VIP.

Only new incoming connections are connected to the newly promoted service. Existing connections are not swapped during a deployment swap.

Persistence of VIPs in Windows Azure

Throughout the lifetime of a deployment, the VIP assigned will not change, regardless of the operations on the deployment, including updates, reboots, and reimaging the OS. The VIP for a given deployment will persist until that deployment is deleted. When a customer swaps the VIP between a stage and production deployment in a single hosted service, both deployment VIPs are persisted. A VIP is associated with the deployment and not the hosted service. When a deployment is deleted, the VIP associated with that deployment will return to the pool and be re-assigned accordingly, even if the hosted service is not deleted. Windows Azure currently does not support a customer reserving a VIP outside of the lifetime of a deployment.

Managing ASP.NET machine keys for IIS

Azure automatically manages the ASP.NET machineKey for services deployed using IIS. If you routinely use the VIP Swap deployment strategy, you should manually configure the ASP.NET machine keys. For information on configuring the machine key, see Configuring Machine Keys in IIS 7.
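For reference, the machineKey element lives in web.config and looks like this (the key values are placeholders - generate your own):

<system.web>
  <machineKey validationKey="[128-hex-character validation key]"
              decryptionKey="[48-hex-character decryption key]"
              validation="HMACSHA256"
              decryption="AES" />
</system.web>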

More on machine keys later ...

Questions for later:

Is there any way to deploy different instance sizes for test/production?

Note that the image above shows multiple cscfg files, but only one csdef file. The cscfg file has the role names, instance counts, configuration values, and so on. The one csdef file is used with whichever configuration you select when you publish. It has a list of all of the configuration settings (but not the values), setup tasks (if applicable), the size of the VM to be used, and so on. The value you want to especially note is the VM size.

Using this methodology of multiple configuration files in one cloud project, you only have one place to set the size of the VM regardless of whether you are publishing to staging or production. You may not want to use the same sizes for staging and production, especially if you are using medium or larger VMs in production and small VMs in staging. In that case, you either have to change this every time you publish, or you have to have another solution.

Note: See the heading "Multiple cloud projects with their own configuration settings":

Friday, 18 September 2015

See your Azure VM deployment succeed or fail!


Yesterday we were having deployment issues due to an Azure WebRole startup task (more on that in my next post.)

We rolled back the changes and all was fine, but I wanted to find the information that was logged on the server so I can troubleshoot in the future if it happens again.

I just did a fresh deployment as a baseline to prove that I was working with a successful deployment.

As it was deploying I could see log records appearing in here when logged onto my Cloud Service VM:

As this was a successful deployment I could see messages in the above mentioned Windows Azure Event log showing that nothing went wrong.

I could see the log message stating that the web site installed into IIS successfully.
I could see the successful OnStart() and Run() events.

Here are some screen shots:

Note that if we had diagnostics turned on we could probably see the same information inside the visual studio server explorer for our cloud service.

Not very useful when everything goes well. I'll post more when and if I get a failed deployment.


Tuesday, 30 June 2015

An alternative Way to Remotely Debug Azure Web Sites


I have been having trouble connecting to my Azure instances with the normal attach to debugger method:

This never works for me, even when debugging is enabled.

Here is a link to a way that works and it worked the first time.

I only tested with an Azure Web App so not sure about WCF and services yet.
(More on this later)


Thursday, 9 April 2015

Stop an AzureVM using Azure Automation with a schedule


UPDATE: The script mentioned in this post is now here:

In this blog I will show you how to use Azure Automation to schedule a Powershell script to stop and deallocate a VM running in Azure. 

The reason I am blogging this is that I have spent a couple of days looking at other people's blogs and the information seems to not be quite correct. In particular, a self-signed certificate from your Azure box is no longer required.

The reason you might want to do this is to save some money as when your Azure VM is stopped and deallocated, you will not be charged for it.

Firstly, I created a VM to play with called tempVMToStop as follows:

It required a username and password so I used my name. 

Once you have the VM you can remote desktop to it using the link at the bottom of the Azure portal and the username and password created in the previous step.

The next step is to add our automation script.

Now we go to automation in Azure:

Remember the goal of this blog is to automatically stop the following VM:
First we will need to create a user that is allowed to run our automation in Azure Active Directory, as shown here:

Create the user to be used for automation:

Then go back into the automation section and choose Assets:

and add the automation user you just created here:

This is reasonably new; before, you needed to create a self-signed certificate on your VM and import the .pfx file into an Asset => Credential, but this is no longer needed.

Now go to the automationDemo and then choose Runbooks:

Click to create a new runbook:

Once it is created click on Author and write your script as follows:
workflow tempVMToStopRunBook
{
    param
    (
        [string]$cloudServiceName,
        [string]$vmName
    )

    # Specify Azure Subscription Name
    $subName = 'XXX - Base Visual Studio Premium with MSDN'
    $cred = Get-AutomationPSCredential -Name "automationuser"
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName $subName

    $vm = Get-AzureVM -ServiceName $cloudServiceName -Name $vmName
    Write-Output "VM NAME: $vm"
    Write-Output "vm.InstanceStatus: $($vm.InstanceStatus)"
    if ($vm.InstanceStatus -eq 'ReadyRole') {
        Stop-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name -Force
    }
}

Note that the subscription name shown as 'XXX - Base Visual Studio Premium with MSDN' will need to be replaced by your subscription name.

Also, the workflow name must be the same as the runbook name.

Save it and then you can choose to test it or just publish it. 
I will skip to publish as I have already tested it.

Once it is published you can click start and enter the 2 parameters that the script is expecting (cloudServiceName and vmName):



Now we want to see that our VM stops so here was mine before:

Once you run it you will see some output when you click Jobs in the runbook:

And then if you look back at your VM it should be stopped:

Note that as we are totally deallocating the resources, the next time you start it up it will get a new IP address, but this will all be given to you in the VM section of your portal.

The next step is to obviously schedule what we just did and also schedule a start script so we could, for example, stop our VM at the end of a business day and start it in the morning at 8am so it is ready for us to use. 

This will save some money as the VM will not be using resources overnight.
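The start runbook would be the same shape as the stop one, just checking for the deallocated state and calling Start-AzureVM (a sketch):

workflow tempVMToStartRunBook
{
    param
    (
        [string]$cloudServiceName,
        [string]$vmName
    )

    $cred = Get-AutomationPSCredential -Name "automationuser"
    Add-AzureAccount -Credential $cred
    Select-AzureSubscription -SubscriptionName 'XXX - Base Visual Studio Premium with MSDN'

    $vm = Get-AzureVM -ServiceName $cloudServiceName -Name $vmName
    if ($vm.InstanceStatus -eq 'StoppedDeallocated') {
        Start-AzureVM -ServiceName $vm.ServiceName -Name $vm.Name
    }
}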

Go back to the root of your automation and add a new asset for your schedule:

Here's one I created that will run the PowerShell script we created every day:

That's all there is to it. 

Note that I am no expert on Azure automation so all comments and constructive criticism are welcome.