Monday, 28 November 2016

Monitor your Azure API Management Instance with PowerBI

This document outlines the steps involved in monitoring your Azure API Management instance with PowerBI.

Note: I will refer to Azure API management as APIM in this document.

Steps will include:

  • Add a logger, using the APIM REST API, to your APIM instance to send events to an event hub
  • Set up a Stream Analytics job - a job consists of one or more input data sources, a query expressing the data transformation, and one or more output targets that results are written to. Together these enable data analytics processing for streaming data scenarios
  • Build a PowerBI dashboard to see your APIM data in a format that suits your business requirements.

Adding a logger to APIM

The first thing you need to do is add a logger to your APIM instance using the APIM REST API. I will use Postman to do this.

Firstly, in your APIM instance, enable the REST API:

Secondly, go to the bottom of the security page where you enabled the REST API and generate a shared access key:

You will use this key later in Postman, but first we need to create an Azure Event Hub.

Create an Event Hub

Go into your Azure portal and create an event hub.

Ensure you create two Event Hub shared access policies: one for sending to the event hub and one for receiving. This gives you more granular control over your hub.

Creating the logger in Postman

Now that you have an Event Hub, the next step is to configure a Logger in your API Management service so that it can log events to the Event Hub.
API Management loggers are configured using the API Management REST API.
To create a logger, make an HTTP PUT request using the following URL template:

https://{your service}.management.azure-api.net/loggers/{new logger name}?api-version=2014-02-14-preview

Replace {your service} with the name of your API Management service instance.
Replace {new logger name} with the desired name for your new logger. You will reference this name when you configure the log-to-eventhub policy.

Add the following headers to the request.
Specify the request body using the following template.

{
  "type" : "AzureEventHub",
  "description" : "Sample logger description",
  "credentials" : {
    "name" : "Name of the Event Hub from the Azure Classic Portal",
    "connectionString" : "Endpoint=Event Hub Sender connection string"
  }
}

Here is mine:

You will see that it returned 201 Created, which means we now have a logger.
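If you prefer the command line over Postman, the same PUT request can be sketched with curl. Every value below (service name, logger name, SAS token, hub name, connection string) is a placeholder, not a value from a real service:

```shell
# Placeholder values - substitute your own (assumed names, not real ones)
SERVICE="myservice"            # your APIM service name
LOGGER="my-eventhub-logger"    # the new logger name you will reference in the policy
URL="https://${SERVICE}.management.azure-api.net/loggers/${LOGGER}?api-version=2014-02-14-preview"
echo "$URL"
# Uncomment to actually send it (needs the shared access key generated earlier):
# curl -X PUT "$URL" \
#   -H "Authorization: SharedAccessSignature uid=...&ex=...&sn=..." \
#   -H "Content-Type: application/json" \
#   -d '{"type":"AzureEventHub","description":"Sample logger description","credentials":{"name":"my-event-hub","connectionString":"Endpoint=sb://..."}}'
```

A 201 Created response means the logger now exists, just as in the Postman screenshot.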

Now we go into APIM and add a policy on the built-in Echo API.

This will send an event to our event hub whenever the Retrieve resource operation of the API is hit.
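For reference, a log-to-eventhub policy in the inbound section looks roughly like this. The logger-id must match the logger name you created via the REST API; the fields being logged here are just illustrative:

```xml
<policies>
    <inbound>
        <base />
        <!-- logger-id matches the logger created with the REST API -->
        <log-to-eventhub logger-id="my-eventhub-logger">
            @( string.Join(",", DateTime.UtcNow, context.RequestId,
                context.Request.IpAddress, context.Operation.Name) )
        </log-to-eventhub>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```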

View the Event Hub Events

If you want to view the event hub events for your own sanity check then download Service Bus Explorer and listen to your event hub:

You will notice that the event hub data shown in the listener is the same data written by our APIM policy.

Create Stream Analytics Job to send data to PowerBI

Next, go to your Azure portal and create a new stream analytics job.

Once it is created then create an input:

And a PowerBI output. Note that you will be prompted to authorize your PowerBI account.

Also create a query to transform the data. My query doesn't do anything special:
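A simple pass-through query is enough to get data flowing; the input and output names below are whatever aliases you gave them when creating the job:

```sql
SELECT
    *
INTO
    [powerbi-output]
FROM
    [eventhub-input]
```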

Then start your stream analytics job.

Now go back to your APIM instance and hit the API endpoint a few times. Make sure it is the API operation with the policy on it.

Also, change param1 and param2 on the operation to a few different values so we get somewhat useful data:

Now look at the trace for that operation in APIM and you will see the log to event hub event has fired:

Now log into PowerBI on the web and you will see (hopefully) a new streaming dataset:

Create a dashboard

We will create a PowerBI dashboard to visualise our APIM data using param1 and param2.

I will drag a pie chart onto the workspace and set the following: all I did was add param2 as a count. As you can see, we get a great visualisation of the number of times param2 was used on the APIM operation.

So you can see that I sent a request with param2=5 a lot more times than I did for the other calls.

Obviously you can use your imagination as to what you can use this for.

Here we see a Tree Map, Pie Chart and Funnel displaying data from my APIM. The funnel shows distinct calls by IP address.


Thursday, 28 April 2016

Error WAT200: No default service configuration "ServiceConfiguration.cscfg" could be found in the project.


When deploying a cloud service / web role you may experience the following message:

2016-04-27T08:16:41.2709354Z ##[error]C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\Windows Azure Tools\2.9\Microsoft.WindowsAzure.targets(373,5): Error WAT200: No default service configuration "ServiceConfiguration.cscfg" could be found in the project.

On your VSTS build screen:

It means it is basically looking for a default ServiceConfiguration.cscfg file because we haven't specified an alternative.

In my build I went and set the Target Profile to match the build configuration and this fixed the issue.

So it will be looking for one of the files shown below that matches the currently building BuildConfiguration.

And after those changes everything builds perfectly:


Wednesday, 27 April 2016

Moving nuget package folder to a different location (1 level higher)


Note: thanks to this post by Sebastian Belczyk for some help:

This morning I needed to relocate my NuGet packages folder one level higher to allow an offshore team to work in parallel with us. Basically, the original solution was cloaked and branched. In the original solution we realised we had two places for NuGet packages, so we had to merge them into one. This meant that the branched solution needed to match the original, otherwise we would have had lots of reintegration issues.

Firstly, I needed to update my nuget.config file. We had our packages located here:
But we really needed them here:
../../packages/ - up another level.
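The relevant part of nuget.config is the repositoryPath setting. A minimal sketch of the change (your file will likely contain other settings too):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <!-- was "../packages/"; moved up one level -->
    <add key="repositoryPath" value="../../packages/" />
  </config>
</configuration>
```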

Then I needed to close the solution and reopen it for this change to take effect. I found this out because changing the file without restarting meant that nothing changed and my packages were still being restored to the old ../packages/ directory.

Then right click to Manage nuget Packages for Solution ...

Then Visual Studio will detect that there is no nuget packages folder on the file system at the location specified:

It will load them when you click restore:

So in summary, I had this file structure:
-- Project
---- packages Folder

And now I have this structure:
-- packages Folder
-- Project


Thursday, 7 April 2016

Move your IIS App pools from one machine to another


Today I got a new machine and didn't want to have to set my IIS up again.

So I ran the following from a command prompt run as administrator:

%windir%\system32\inetsrv\appcmd list apppool /config /xml > c:\apppools.xml

This created an xml file with all my app pools in it. I deleted the 6 nodes that held the default IIS app pools:

  • DefaultAppPool
  • Classic .NET AppPool
  • .NET v2.0 Classic
  • .NET v2.0
  • .NET v4.5 Classic
  • .NET v4.5
Then I went to my new machine after copying over the edited apppools.xml file and ran this command:

%windir%\system32\inetsrv\appcmd add apppool /in < c:\apppools.xml

And all my app pools were added:

To export all my sites I ran this:
%windir%\system32\inetsrv\appcmd list site /config /xml > c:\sites.xml
It exported all my sites to an xml file. I edited this and removed the default web site as it was already present on the destination machine.

Then I ran this on the destination machine:

 %windir%\system32\inetsrv\appcmd add site /in < c:\sites.xml

And it imported all my sites into IIS:

That is all,


Fix already installed nuget packages


Sometimes I create a branch and for some reason the packages, and hence the DLLs, of one or two projects are wrong. I even try using "Restore NuGet Packages" at solution level from the context menu, but this doesn't always fix it.

The best fix I have found is this:
Update-Package PackageName -ProjectName MyProject -reinstall

So if you have Entity Framework 6.1.2 installed and it got broken in a branch then you could run:

Update-Package EntityFramework -ProjectName MyProject -reinstall

And it will reinstall EF 6.1.2

That is all,


Friday, 29 January 2016

Service Fabric Reliable Actors and Reliable Services

Note: This is a note to self.  


Actors are isolated, single-threaded components that encapsulate both state and behavior. They are similar to .NET objects, so they provide a natural programming model. Every actor is an instance of an actor type, similar to the way a .NET object is an instance of a .NET type. For example, an actor type may implement the functionality of a calculator, and many actors of that type could be distributed on various nodes across a cluster. Each such actor is uniquely identified by an actor ID.

Stateless actors

Stateless actors, which are derived from the StatelessActor base class, do not have any state that is managed by the Actors runtime. Their member variables are preserved throughout their in-memory lifetime, just as with any other .NET type. However, when they are garbage-collected after a period of inactivity, their state is lost. Similarly, the state can be lost due to failovers, which can occur during upgrades or resource-balancing operations, or as the result of failures in the actor process or its hosting node.
The following is an example of a stateless actor:
class HelloActor : StatelessActor, IHello
{
    public Task<string> SayHello(string greeting)
    {
        return Task.FromResult("You said: '" + greeting + "', I say: Hello Actors!");
    }
}

Stateful actors

Stateful actors have a state that needs to be preserved across garbage collections and failovers. They derive from StatefulActor&lt;TState&gt;, where TState is the type of the state that needs to be preserved. The state can be accessed in the actor methods via the State property on the base class.
The following is an example of a stateful actor accessing the state:
class VoicemailBoxActor : StatefulActor<VoicemailBox>, IVoicemailBoxActor
{
    public Task<List<Voicemail>> GetMessagesAsync()
    {
        return Task.FromResult(State.MessageList);
    }
}

Actor state is preserved across garbage collections and failovers because it is persisted on disk and replicated across multiple nodes in the cluster. This means that, as with method arguments and return values, the actor state's type must be data contract serializable.

Actor state providers

The storage and retrieval of the state are provided by an actor state provider. State providers can be configured per actor or for all actors within an assembly by the state provider specific attribute. When an actor is activated, its state is loaded in memory. When an actor method finishes, the Actors runtime automatically saves the modified state by calling a method on the state provider. If failure occurs during the Save operation, the Actors runtime creates a new actor instance and loads the last consistent state from the state provider.
By default, stateful actors use the key-value store actor state provider, which is built on the distributed key-value store provided by the Service Fabric platform. For more information, see the topic on state provider choices.

Reliable Services

Reliable Services gives you a simple, powerful, top-level programming model to help you express what is important to your application. With the Reliable Services programming model, you get:

  • For stateful services, the Reliable Services programming model allows you to consistently and reliably store your state right inside your service by using Reliable Collections. This is a simple set of highly available collection classes that will be familiar to anyone who has used C# collections. Traditionally, services needed external systems for Reliable state management. With Reliable Collections, you can store your state next to your compute with the same high availability and reliability you've come to expect from highly available external stores, and with the additional latency improvements that co-locating the compute and state provide.
  • A simple model for running your own code that looks like programming models you are used to. Your code has a well-defined entry point and easily managed lifecycle.
  • A pluggable communication model. Use the transport of your choice, such as HTTP with Web API, WebSockets, custom TCP protocols, etc. Reliable Services provide some great out-of-the-box options you can use, or you can provide your own.

What makes Reliable Services different?

Reliable Services in Service Fabric is different from services you may have written before. Service Fabric provides reliability, availability, consistency, and scalability.
  • Reliability--Your service will stay up even in unreliable environments where your machines may fail or hit network issues.
  • Availability--Your service will be reachable and responsive. (This doesn't mean that you can't have services that can't be found or reached from outside.)
  • Scalability--Services are decoupled from specific hardware, and they can grow or shrink as necessary through the addition or removal of hardware or virtual resources. Services are easily partitioned (especially in the stateful case) to ensure that independent portions of the service can scale and respond to failures independently. Finally, Service Fabric encourages services to be lightweight by allowing thousands of services to be provisioned within a single process, rather than requiring or dedicating entire OS instances to a single instance of a particular workload.
  • Consistency--Any information stored in this service can be guaranteed to be consistent (this applies only to stateful services - more on this later)

Stateless Reliable Services

A stateless service is one where there is literally no state maintained within the service, or the state that is present is entirely disposable and doesn't require synchronization, replication, persistence, or high availability.
For example, consider a calculator that has no memory and receives all terms and operations to perform at once.

Stateful Reliable Services

A stateful service is one that must have some portion of state kept consistent and present in order for the service to function. Consider a service that constantly computes a rolling average of some value based on updates it receives. To do this, it must have the current set of incoming requests it needs to process, as well as the current average. Any service that retrieves, processes, and stores information in an external store (such as an Azure blob or table store today) is stateful. It just keeps its state in the external state store.
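As a rough illustration (not from the official docs), the rolling-average example could be sketched as a stateful service that keeps its state in a Reliable Dictionary. This assumes the Microsoft.ServiceFabric.Services and Microsoft.ServiceFabric.Data packages; the class name and dictionary key are made up:

```csharp
// Sketch only - names and values are illustrative, not a complete service.
class RollingAverageService : StatefulService
{
    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // State lives in a replicated, highly available dictionary
        var averages = await this.StateManager
            .GetOrAddAsync<IReliableDictionary<string, double>>("averages");

        using (var tx = this.StateManager.CreateTransaction())
        {
            // Writes happen inside a transaction so replicas stay consistent
            await averages.SetAsync(tx, "current", 42.0);
            await tx.CommitAsync();
        }
    }
}
```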

When to use Reliable Services APIs

If any of the following characterize your application service needs, then you should consider Reliable Services APIs:
  • You need to provide application behaviour across multiple units of state (e.g., orders and order line items).
  • Your application’s state can be naturally modeled as Reliable Dictionaries and Queues.
  • Your state needs to be highly available with low latency access.
  • Your application needs to control the concurrency or granularity of transacted operations across one or more Reliable Collections.
  • You want to manage the communications or control the partitioning scheme for your service.
  • Your code needs a free-threaded runtime environment.
  • Your application needs to dynamically create or destroy Reliable Dictionaries or Queues at runtime.
  • You need to programmatically control Service Fabric-provided backup and restore features for your service’s state*.
  • Your application needs to maintain change history for its units of state*.
  • You want to develop or consume third-party-developed, custom state providers*.

Comparing the Reliable Actors API and the Reliable Services API

When to choose the Reliable Actors API:
  • Your problem space involves a large number (1000+) of small, independent units of state and logic.
  • You want to work with single-threaded objects that do not require significant external interaction.
  • You want the platform to manage communication for you.

When to choose the Reliable Services API:
  • You need to maintain logic across multiple components.
  • You want to use Reliable Collections (like .NET Reliable Dictionary and Reliable Queue) to store and manage your state.
  • You want to manage communication and control the partitioning scheme for your service.

Keep in mind that it is perfectly reasonable to use different frameworks for different services within your app. For instance, you might have a stateful service that aggregates data that is generated by a number of actors.
That's all for a high-level summary.