Generating clients for your APIs with AutoRest

When building Web APIs it's often required to provide client adapters between various programming stacks and raw HTTP REST APIs. These 'clients' can be built manually but it's often a rather tedious task and it adds to your development efforts as you need to keep the clients in sync with your services as you evolve them. There had to be a better way and in fact Microsoft faced this issue when they had to generate clients for various Azure REST APIs to be used in various stacks such as .NET, Node, Ruby, Java and Python. They've created and open sourced a tool called [AutoRest](https://github.com/Azure/autorest) that can generate client side code from the Swagger document describing your service. Let's have a look!

### Swagger

Remember WSDL? [Swagger](http://swagger.io/) is something that has taken its place in the RESTful world. It's a spec for the JSON document describing your REST APIs including paths (resources), operations (verbs), parameters and responses and of course representations. Currently it's at version 2.0 and is being widely adopted as it enables interoperability between various services and software stacks.

#### Enabling Swagger doc in ASP.NET Core

For ASP.NET Web API the most popular library that brings Swagger documentation has been [Swashbuckle](https://github.com/domaindrivendev/Swashbuckle). It registers an endpoint that triggers generation of the document off of the running services. Internally it relies on reflection, API description services, custom attributes and filters and even XML comments. The end result is a JSON document that complies with the [Swagger spec](http://swagger.io/specification/). Swashbuckle is pretty extensible and allows you to affect the way literally any portion of the document looks, so long as it's still within the spec.

There is a work-in-progress [version](https://github.com/domaindrivendev/Ahoy) of Swashbuckle for ASP.NET Core and its package is available through [NuGet](https://www.nuget.org/packages/Swashbuckle.SwaggerGen/6.0.0-rc1-final). Once you have installed the `Swashbuckle.SwaggerGen` package it's time to configure the generator.

```
public void ConfigureServices(IServiceCollection services)
{
    services.AddSwaggerGen();
    services.ConfigureSwaggerSchema(options =>
    {
        options.DescribeAllEnumsAsStrings = true;
    });
    services.ConfigureSwaggerDocument(options =>
    {
        options.SingleApiVersion(new Swashbuckle.SwaggerGen.Info
        {
            Title = "Book Fast API",
            Version = "v1"
        });
    });
}
```

`ConfigureSwaggerSchema`, among other properties, allows you to register model filters which you can use to adjust the way documentation is generated for your representations. `ConfigureSwaggerDocument` allows you to register operation and document filters that will fine tune documentation of individual operations or even the whole document. Model, operation and document filters are the main extensibility points of Swashbuckle. In our case we just provided a short description of the API and also specified that we want enums to be documented rather than their values.

Now we have to add a Swashbuckle middleware to the request pipeline that will handle requests to a special configurable documentation endpoint:

```
public void Configure(IApplicationBuilder app)
{
    app.UseIISPlatformHandler();
    app.UseMvc();

    app.UseSwaggerGen("docs/{apiVersion}");
}
```

If we don't specify the route Swashbuckle will use the default `swagger/{apiVersion}/swagger.json`. If you launch the app and hit the specified route you should get a JSON document in response.
It's a valid Swagger 2.0 document, albeit not ideal. Things to watch out for:

- Operation identifiers are quite ugly as they are formed by concatenating your controller and action names together with HTTP verbs and parameters. AutoRest uses operation identifiers to derive method names for your client interfaces so you want to make sure you control these identifiers.
- All responses include the default 200 only even though your actions may return 201 or 204 as a success code and chances are they can produce some 40x as well.
- If you return IActionResult rather than an actual representation the response won't contain a reference to the corresponding schema. And you will return IActionResult from at least your POST and DELETE methods.
- `produces` properties of the operations are empty and you probably want to include content types that your API supports (e.g. `application/json`).
- Parameters and properties in your representations are lacking descriptions and while this may not be such an issue for you, wouldn't it be nice if those descriptions were included as XML comments in generated classes?

Here's what a POST operation from my `BookingController` would look like:

```
"/api/accommodations/{accommodationId}/bookings": {
    "post": {
        "tags": ["Booking"],
        "operationId": "ApiAccommodationsByAccommodationIdBookingsPost",
        "produces": [],
        "parameters": [{
            "name": "accommodationId",
            "in": "path",
            "required": true,
            "type": "string"
        }, {
            "name": "bookingData",
            "in": "body",
            "required": false,
            "schema": {
                "$ref": "#/definitions/BookingData"
            }
        }],
        "responses": {
            "200": {
                "description": "OK"
            }
        },
        "deprecated": false
    }
}
```

Let's fix these issues!

#### Getting better documentation with Swashbuckle attributes and filters

Remember that `AddSwaggerGen` call? Beyond anything else it registers default operation filters that will handle special Swashbuckle attributes that you can use to control operation identifiers and responses. The attributes are: `SwaggerOperation`, `SwaggerResponse` and `SwaggerResponseRemoveDefaults`.

Let's have a look at what our POST method could look like once decorated with the aforementioned attributes:

```
[HttpPost("api/accommodations/{accommodationId}/bookings")]
[SwaggerOperation("create-booking")]
[SwaggerResponseRemoveDefaults]
[SwaggerResponse(System.Net.HttpStatusCode.Created, Type = typeof(BookingRepresentation))]
[SwaggerResponse(System.Net.HttpStatusCode.BadRequest, Description = "Invalid parameters")]
[SwaggerResponse(System.Net.HttpStatusCode.NotFound, Description = "Accommodation not found")]
public async Task<IActionResult> Create([FromRoute]Guid accommodationId, [FromBody]BookingData bookingData)
{
    try
    {
        if (ModelState.IsValid)
        {
            var booking = await service.BookAsync(accommodationId, mapper.MapFrom(bookingData));
            return CreatedAtAction("Find", mapper.MapFrom(booking));
        }

        return HttpBadRequest();
    }
    catch (AccommodationNotFoundException)
    {
        return HttpNotFound();
    }
}
```

Even though I've chosen a dash style for my operation identifiers (i.e. `create-booking`) AutoRest will actually generate a method called `CreateBooking` in my client interface which is very nice! I also specified that upon success the operation will return 201 and the Swagger document should include a reference to `BookingRepresentation` in the 201 response. I had to remove the default 200 response with the `SwaggerResponseRemoveDefaults` attribute. I also included a 404 response with an appropriate description.
Please note that HTTP status codes are actually keys in the dictionary of responses within an operation and thus there can be only one response with a particular status code. If you have multiple 404's you will need to come up with a combined description in the `SwaggerResponse` attribute.

So far so good, but let's address the missing content type issue. One way to do that is to add a custom operation filter that will add supported content types to all of our operations:

```
internal class DefaultContentTypeOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        operation.Produces.Clear();
        operation.Produces.Add("application/json");
    }
}
```

As was mentioned above, operation filters are added in `ConfigureSwaggerDocument` so let's do that:

```
services.ConfigureSwaggerDocument(options =>
{
    options.SingleApiVersion(new Swashbuckle.SwaggerGen.Info
    {
        Title = "Book Fast API",
        Version = "v1"
    });

    options.OperationFilter<DefaultContentTypeOperationFilter>();
});
```

#### Getting even better documentation with XML comments

Swashbuckle can also extract XML comments that you can add to your action methods as well as to models. XML comments are extracted by default but you need to enable emission of build artifacts by going to your MVC project's Properties and selecting the 'Produce output on build' option on the Build page.

![ASP.NET Core app build properties page](//az777544.vo.msecnd.net/blog-content/AspNetCoreProduceArtifacts.png)

By default the artifacts (.dll, .pdb and the desired .xml) will be put into the 'artifacts' folder in your solution under corresponding project, build configuration and framework type folders. When you publish and choose to create NuGet packages for your code the artifacts will be in the approot\packages\{YourProjectName}\{PackageVersion}\lib\{FrameworkType} folder. Why is this important? Because you need to provide a path to the XML file to Swashbuckle and with ASP.NET Core these paths are going to be different depending on whether you just build locally or publish. This [configuration code](https://github.com/dzimchuk/book-fast-api/blob/master/src/BookFast.Api/Swagger/SwaggerExtensions.cs) will work with local builds but not with published apps and it has to be used in the development environment only. Moreover it's not compatible with RC2 bits of ASP.NET Core. But we seem to be moving away from the topic of this post.

Anyway, once we have decorated our code with nice XML comments let's have a look at the final version of the POST Booking operation documentation:

```
"/api/accommodations/{accommodationId}/bookings": {
    "post": {
        "tags": ["Booking"],
        "summary": "Book an accommodation",
        "operationId": "create-booking",
        "produces": ["application/json"],
        "parameters": [{
            "name": "accommodationId",
            "in": "path",
            "description": "Accommodation ID",
            "required": true,
            "type": "string"
        }, {
            "name": "bookingData",
            "in": "body",
            "description": "Booking details",
            "required": false,
            "schema": {
                "$ref": "#/definitions/BookingData"
            }
        }],
        "responses": {
            "201": {
                "description": "Created",
                "schema": {
                    "$ref": "#/definitions/BookingRepresentation"
                }
            },
            "400": {
                "description": "Invalid parameters"
            },
            "404": {
                "description": "Accommodation not found"
            }
        },
        "deprecated": false
    }
}
```

Now we're talking! Much better than the initial version. Let's go generate the client!

### AutoRest

You can install AutoRest with Chocolatey or simply grab a package from NuGet and unpack it somewhere. Then you need to request a Swagger document from your service and save it.
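For example, using curl (the URL reflects the documentation route and local port used elsewhere in this post):

```
curl http://localhost:50960/docs/v1 -o f:\book-fast-swagger.json
```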
Now you're ready to run AutoRest:

```
f:\dev\tools\AutoRest>AutoRest.exe -Namespace BookFast.Client -CodeGenerator CSharp -Modeler Swagger -Input f:\book-fast-swagger.json -PackageName BookFast.Client -AddCredentials true
The Microsoft.Rest.ClientRuntime.2.1.0 nuget package is required to compile the generated code.
Finished generating CSharp code for f:\book-fast-swagger.json.
```

[Here](https://github.com/Azure/autorest/blob/master/Documentation/cli.md) you can find complete documentation for command line parameters. I chose the C# generator but AutoRest also supports Java, Node, Python and Ruby.

In order to build the generated code you also need to add the `Microsoft.Rest.ClientRuntime` NuGet package that brings all the necessary plumbing.

#### Exploring generated client code

AutoRest generated classes for my representations together with an `IBookFastAPI` interface and the corresponding implementation class. All operations are declared as asynchronous and I can also control Json.NET serializer settings. Let's have a look at the POST Booking contract:

```
/// <summary>
/// Book an accommodation
/// </summary>
/// <param name='accommodationId'>
/// Accommodation ID
/// </param>
/// <param name='bookingData'>
/// Booking details
/// </param>
/// <param name='customHeaders'>
/// The headers that will be added to request.
/// </param>
/// <param name='cancellationToken'>
/// The cancellation token.
/// </param>
Task<HttpOperationResponse<BookingRepresentation>> CreateBookingWithHttpMessagesAsync(
    string accommodationId,
    BookingData bookingData = default(BookingData),
    Dictionary<string, List<string>> customHeaders = null,
    CancellationToken cancellationToken = default(CancellationToken));
```

The interface allows me to provide custom headers and cancellation tokens for each operation. Nice! Also notice the XML comments; some of them (summary, API parameters) are coming from the Swagger document. XML comments are also added to generated models.

The implementation handles all the nitty gritty details of constructing the request and handling the response. Note that it respects the response codes that we ensured to be present in our Swagger doc:

```
// sending request is omitted

HttpStatusCode _statusCode = _httpResponse.StatusCode;
cancellationToken.ThrowIfCancellationRequested();
string _responseContent = null;
if ((int)_statusCode != 201 && (int)_statusCode != 400 && (int)_statusCode != 404)
{
    var ex = new HttpOperationException(string.Format("Operation returned an invalid status code '{0}'", _statusCode));
    ex.Request = new HttpRequestMessageWrapper(_httpRequest, _requestContent);
    ex.Response = new HttpResponseMessageWrapper(_httpResponse, _responseContent);
    if (_shouldTrace)
    {
        ServiceClientTracing.Error(_invocationId, ex);
    }

    _httpRequest.Dispose();
    if (_httpResponse != null)
    {
        _httpResponse.Dispose();
    }

    throw ex;
}

// Create Result
var _result = new HttpOperationResponse<BookingRepresentation>();
_result.Request = _httpRequest;
_result.Response = _httpResponse;

// Deserialize Response
if ((int)_statusCode == 201)
{
    _responseContent = await _httpResponse.Content.ReadAsStringAsync().ConfigureAwait(false);
    try
    {
        _result.Body = SafeJsonConvert.DeserializeObject<BookingRepresentation>(_responseContent, this.DeserializationSettings);
    }
    catch (JsonException ex)
    {
        _httpRequest.Dispose();
        if (_httpResponse != null)
        {
            _httpResponse.Dispose();
        }

        throw new SerializationException("Unable to deserialize the response.", _responseContent, ex);
    }
}

if (_shouldTrace)
{
    ServiceClientTracing.Exit(_invocationId, _result);
}

return _result;
```

If the response contains anything besides the expected 201, 400 or 404 it will throw as the service is behaving in an undocumented way. Note that the method returns `HttpOperationResponse<BookingRepresentation>` that may or may not contain the actual payload.
It is your responsibility to check for documented 40x responses.

#### Authentication

Most APIs require authentication of some kind and because we used the `-AddCredentials true` command line option AutoRest generated a special version of the client for us that allows us to provide credentials.

```
var credentials = new TokenCredentials("<bearer token>");

var client = new BookFast.Client.BookFastAPI(new Uri("http://localhost:50960", UriKind.Absolute), credentials);
var result = await client.CreateBookingWithHttpMessagesAsync("12345",
    new BookFast.Client.Models.BookingData
    {
        FromDate = DateTime.Parse("2016-05-01"),
        ToDate = DateTime.Parse("2016-05-08")
    });
```

`Microsoft.Rest.ClientRuntime` provides two variants of credentials that can be passed to the constructor of our client: `TokenCredentials` and `BasicAuthenticationCredentials`. If you use a custom authentication mechanism you can create your own implementation of `ServiceClientCredentials`. Its job is to add necessary details to the request object before it is sent over the wire.
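A minimal sketch of what such an implementation could look like (the API key header is an illustrative assumption, not something the generated client requires):

```
internal class ApiKeyCredentials : ServiceClientCredentials
{
    private readonly string apiKey; // hypothetical custom credential

    public ApiKeyCredentials(string apiKey)
    {
        this.apiKey = apiKey;
    }

    public override Task ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // attach the custom credential to the outgoing request
        request.Headers.Add("X-Api-Key", apiKey);
        return base.ProcessHttpRequestAsync(request, cancellationToken);
    }
}
```

Do you guys still manually write clients for your APIs?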

Protecting your APIs with Azure Active Directory


When building web APIs you inevitably have to decide on your security strategy. When making this important decision you want to go with a solution that is rock solid, scales well and enables modern work flows for users accessing your APIs from variety of devices as well as for other systems and components that may take advantage of integrating with your APIs. Azure Active Directory is a great SAAS offering that hits the spot when considering these factors.

In this post I'm going to demonstrate how you can quickly protect your ASP.NET Core based APIs with Azure AD. I won't go into much detail on AD internals and configuration tweaks to keep this post sane and in control but I'm planning a series of posts to dive deep into these topics.

Creating API application in Azure AD

I'm going to be using my Book Fast API sample playground app and I want to protect it with Bearer tokens issued by Azure AD.

For an application to be recognized and protected by Azure AD it needs to be registered in it as, well, an application. That is true both for your APIs as well as your consuming apps. Let's go to the Active Directory section on the portal. You still get redirected to the classic portal to manage your AD tenants. On the 'Applications' tab you can choose to create a new app that 'your organization is developing'. You need to provide 4 things:

  1. App name, obviously. I'm going to use 'book-fast-api'.
  2. App type. In our case it's 'Web application and/or Web API'.
  3. Sign-on URL. This is not important for API apps.
  4. App ID URI. This is an important setting that uniquely defines your application. It will also be the value of the 'resource' that consumers will request access tokens for. It has to be a valid URI and you normally use your tenant address as part of it. My test tenant is 'devunleashed.onmicrosoft.com' so I set the app ID URI to 'https://devunleashed.onmicrosoft.com/book-fast-api'.

New Azure AD dialog

That's it. We have just created the app that can be accessed by other apps on behalf of their users. This is an important point! Azure AD by default configures apps so that they provide a delegated permission for other apps to access them on behalf of the signed in user.

See that 'Manage manifest' button at the bottom of the portal page of your application? Click it and choose to download the manifest.

"oauth2Permissions": [{"adminConsentDescription": "Allow the application to access book-fast-api on behalf of the signed-in user.","adminConsentDisplayName": "Access book-fast-api","id": "60260462-0895-4c20-91da-2b417a0bd41c","isEnabled": true,"type": "User","userConsentDescription": "Allow the application to access book-fast-api on your behalf.","userConsentDisplayName": "Access book-fast-api","value": "user_impersonation"
}]

The oauth2Permissions collection defines delegated permissions your app provides to other apps. We will get back to assigning this permission to a client application later in this post but for now let's go to Visual Studio and enable Bearer authentication in the ASP.NET Core project containing our APIs.

Enabling Bearer authentication in ASP.NET Core

There are a bunch of authentication middleware packages available for various scenarios and the one we need in our case is Microsoft.AspNet.Authentication.JwtBearer.

"dependencies": {"Microsoft.AspNet.Authentication.JwtBearer": "1.0.0-rc1-final"
}

Looking at the package name you probably have guessed that it understands JSON Web Tokens. In fact, OAuth2 spec doesn't prescribe the format for access tokens.

Access tokens can have different formats, structures, and methods of utilization (e.g., cryptographic properties) based on the resource server security requirements.

Azure AD uses JWT for its access tokens that are obtained from OAuth2 token endpoints and thus this package is exactly what we need.

Once we've added the package we need to configure the authentication middleware.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<AuthenticationOptions>(configuration.GetSection("Authentication:AzureAd"));
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IOptions<AuthenticationOptions> authOptions)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseIISPlatformHandler();
    app.UseJwtBearerAuthentication(options =>
                                   {
                                       options.AutomaticAuthenticate = true;
                                       options.AutomaticChallenge = true;
                                       options.Authority = authOptions.Value.Authority;
                                       options.Audience = authOptions.Value.Audience;
                                   });
    app.UseMvc();
}

AutomaticAuthenticate flag tells the middleware to look for the Bearer token in the headers of incoming requests and, if one is found, validate it. If validation is successful the middleware will populate the current ClaimsPrincipal associated with the request with claims (and potentially roles) obtained from the token. It will also mark the current identity as authenticated.

AutomaticChallenge flag tells the middleware to modify 401 responses that are coming from further middleware (MVC) and add appropriate challenge behavior. In case of Bearer authentication it's about adding the following header to the response:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer

The Authority option defines the tenant URL in Azure AD that issued the token. It consists of two parts: the Azure AD instance URL, in my case 'https://login.microsoftonline.com/', and the tenant ID which is a GUID that you can look up by opening the 'View endpoints' dialog on the portal. Alternatively, you can use a domain based tenant identifier which is normally in the form of '{tenant}.onmicrosoft.com', and Azure AD also allows you to assign custom domains to your tenants. So in my case I could use either 'https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0' or 'https://login.microsoftonline.com/devunleashed.onmicrosoft.com'.

In order to validate the token, JwtBearerMiddleware actually relies on the OpenID Connect metadata endpoints provided by the authority to get details on the signing keys and algorithms that were used to sign the token. Even though I'm trying to stay with bare bones OAuth2 in this post it's worth mentioning that OpenID Connect solves many of the concerns that are not covered (defined) in the OAuth2 spec and the existing middleware takes advantage of it. Azure AD of course fully supports it but this is a topic for another post.

The final important option to set is Audience. When issuing access tokens Azure AD requires the callers to provide a resource name (or intended audience) that they want to access using the token. This intended audience will be included as a claim in the token and will be verified by JwtBearerMiddleware when validating the token. When we created an application for Book Fast API we provided App ID URI (https://devunleashed.onmicrosoft.com/book-fast-api) which we will use as the resource identifier.
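Both Authority and Audience are bound from configuration in the ConfigureServices snippet above. A hypothetical appsettings fragment matching that 'Authentication:AzureAd' section could look like this (the exact property names depend on the AuthenticationOptions class and are illustrative here):

{
    "Authentication": {
        "AzureAd": {
            "Authority": "https://login.microsoftonline.com/devunleashed.onmicrosoft.com/",
            "Audience": "https://devunleashed.onmicrosoft.com/book-fast-api"
        }
    }
}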

That's basically it. The way you enforce authentication on your MVC controllers and/or actions is a good old AuthorizeAttribute that will return 401 if the current principal is not authenticated.
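For example, a minimal sketch of a guarded controller (names and the route are illustrative):

[Authorize]
public class BookingController : Controller
{
    // anonymous requests get a 401 and the Bearer challenge
    [HttpGet("api/bookings")]
    public IActionResult List()
    {
        return Ok(new string[0]);
    }
}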

Handling authentication errors

What should happen when an invalid or expired token has been provided? Ideally the middleware should trigger the same challenge flow as if no token was provided. The middleware allows you to handle authentication failure situations by providing an OnAuthenticationFailed callback method in JwtBearerEvents object which is part of JwtBearerOptions that we have just configured above.

Unfortunately, the RC1 version of Microsoft.AspNet.Authentication.JwtBearer has a bug in the way it handles the decision that we make in OnAuthenticationFailed. No matter if we choose HandleResponse or SkipToNextMiddleware it will try to instantiate a successful AuthenticationResult with no authentication ticket and of course this idea is not going to work. Looking at the dev branch I see there has been some refactoring in the way the authentication events are handled and hopefully the issue has been resolved.

In the meantime I've created a fixed version of the middleware targeting RC1 that allows you to skip to the next middleware if token validation fails which will allow the processing to hit the AuthorizeAttribute and retrigger the automatic challenge on 401:

var jwtBearerOptions = new JwtBearerOptions
                       {
                           AutomaticAuthenticate = true,
                           AutomaticChallenge = true,
                           Authority = authOptions.Value.Authority,
                           Audience = authOptions.Value.Audience,

                           Events = new JwtBearerEvents
                                    {
                                        OnAuthenticationFailed = ctx =>
                                                                 {
                                                                     ctx.SkipToNextMiddleware();
                                                                     return Task.FromResult(0);
                                                                 }
                                    }
                       };
app.UseMiddleware<CustomJwtBearerMiddleware>(jwtBearerOptions);

Alternatively, we could call ctx.HandleResponse() and construct the challenge response ourselves to avoid hitting the MVC middleware. But I prefer my version as it will allow calls with invalid tokens to reach endpoints that don't require authentication and/or authorization. In fact, the ultimate decision on whether the caller should be challenged or not should be made by the authorization filters.

OAuth2 Client Credentials Grant flow

I can't finish this post without demonstrating a client application calling our protected API. The OAuth2 spec defines both interactive and non-interactive flows. Interactive flows are used in scenarios where users give their consent to client applications to access resources on their behalf, while non-interactive flows imply that client applications possess all of the credentials they need to access resources on their own.

In this post I'm going to demonstrate the Client Credentials Grant flow that is used for server-to-server internal calls.

OAuth2 Client Credential Grant

This flow is meant to be used with confidential clients, i.e. clients that are running on the server as opposed to those running on user devices (which are often referred to as 'public clients'). Confidential clients provide their client ID and client secret in the requests for access tokens. The resources they ask tokens for are accessed from their application's context rather than from their user's (resource owner's) context. That makes perfect sense as there are no user credentials involved.

Provisioning a client application in Azure AD

Steps for provisioning a client app are the same as for the API app. The app type is still 'Web application and/or Web API' which indicates that we are creating a confidential client.

On the 'Configure' tab we need to create a client key (secret). Keep it safe as the portal won't display it the next time you get back to the app's page.

Hit 'Save' and let's give it a ride.

Testing Client Credentials Grant flow

First let's hit the API without any token to make sure it's guarded:

GET https://localhost:44361/api/bookings HTTP/1.1
Host: localhost:44361


HTTP/1.1 401 Unauthorized
Content-Length: 0
Server: Kestrel
WWW-Authenticate: Bearer

Let's request a token from Azure AD (don't forget to URL encode your client secret!):

POST https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
Content-Length: 197

resource=https://devunleashed.onmicrosoft.com/book-fast-api&grant_type=client_credentials&client_id=119f1731-3fd4-4c3d-acbc-2455879b0d54&client_secret=<client secret>


HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Length: 1304

{
    "token_type": "Bearer",
    "expires_in": "3599",
    "expires_on": "1461341991",
    "not_before": "1461338091",
    "resource": "https://devunleashed.onmicrosoft.com/book-fast-api",
    "access_token": "<token value>"
}

Note that the Client Credentials Grant doesn't return a refresh token because, well, it's useless in this case: you can always use your client credentials to request a new access token.

Let's call our API with the access token:

GET https://localhost:44361/api/bookings HTTP/1.1
Authorization: Bearer <token value>
Host: localhost:44361


HTTP/1.1 500 Internal Server Error
Content-Length: 0
Server: Kestrel

Well, it failed miserably but trust me, it's not related to the authentication part. The problem is that we are trying to get pending booking requests of a user and the application tries to get a user name from the current principal's claims. It's specifically looking for the claim of type 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name' and it can't find it. And 500 is the correct response code because we apparently screwed up the app logic: user booking requests are expected to be queried under a user context only, not under an application context.

But don't take my word for it. I am actually going to prove to you that authentication succeeded. Here's the debug output:

Microsoft.AspNet.Hosting.Internal.HostingEngine: Information: Request starting HTTP/1.1 GET http://localhost:44361/api/bookings  
Microsoft.AspNet.Authentication.JwtBearer.JwtBearerMiddleware: Information: HttContext.User merged via AutomaticAuthentication from authenticationScheme: Bearer.
Microsoft.AspNet.Authorization.DefaultAuthorizationService: Information: Authorization was successful for user: .
Microsoft.AspNet.Mvc.Controllers.ControllerActionInvoker: Information: Executing action method BookFast.Api.Controllers.BookingController.List with arguments () - ModelState is Valid
...
...
Microsoft.AspNet.Server.Kestrel: Error: An unhandled exception was thrown by the application.
System.Exception: Claim 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name' was not found.

There is no user! It should remind us of the intended use of the Client Credentials Grant. We will try another OAuth2 flow a bit later but for now let's take a break and examine the access token's content to better understand how token validation works.

Access token validation

Remember that Azure AD access tokens are JWTs? As such, they consist of two Base64-encoded JSON parts (header and payload) plus a signature. You can easily decode them, for example, with the Text Wizard tool in Fiddler:

Azure AD access token decoded with Text Wizard

And here's the readable part:

{"typ": "JWT","alg": "RS256","x5t": "MnC_VZcATfM5pOYiJHMba9goEKY","kid": "MnC_VZcATfM5pOYiJHMba9goEKY"
}
{"aud": "https://devunleashed.onmicrosoft.com/book-fast-api","iss": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","iat": 1461338091,"nbf": 1461338091,"exp": 1461341991,"appid": "119f1731-3fd4-4c3d-acbc-2455879b0d54","appidacr": "1","idp": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","oid": "970c6d5c-e200-481c-a134-6d0287f3c406","sub": "970c6d5c-e200-481c-a134-6d0287f3c406","tid": "70005c1f-ea47-488e-8f57-c3543485f1d0","ver": "1.0"
}

The aud claim contains the intended audience that this token was requested for. JwtBearerMiddleware will compare it with the Audience property that we set when enabling it and will reject tokens should they contain a different value for the audience.

Another important claim is iss that represents the issuer STS and it is also verified when validating the token. But what is it compared to? And how does JwtBearerMiddleware validate the token's signature after all?

The middleware we use takes advantage of OpenID Connect discovery to get the data it needs. If you trace/capture HTTP traffic on the API app side with Fiddler you will discover that the API app makes two calls to Azure AD when validating the token. The first call is to the discovery endpoint. Its URL is formed by appending '/.well-known/openid-configuration' to the authority URL:

GET https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/.well-known/openid-configuration HTTP/1.1


HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Content-Length: 1239

{
    "authorization_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/authorize",
    "token_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token",
    "token_endpoint_auth_methods_supported": ["client_secret_post", "private_key_jwt"],
    "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
    "response_modes_supported": ["query", "fragment", "form_post"],
    "subject_types_supported": ["pairwise"],
    "id_token_signing_alg_values_supported": ["RS256"],
    "http_logout_supported": true,
    "response_types_supported": ["code", "id_token", "code id_token", "token id_token", "token"],
    "scopes_supported": ["openid"],
    "issuer": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/",
    "claims_supported": ["sub", "iss", "aud", "exp", "iat", "auth_time", "acr", "amr", "nonce", "email", "given_name", "family_name", "nickname"],
    "microsoft_multi_refresh_token": true,
    "check_session_iframe": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/checksession",
    "end_session_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/logout",
    "userinfo_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/openid/userinfo"
}

Lots of metadata here including the issuer value and the jwks_uri endpoint address to get the keys to validate the token's signature:

GET https://login.microsoftonline.com/common/discovery/keys HTTP/1.1


HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Content-Length: 2932

{
    "keys": [{
        "kty": "RSA",
        "use": "sig",
        "kid": "MnC_VZcATfM5pOYiJHMba9goEKY",
        "x5t": "MnC_VZcATfM5pOYiJHMba9goEKY",
        "n": "vIqz-4-ER_vNWLON9yv8hIYV737JQ6rCl6X...",
        "e": "AQAB",
        "x5c": ["<X.509 Certificate Chain>"]
    },
    {
        "kty": "RSA",
        "use": "sig",
        "kid": "YbRAQRYcE_motWVJKHrwLBbd_9s",
        "x5t": "YbRAQRYcE_motWVJKHrwLBbd_9s",
        "n": "vbcFrj193Gm6zeo5e2_y54Jx49sIgScv-2J...",
        "e": "AQAB",
        "x5c": ["<X.509 Certificate Chain>"]
    }]
}

Token signing keys are published according to the JSON Web Key spec. Using the Key ID and X.509 certificate thumbprint values from the token's header (kid and x5t parameters respectively) the middleware is able to find the appropriate public key in the obtained collection of keys to verify the signature.
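Conceptually, the lookup boils down to matching the key ID from the token header against the published key set; a rough sketch (illustrative code, not the actual middleware internals):

internal static class SigningKeyResolver
{
    // pick the public key from the jwks_uri document whose ID matches
    // the 'kid' value from the token header
    public static JsonWebKey Resolve(IEnumerable<JsonWebKey> keys, string kid)
    {
        return keys.FirstOrDefault(key => key.Kid == kid);
    }
}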

OAuth2 Resource Owner Password Credentials Grant flow

Let's fix our 500 issue with Book Fast API and try to get a list of booking requests under a user context. OAuth2 and OpenID Connect provide interactive flows that include secure gathering of user credentials but to keep this post short I'm going to demonstrate a simpler flow called the Resource Owner Password Credentials Grant.

When developing new applications you should not use this flow as it requires your client applications to gather user credentials. This, in turn, lays the groundwork for all kinds of bad practices like, for instance, a temptation to preserve the credentials in usable form to be able to make internal calls on behalf of users. It also puts the burden of maintaining user credentials (password resets, two factor auth, etc.) on your shoulders.

This flow can be used, though, in legacy applications that are being re-architected (such as adopting Azure AD and delegated access to services) as an intermediate solution.

OAuth2 Resource Owner Credentials Grant

Ok, back to the 'Configure' page of the client app! We need to give it a delegated permission to call Book Fast API. Use the 'Add application' button to find and add 'book-fast-api' to the list of apps and then select the delegated permission.

Giving the client a delegated permission to access book-fast-api

Note that the 'Access book-fast-api' permission is coming from the oauth2Permissions collection that we saw in the API's app manifest earlier.

If you do this under your admin account you essentially provide an admin consent for the client app to call the API app on behalf of any user of the tenant. It fits the current flow perfectly as there is no way for users to provide their consent to Active Directory as they don't go to its login pages.

Requesting a token now requires user credentials and the grant type of password:

POST https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
Content-Length: 260

resource=https://devunleashed.onmicrosoft.com/book-fast-api&grant_type=password&client_id=119f1731-3fd4-4c3d-acbc-2455879b0d54&client_secret=<client secret>&username=newfella@devunleashed.onmicrosoft.com&password=<user password>


HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Length: 2204

{
    "token_type": "Bearer",
    "scope": "user_impersonation",
    "expires_in": "3599",
    "expires_on": "1461602199",
    "not_before": "1461598299",
    "resource": "https://devunleashed.onmicrosoft.com/book-fast-api",
    "access_token": "<access token value>",
    "refresh_token": "<refresh token value>"
}

Same as other delegated flows, Resource Owner Password Grant also allows for an optional refresh token to be returned from the token endpoint. This token can be used by the client to ask for new access tokens without bothering the user to re-enter her credentials.

Let's have a quick glance at the access token:

{"aud": "https://devunleashed.onmicrosoft.com/book-fast-api","iss": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","iat": 1461598299,"nbf": 1461598299,"exp": 1461602199,"acr": "1","amr": ["pwd"],"appid": "119f1731-3fd4-4c3d-acbc-2455879b0d54","appidacr": "1","ipaddr": "86.57.158.18","name": "New Fella","oid": "3ea83d38-dad6-4576-9701-9f0e153c32b5","scp": "user_impersonation","sub": "Qh3Yqwk86aMN8Oos_xCEDZcV2cfGi7PTl-5uSSgF4uE","tid": "70005c1f-ea47-488e-8f57-c3543485f1d0","unique_name": "newfella@devunleashed.onmicrosoft.com","upn": "newfella@devunleashed.onmicrosoft.com","ver": "1.0"
}

Now it contains claims mentioning my 'newfella@devunleashed.onmicrosoft.com' user and something tells me we're going to have better luck calling the Book Fast API now!

GET https://localhost:44361/api/bookings HTTP/1.1
Authorization: Bearer <access token>


HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Server: Kestrel
Content-Length: 663

[{
    "Id": "7e63dd0c-0910-492f-a34b-a05d995455ce",
    "AccommodationId": "2c998dc6-1b90-4ba1-9885-5169e5c83c79",
    "AccommodationName": "Queen's dream",
    "FacilityId": "c08ffa8d-87fa-4315-8a54-0e744b33e7f7",
    "FacilityName": "First facility",
    "StreetAddress": "11, Test str.",
    "FromDate": "2016-06-10T00:00:00+03:00",
    "ToDate": "2016-06-18T00:00:00+03:00"
},
{
    "Id": "4e7f165f-a1d2-48ce-9b14-d2d8d5c04750",
    "AccommodationId": "2c998dc6-1b90-4ba1-9885-5169e5c83c79",
    "AccommodationName": "Queen's dream",
    "FacilityId": "c08ffa8d-87fa-4315-8a54-0e744b33e7f7",
    "FacilityName": "First facility",
    "StreetAddress": "11, Test str.",
    "FromDate": "2016-05-22T00:00:00+03:00",
    "ToDate": "2016-05-30T00:00:00+03:00"
}]

Event correlation in Application Insights


Application Insights uses several contextual properties for event correlation. The most generic one is Operation Id that allows us to analyze a series of events and traces as part of a single operation. Depending on the application type there can be additional correlation properties. For example, if we're talking about web requests these are also Session Id and User Id that allow us to group events and traces by the security and session context in which they occurred.

In a lot of applications that incorporate various services (micro or otherwise) it is often important to correlate events that happen across these services. It gives us a business workflow view of the various events that happen in the application, its components and services. It requires us to implement operation or activity Id management and propagation. To demonstrate this I'm going to show you how to propagate the Operation Id of a web request that's made to the FixItApp to a background task running as a WebJob that is triggered through a storage queue.

Demo project

FixItApp has an option to persist created tasks asynchronously using a background process.

There is a setting in web.config called 'UseQueues' that needs to be set to true. If you're running in an Azure Web App you can set this property on the portal instead. There is also a continuous WebJob that's triggered by the storage queue where the application sends messages about created tasks. You can deploy the WebJob to Azure or run it locally.

Run the application and create a FixIt task. Make sure to add a picture to upload with it. Then open up the Application Insights Search blade and look for the POST web request event. On the event properties blade click the three dots button to show all properties.

Web request properties

Among others you can see correlation properties such as Operation Id, Session Id and User Id. Right from this blade you can search for telemetry events that are associated with these properties. Right-click the Operation Id property and select Search.

Search by Operation Id (no database call trace message)

There are the request event itself, our custom Create event from LoggingTaskService, a custom trace for the image upload call and two dependency calls to Azure blob storage. This is the same telemetry you would get if you chose 'Show all telemetry for this request' link on the POST request overview blade that's shown above.

The actual database call is made by the WebJob and is not associated with the request which is technically correct as it happened asynchronously in a separate process. But what if we want to correlate it with the original request that triggered the operation?

Propagating Operation Id to the WebJob

Before we can propagate the Operation Id from the web application to the background process we need to understand how it gets managed. When you create an instance of TelemetryClient its context is empty; it gets populated when it's time to send data to Application Insights. This is done by telemetry initializers. There are some default ones and you can add your own.

If you open up ApplicationInsights.config in the web application you will see that there is a default Operation Id initializer called OperationIdTelemetryInitializer from Microsoft.ApplicationInsights.Web namespace. It sets ITelemetry.Context.Operation.Id property of our TelemetryClient instances with the Id from the RequestTelemetry object. RequestTelemetry is a special type of telemetry object that's initialized when the request event is captured by Application Insights.

So for any TelemetryClient instance that we create in scope of a request we're going to be using RequestTelemetry.Id property value as an Operation Id. If we want to propagate this value to other processes we need to take it from RequestTelemetry.Id, and not from TelemetryClient.Context.Operation.Id as it doesn't get initialized immediately.

We need to grab the value as we start processing the request and save it somewhere so we could use it when making a request to a remote service. As RequestTelemetry is persisted in the Items collection of HttpContext we can write an action filter like this:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class AiCorrelationAttribute : FilterAttribute, IActionFilter
{
    private const string RequestTelemetryKey = "Microsoft.ApplicationInsights.RequestTelemetry";

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (filterContext.HttpContext.Items.Contains(RequestTelemetryKey))
        {
            var requestTelemetry = filterContext.HttpContext.Items[RequestTelemetryKey] as RequestTelemetry;
            if (requestTelemetry == null)
                return;

            CorrelationManager.SetOperationId(requestTelemetry.Id);
        }
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
    }
}
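Registering the filter is a one-liner; a minimal sketch assuming the standard ASP.NET MVC FilterConfig:

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // fire the correlation filter for every controller action
        filters.Add(new AiCorrelationAttribute());
    }
}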

With the filter registered in the global filter collection it gets fired upon each request. The CorrelationManager is a convenient component to persist the Operation Id in the logical CallContext:

namespace MyFixIt.Common
{
    public static class CorrelationManager
    {
        private const string OperationIdKey = "OperationId";

        public static void SetOperationId(string operationId)
        {
            CallContext.LogicalSetData(OperationIdKey, operationId);
        }

        public static string GetOperationId()
        {
            var id = CallContext.LogicalGetData(OperationIdKey) as string;
            return id ?? Guid.NewGuid().ToString();
        }
    }
}

By storing it in the logical CallContext we make it available from anywhere and we're going to use it when sending a queue message:

public async Task SendMessageAsync(FixItTask fixIt)
{
    CloudQueue queue = queueClient.GetQueueReference(FixitQueueName);
    await queue.CreateIfNotExistsAsync();

    var fixitJson = JsonConvert.SerializeObject(new FixItTaskMessage
                    {
                        Task = fixIt,
                        OperationId = CorrelationManager.GetOperationId()
                    });
    CloudQueueMessage message = new CloudQueueMessage(fixitJson);

    await queue.AddMessageAsync(message);
}

Because we use a queue to communicate with the WebJob we need to pass the Operation Id as part of the message. If we were talking to a remote web service we could use a custom header (HTTP or SOAP depending on the type of the service).
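For the HTTP case, a hypothetical DelegatingHandler could stamp the Operation Id on outgoing calls (the header name is an illustrative assumption, not an established convention):

public class CorrelationHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // propagate the current Operation Id to the downstream service
        request.Headers.Add("X-Operation-Id", CorrelationManager.GetOperationId());
        return base.SendAsync(request, cancellationToken);
    }
}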

The WebJob is a console application and we haven't added any special Application Insights components to it except for Microsoft.ApplicationInsights package so we could do custom tracing with TelemetryClient. Thus we need to first get the Operation Id from the message as we start processing it and make sure to initialize TelemetryClient instance(s) with it.

The first part is accomplished right in the job method:

public class TaskJob
{
    private readonly IFixItTaskRepository repository;

    public TaskJob(IFixItTaskRepository repository)
    {
        this.repository = repository;
    }

    public async Task ProcessQueueMessage([QueueTrigger("fixits")] FixItTaskMessage message,
        TextWriter log)
    {
        CorrelationManager.SetOperationId(message.OperationId);

        await repository.CreateAsync(message.Task);

        log.WriteLine("Created task {0}", message.Task.Title);
    }
}

We use the CorrelationManager again to persist the Operation Id in the logical CallContext.

Then we need to add a custom telemetry initializer to TelemetryConfiguration so that we could pass the Operation Id to TelemetryClient instances:

private static void InitializeAppInsights()
{
    TelemetryConfiguration.Active.InstrumentationKey =
        ConfigurationManager.AppSettings["ApplicationInsights.InstrumentationKey"];
    TelemetryConfiguration.Active.TelemetryInitializers.Add(new CorrelatingTelemetryInitializer());
}

The telemetry initializer looks as simple as this:

internal class CorrelatingTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Operation.Id = CorrelationManager.GetOperationId();
    }
}

Testing Operation Id propagation

We're ready to test our solution. Run the same task creation operation as before and check out the POST request on the portal.

Search by Operation Id (this time it shows the database call trace message)

This time the database call that was done by the WebJob is listed as part of request events. What's really cool is that when we see this single trace we can navigate to a web request that happened on the web server that eventually triggered this operation.

Database call trace event with the link to web request

Now, it needs to be understood that sometimes this may not be desirable. We're seeing the event as part of a request while technically it is not and may well have happened on another machine. Whether you want to propagate Operation Id to background tasks or not will depend on your particular scenario. You may choose a custom property instead of Operation Id that you can set when calling Track* methods on TelemetryClient. You will be able to search by the custom property on the portal.

In background processes such as WebJobs you can use a custom telemetry initializer to at least associate all of the events that happen as part of the background operation and you can propagate your custom correlation property to make the background operation a part of a larger activity.

Azure Web Apps Continuous Deployment


Azure Web Apps provide a continuous deployment feature that allows you to quickly set up a continuous build and deployment process from your code repository. It implements a pull model where your repository is cloned to your web app, changes are pulled and the application is built when the web app gets notified by your source code hosting service, and then the deployed artifacts get copied to the wwwroot folder. This is different from the more traditional model where you set up a build server that takes care of pulling sources, building them and preparing a deployment package that gets uploaded to the hosting environment.

The pull model is simpler as you get continuous deployment right from your code repository without having to worry about setting up a separate build server somewhere. It works because build and other tools are preinstalled on the VMs running web apps. The infrastructure that powers your web apps, including the continuous deployment process, is called Kudu. In some cases the process works seamlessly as it supports different types of apps and stacks. But often you need to tweak things here and there and thus you need to have a general understanding of how the process works.

Setup

Once you create a web app you navigate to its Settings blade and locate the Continuous Deployment option. You have a bunch of supported source options, from a local Git repo to hosted Git or Mercurial repos. You can also pull from OneDrive and Dropbox folders so you can implement a hybrid model where you build and prepare packages, put them on OneDrive or Dropbox and then have Azure pull those packages and extract them into wwwroot. Check out this post on how you could deploy a Java based web application from Dropbox using a pre-built WAR file.

Azure App Services Continuous Deployment - Source Selection

For now let’s pick Bitbucket and set up a repository and branch that we want to pull source code from. Azure will set up a clone repository in your web app that you can check out either through Kudu console or by connecting to your web app over FTP and navigating to %home%/site/repository:

Repository folder

Once cloned, the initial deployment will be triggered. Subsequent deployments will be triggered when new commits are added to the branch that you specified when you paired the web app with the repository. The notification mechanism may vary. For example, as of the time of this writing integration with Bitbucket is implemented through its POST services but in the future it will be transformed to Web Hooks.

Deployment logs and auto generated scripts can be found in the %home%/site/deployments folder. The deployment log will usually contain messages about generating the deployment script (or executing a custom one for your repo), the output of the deployment script and the KuduSync process.

Deployment log

Deployment script

If no custom build and deployment script is provided Kudu takes care of generating one automatically based on the type of application that it detects from your code repo. Generation is done with Azure cross platform CLI tool (azure-xplat-cli). From the log file shown above you can see that Kudu detected an ASP.NET web application and ran the following command to generate the script:

azure -y --no-dot-deployment -r "D:\home\site\repository" -o "D:\home\site\deployments\tools"
    --aspWAP "D:\home\site\repository\TestWebApp\TestWebApp.csproj"
    --solutionFile "D:\home\site\repository\TestWebApp.sln"

Azure CLI supports ASP.NET application and web site projects, ASP.NET 5 projects, Node, Python and PHP applications as well as .NET console applications which can be used to create web jobs.

The generated deployment script can be found in %home%/site/deployments/tools folder together with a cache key file containing the command that was used to generate the script. As I mentioned earlier in some cases the generated script will be sufficient but often you may need to provide your own.

To make Kudu use your custom deployment script you need to add a file called .deployment to the root of your repo containing a line that specifies what script to run:

[config]
command = deploy.cmd

This instructs Kudu to skip generation of the deployment script and run deploy.cmd that is also located in the root of the repository.

ASP.NET web application projects

You get pretty good support out of the box for this type of project. If you look at the generated script you will see 3 distinct actions:

  1. Restore NuGet packages for the solution
  2. Run MSBuild to build and package the application in a temporary folder
  3. Run KuduSync to move the package to wwwroot
:: Deployment
:: ----------

echo Handling .NET Web Application deployment.

:: 1. Restore NuGet packages
IF /I "WebApplication2.sln" NEQ "" (
  call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\WebApplication2.sln"
  IF !ERRORLEVEL! NEQ 0 goto error
)

:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebApplication\WebApplication2.csproj" /nologo /verbosity:m
    /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder
    /p:_PackageTempDir="%DEPLOYMENT_TEMP%";AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release
    /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
) ELSE (
  call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebApplication\WebApplication2.csproj" /nologo /verbosity:m
    /t:Build /p:AutoParameterizationWebConfigConnectionStrings=false;Configuration=Release
    /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\" %SCM_BUILD_ARGS%
)

IF !ERRORLEVEL! NEQ 0 goto error

:: 3. KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%"
    -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
  IF !ERRORLEVEL! NEQ 0 goto error
)

In-place deployment is not used by default for this type of project so the application is built and packaged to %DEPLOYMENT_TEMP% directory.

KuduSync

What’s KuduSync? This is a Node tool that syncs files between directories. It was created specifically to cover the needs of app services (originally web sites) but in fact it can be used anywhere. To install it run the following command (given that Node is already present on your machine):

npm install kudusync -g

When run with the -g flag Node packages and apps get installed “globally” in your user’s profile. On Windows they get installed to the c:\Users\{userName}\AppData\Roaming\npm\ directory. Then you can run KuduSync with a command similar to the one from the deployment script:

kudusync -f "d:\dev\temp\WebApplication2" -t "d:\dev\temp\target" -n "d:\dev\temp\manifest.txt"

This command copies all files and directories from d:\dev\temp\WebApplication2 folder to d:\dev\temp\target folder. The deployment script shown above copies build artifacts from a temporary folder (%DEPLOYMENT_TEMP%) to wwwroot (%DEPLOYMENT_TARGET%).

Notice the required -n parameter that specifies a new manifest file name. A manifest is a text file listing all of the files with their paths that have been copied during the current run. Now if you look at the deployment script there is also an optional -p parameter that specifies a path to a previous manifest file. With the previous manifest (or snapshot) file KuduSync is able to detect what files need to be removed from the target directory. KuduSync also compares existing files and copies modified and new ones. Manifest files of actual deployments can be found in %home%/site/deployments/{deploymentId} folders together with deployment logs.

There is also an optional -i parameter that allows you to specify files that should be excluded from the sync, as shown in the example below.
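Putting it together, a subsequent sync run could look like this (paths are illustrative):

kudusync -v 50 -f "d:\dev\temp\WebApplication2" -t "d:\dev\temp\target" -n "d:\dev\temp\manifest2.txt" -p "d:\dev\temp\manifest.txt" -i ".git;.hg;.deployment;deploy.cmd"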

What about web jobs?

Let’s say we want to add a .NET console app as a web job and we want it to be built and moved to an appropriate directory under App_Data depending on its type. Using Visual Studio we can associate the web job project with the web application by right-clicking on the web application project and selecting “Add/Existing project as Azure WebJob” command. VS tools install Microsoft.Web.WebJobs.Publish package both to the web application as well as to the selected web job project. They also add webjobs-list.json file referencing the web job project to the web project and webjob-publish-settings.json file describing the job type and schedule to the web job console project. These files are important for MSBuild targets from Microsoft.Web.WebJobs.Publish package that get added to your project files.
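For illustration, here's what a hypothetical pair of these files might contain (project names and paths are examples). webjobs-list.json in the web project:

{
    "$schema": "http://schemastore.org/schemas/json/webjobs-list.json",
    "WebJobs": [{
        "filePath": "../MyWebJob/MyWebJob.csproj"
    }]
}

And webjob-publish-settings.json in the web job project:

{
    "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
    "webJobName": "MyWebJob",
    "runMode": "OnDemand"
}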

These steps alone are enough for the web application project type to be built and packaged correctly. As a result of running MSBuild your OnDemand and Scheduled web jobs are placed in App_Data/jobs/triggered folder and continuous web jobs are placed in App_Data/jobs/continuous folder.

However, if your web jobs are supposed to run on schedule there is a problem. When deployed from Visual Studio, schedules for these jobs are created in the Azure Scheduler service for you. When deployed from a build machine, you need to write a script to do the same, and the script needs to execute within your subscription's security context.

The Azure team realized the difficulties this caused for the continuous deployment process and built another scheduling mechanism into Kudu. The mechanism is based on cron expressions and makes defining a schedule for your jobs as easy as adding a settings.job file with an appropriate cron expression to the web job project and setting the build action to copy the file to the output folder (see the sample below). Kudu uses the NCrontab package that supports 6-part expressions (seconds, minutes, hours, days, months and days of week). You can find more details about cron expression support in web jobs in Amit Apple's blog post.
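
A minimal settings.job could look like this (an illustrative schedule; this one fires every 15 minutes):

{
  "schedule": "0 */15 * * * *"
}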

ASP.NET web site projects

This type of ASP.NET project has been around for a while and although the web application project type has gained a lot more popularity especially since the inception of MVC web sites are still used and of course supported by Azure web apps. Even this blog currently runs on a customized version of MiniBlog which is essentially an ASP.NET Web Pages application.

Anyway, let's have a look at the deployment script that Azure CLI produces for this type of project:

:: Deployment
:: ----------

echo Handling .NET Web Site deployment.

:: 1. Build to the repository path
call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\Solution1.sln" /verbosity:m /nologo %SCM_BUILD_ARGS%
IF !ERRORLEVEL! NEQ 0 goto error

:: 2. KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_SOURCE%\TestWebSite" -t "%DEPLOYMENT_TARGET%"
    -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
  IF !ERRORLEVEL! NEQ 0 goto error
)

There are just 2 steps:

  1. MSBuild of the solution
  2. KuduSync from the repository folder to wwwroot

This will be enough in simple cases when you don’t need NuGet package restore and you don’t need to deploy web jobs.

But let’s say we have a solution with an ASP.NET web site project, some class library projects that are referenced by the web site project and a console project for a web job.

The web job project is not referenced by the web site project as was the case with the ASP.NET web application project type. In fact, there is no way to associate the web job project with the web site: the mechanism based on webjobs-list.json doesn't work here.

Here’s what you need to do. First, install Microsoft.Web.WebJobs.Publish package to the web job project. The package will add the necessary build targets and webjob-publish-settings.json file. In Visual Studio it can be easily done by right-clicking on the web job project and selecting “Publish as Azure WebJob” command. Fill out details on the presented form (web job type) but do not actually publish the project. As a result you will have webjob-publish-settings.json file added to your project.
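
The file itself is small. For an on-demand job it may look like this (a sketch; the job name is hypothetical):

{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "WebJob1",
  "runMode": "OnDemand"
}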

Then you need a custom deployment script. On your development machine install Azure CLI:

npm install -g azure-cli

And generate the default script for ASP.NET web site project type. Let’s say your current directory is your solution directory and your web site project is called TestWebSite and your solution is called Solution1:

azure site deploymentscript --aspWebSite --sitePath TestWebSite --solutionFile Solution1.sln

This will create both deploy.cmd and .deployment files in your solution folder. These files need to be committed to source control and they will be used by Kudu instead of the automatically generated script.

The generated deploy.cmd gives you a basic structure and ceremony code but you need to update the Deployment part as follows:

:: Deployment
:: ----------

echo Handling .NET Web Site deployment.

IF /I "Solution1.sln" NEQ "" (
  call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\Solution1.sln"
  IF !ERRORLEVEL! NEQ 0 goto error

  call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\TestWebSite\packages.config" -SolutionDirectory "%DEPLOYMENT_SOURCE%"
  IF !ERRORLEVEL! NEQ 0 goto error
)

:: 1. Build to the repository path
call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\Solution1.sln" /p:Configuration=Release /verbosity:m /nologo %SCM_BUILD_ARGS%
IF !ERRORLEVEL! NEQ 0 goto error

call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\WebJob1\WebJob1.csproj" /nologo /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder
    /p:_PackageTempDir="%DEPLOYMENT_TEMP%";Configuration=Release /p:SolutionDir="%DEPLOYMENT_SOURCE%\.\\"

:: 2. Package
IF EXIST "%DEPLOYMENT_TEMP%\bin" rd /s /q "%DEPLOYMENT_TEMP%\bin"
xcopy "%DEPLOYMENT_SOURCE%\PrecompiledWeb\localhost_52030" "%DEPLOYMENT_TEMP%" /E

:: 3. KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%"
    -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
  IF !ERRORLEVEL! NEQ 0 goto error
)

IF EXIST "%DEPLOYMENT_TEMP%" rd /s /q "%DEPLOYMENT_TEMP%"

There are a few important things to note here:

  • In addition to the solution-wide package restore we have also added a restore command for the web site itself. It is needed because the solution-wide restore doesn't restore packages of the web site! Note that when you exclude NuGet packages and .dll files in the web site's Bin directory from source control, you need to make sure that .refresh files are not excluded, because this is how web sites reference assemblies from NuGet packages.
  • We added 2 MSBuild commands. The first one precompiles the web site together with all class libraries that it references. The output is placed in the PrecompiledWeb/localhost_52030 directory that is configured in the web site project's settings. The second command builds and packages the web job using the pipelinePreDeployCopyAllFilesToOneFolder target. As a result your web job is placed in the correct folder under %DEPLOYMENT_TEMP%/App_Data.
  • Then we move the precompiled web site to %DEPLOYMENT_TEMP% where the web job already is. We need to make sure to remove the %DEPLOYMENT_TEMP%/bin folder first as it contains the output of the web job project build; we don't need it anymore because the web job is already in App_Data.
  • We then KuduSync the whole package to wwwroot.

Stand-alone web jobs

In order to be able to scale web apps and web jobs independently or to prevent resource starvation you may want to deploy your web jobs into a separate web app. This scenario is supported for continuous deployment too but you’re going to need to create a custom deployment script as well.

Azure CLI does support console apps, but you need to update the generated deployment script:

:: Deployment
:: ----------

echo Handling .NET Console Application deployment.

:: 1. Restore NuGet packages
IF /I "TestWebJobs\TestWebJobs.sln" NEQ "" (
  call :ExecuteCmd nuget restore "%DEPLOYMENT_SOURCE%\TestWebJobs\TestWebJobs.sln"
  IF !ERRORLEVEL! NEQ 0 goto error
)

:: 2. Build to the temporary path
call :ExecuteCmd "%MSBUILD_PATH%" "%DEPLOYMENT_SOURCE%\TestWebJobs\WebJob1\WebJob1.csproj" /nologo /verbosity:m
    /t:Build /p:Configuration=Release;OutputPath="%DEPLOYMENT_TEMP%\app_data\jobs\continuous\deployedJob"
    /p:SolutionDir="%DEPLOYMENT_SOURCE%\TestWebJobs\\" %SCM_BUILD_ARGS%
IF !ERRORLEVEL! NEQ 0 goto error

:: 3. Run web job deploy script
IF DEFINED WEBJOBS_DEPLOY_CMD (
  call :ExecuteCmd "%WEBJOBS_DEPLOY_CMD%"
)

:: 4. KuduSync
call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_TEMP%" -t "%DEPLOYMENT_TARGET%"
    -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
IF !ERRORLEVEL! NEQ 0 goto error

As you can see the problem is in the hard coded path %DEPLOYMENT_TEMP%\app_data\jobs\continuous\deployedJob that doesn’t take into account your job’s name and type.

There are two options here. You can either update the path with the correct job name and type, or you can take advantage of the build targets from Microsoft.Web.WebJobs.Publish, similar to how it was done with the web site above. Just remember to add settings.job files with cron expressions to your scheduled jobs!

Implementing Service Bus SAS keys rotation


Shared Access Signature (SAS) authentication provides a simple and flexible option to authenticate requests to Service Bus. You can define access rules on the entire namespace as well as individual entities such as queues, relays, topics and Event Hubs. While this is also possible with ACS authentication, what sets SAS option apart is the ability to grant access to Service Bus entities without giving out keys. This is achieved by issuing SAS tokens (or signatures, although the actual signatures are just part of them) that are bound to particular authorization policies and have a finite lifetime.

In addition to managing SAS token expiration a common requirement is the ability to revoke issued tokens to prevent further undesired access to Service Bus entities and make consumers undergo a procedure of requesting new tokens.

Service Bus, consumer and a SAS token service

SAS tokens include a signature which is a keyed hash (HMAC-SHA256) of the resource URL and the expiration period. By changing both primary and secondary keys of the authorization policy that is used for issued tokens we effectively invalidate these tokens.
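
To illustrate, here's a minimal sketch of how such a token can be constructed (this is roughly what SharedAccessSignatureTokenProvider does for you; the method and its parameters are mine):

// requires System, System.Net, System.Security.Cryptography and System.Text
private static string CreateSasToken(string resourceUri, string keyName, string key, TimeSpan ttl)
{
    // expiration is expressed in seconds since the Unix epoch
    var expiry = Convert.ToString((long)DateTime.UtcNow.Add(ttl).Subtract(new DateTime(1970, 1, 1)).TotalSeconds);
    var stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;

    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
    {
        var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

        // SharedAccessSignature sr={resource}&sig={signature}&se={expiry}&skn={policyName}
        return string.Format("SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
            WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature), expiry, keyName);
    }
}

Once the policy keys are rotated twice, no key that can reproduce the sig part remains and the token is rejected.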

It is also recommended to rotate SAS keys on a regular basis so that keys that have been compromised cannot be used to access Service Bus indefinitely. Primary and secondary keys allow us to implement rotation without affecting well-behaving consumers. While the key that was used to generate a signature is present in either the primary or the secondary position, the token will be successfully validated by Service Bus. The recommended approach is to generate tokens with the primary key and, during rotation, replace the secondary key with the old primary key and assign a newly generated key to the primary key. This allows tokens signed with the old primary key to keep working if they haven't expired yet.

How do token expiration and keys rotation periods correlate?

It turns out the expiration period should not exceed the rotation period; otherwise there is a chance for a token to span more than 2 rotation periods, after which both keys will have been changed.

SAS key rotation periods and token lifetimes

Consumers should request new tokens before their existing ones expire to ensure uninterrupted access to Service Bus.

Rotating keys

Let's implement a simple proof of concept. We're going to define separate Read, Write and Manage authorization policies on a Service Bus queue:

Authorization policies of a Service Bus queue

Our POC will contain a token service similar to the one shown above that will be issuing separate SAS tokens for read and write operations against a Service Bus queue:

[RoutePrefix("api")]
public class TokenController : ApiController
{
    private readonly ITokenService tokenService;

    public TokenController(ITokenService tokenService)
    {
        this.tokenService = tokenService;
    }

    [Route("readtoken")]
    public async Task<Token> GetReadToken()
    {
        return new Token { SharedAccessSignature = await tokenService.GetReadSharedAccessSignature() };
    }

    [Route("writetoken")]
    public async Task<Token> GetWriteToken()
    {
        return new Token { SharedAccessSignature = await tokenService.GetWriteSharedAccessSignature() };
    }
}

The service uses a connection string of the Manage policy to get queue description and locate a Read or Write authorization rule.

Service Bus queue authorization policy's connection strings

It will then use the rule's primary key to create a SAS token using SharedAccessSignatureTokenProvider.GetSharedAccessSignature method.

internal class TokenService : ITokenService
{
    private readonly IConfiguration configuration;

    public TokenService(IConfiguration configuration)
    {
        this.configuration = configuration;
    }

    public Task<string> GetReadSharedAccessSignature()
    {
        var ruleName = configuration.Find("ReadAuthorizationRuleName");
        return GetSharedAccessSignature(ruleName);
    }

    public Task<string> GetWriteSharedAccessSignature()
    {
        var ruleName = configuration.Find("WriteAuthorizationRuleName");
        return GetSharedAccessSignature(ruleName);
    }

    private async Task<string> GetSharedAccessSignature(string ruleName)
    {
        var queueName = configuration.Find("QueueName");

        var manager = NamespaceManager.CreateFromConnectionString(configuration.Find("ServiceBusConnectionString"));
        var description = await manager.GetQueueAsync(queueName);

        SharedAccessAuthorizationRule rule;
        if (!description.Authorization.TryGetSharedAccessAuthorizationRule(ruleName, out rule))
            throw new Exception($"Authorization rule {ruleName} was not found");

        var address = ServiceBusEnvironment.CreateServiceUri("sb", configuration.Find("Namespace"), string.Empty);
        var queueAddress = address + queueName;

        return SharedAccessSignatureTokenProvider.GetSharedAccessSignature(ruleName, rule.PrimaryKey, queueAddress,
            TimeSpan.FromSeconds(int.Parse(configuration.Find("SignatureExpiration"))));
    }
}

The POC token service doesn't require any authentication; in the real world, of course, you need to control access to it.

Our POC will also contain a rotation routine implemented as a scheduled web job that rotates the keys of both Read and Write rules on a configurable interval:

[NoAutomaticTrigger]
public static void RegenerateKey(TextWriter log)
{
    var manager = NamespaceManager.CreateFromConnectionString(ConfigurationManager.AppSettings["ServiceBusConnectionString"]);
    var description = manager.GetQueue(ConfigurationManager.AppSettings["QueueName"]);

    RegenerateKey(description, ConfigurationManager.AppSettings["ReadAuthorizationRuleName"], log);
    RegenerateKey(description, ConfigurationManager.AppSettings["WriteAuthorizationRuleName"], log);

    manager.UpdateQueue(description);
}

private static void RegenerateKey(QueueDescription description, string ruleName, TextWriter log)
{
    SharedAccessAuthorizationRule rule;
    if (!description.Authorization.TryGetSharedAccessAuthorizationRule(ruleName, out rule))
        throw new Exception($"Authorization rule {ruleName} was not found");

    rule.SecondaryKey = rule.PrimaryKey;
    rule.PrimaryKey = SharedAccessAuthorizationRule.GenerateRandomKey();

    log.WriteLine($"Authorization rule: {ruleName}\nPrimary key: {rule.PrimaryKey}\nSecondary key: {rule.SecondaryKey}");
}

Let's create a console sender application that will request SAS tokens from the token service and use them to (well you guessed it) send messages to the queue:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Press Ctrl+C to exit.");
        SendMessages().Wait();
    }

    private static async Task SendMessages()
    {
        var client = await GetQueueClientAsync();
        while (true)
        {
            try
            {
                var message = new BrokeredMessage(Guid.NewGuid());
                await client.SendAsync(message);

                Console.WriteLine("{0} Sent {1}", DateTime.Now, message.GetBody<Guid>());
            }
            catch(UnauthorizedAccessException e)
            {
                Console.WriteLine(e.Message);
                client = await GetQueueClientAsync();
            }

            await Task.Delay(TimeSpan.FromSeconds(8));
        }
    }

    private static async Task<QueueClient> GetQueueClientAsync()
    {
        var sharedAccessSignature = await GetTokenAsync();

        var address = ServiceBusEnvironment
            .CreateServiceUri("sb", ConfigurationManager.AppSettings["Namespace"], string.Empty);
        var messagingFactory = MessagingFactory
            .Create(address, TokenProvider.CreateSharedAccessSignatureTokenProvider(sharedAccessSignature));
        return messagingFactory.CreateQueueClient(ConfigurationManager.AppSettings["QueueName"]);
    }

    private static async Task<string> GetTokenAsync()
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetStringAsync(ConfigurationManager.AppSettings["WriteTokenUrl"]);
        var jObject = JObject.Parse(response);
        return jObject.GetValue("SharedAccessSignature").ToString();
    }
}

We use MessagingFactory to construct a QueueClient instance as it has an overload accepting a token provider. The sender will keep sending messages until it gets an UnauthorizedAccessException, which could be due to an expired token or due to rotated keys in the Write policy.

Sender output

You can actually differentiate these two situations. When a SAS token expires you get an error like:

40105: Malformed authorization token. TrackingId:b74dd921-eada-421e-8567-e5265effcbc9_G11,TimeStamp:10/21/2015 4:03:44 PM

When a signature is no longer accepted the error reads:

40103: Invalid authorization token signature. TrackingId:b577b054-18ff-4681-9f44-5b0b33b6f8ea_G17,TimeStamp:10/21/2015 4:05:54 PM

Our POC token service sets the expiration period to 60 seconds; however, my testing showed that tokens start being rejected by Service Bus as expired only after 5-6 minutes. When you rotate the keys twice, tokens get rejected with error 40103 immediately.

Scheduling web jobs in Basic tier web apps


You have an application that is deployed to an Azure Web App running on the Basic App Service hosting plan. You have a couple of web jobs there that are supposed to run on schedule and you chose to define the schedules with cron expressions. One day you notice that these schedules never fire even though you remember how you tested them on the Free plan and they seemed to work. You have Always On enabled as you want Kudu to always run so it can trigger your scheduled jobs.

You check the state of your scheduled job with the following GET request:

GET https://{website}.scm.azurewebsites.net/api/triggeredwebjobs/{jobname} HTTP/1.1
Authorization: Basic <your deployment credentials>

The response may look something like this:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "latest_run": null,
  "history_url": "https://{website}.scm.azurewebsites.net/api/triggeredwebjobs/{jobname}/history",
  "scheduler_logs_url": "https://{website}.scm.azurewebsites.net/vfs/data/jobs/triggered/{jobname}/job_scheduler.log",
  "name": "TestJob",
  "run_command": "{jobexecutable}.exe",
  "url": "https://{website}.scm.azurewebsites.net/api/triggeredwebjobs/{jobname}",
  "extra_info_url": "https://{website}.scm.azurewebsites.net/azurejobs/#/jobs/triggered/{jobname}",
  "type": "triggered",
  "error": null,
  "using_sdk": true,
  "settings": {
    "schedule": "0 0 2 * * *"
  }
}

If you have triggered the task before, for instance manually, the 'latest_run' property will contain a state object of the last run attempt including a URL to the output log. You can see that the schedule has been picked up and you expect the task to run at 2AM every day. Now let's check out the scheduler logs:

GET https://{website}.scm.azurewebsites.net/vfs/data/jobs/triggered/{jobname}/job_scheduler.log HTTP/1.1
Authorization: Basic <your deployment credentials>

HTTP/1.1 200 OK
Content-Type: application/octet-stream

[10/12/2015 06:18:27 > bfef75: SYS INFO] 'Basic' tier website doesn`t support scheduled WebJob.
[10/20/2015 05:10:10 > 2613fc: SYS INFO] 'Basic' tier website doesn`t support scheduled WebJob.
[10/20/2015 14:50:35 > 2613fc: SYS INFO] 'Basic' tier website doesn`t support scheduled WebJob.
[10/21/2015 02:32:37 > 882beb: SYS INFO] 'Basic' tier website doesn`t support scheduled WebJob.

I bet you’re ready to exclaim…

WTF?

It has to be some marketing trick! Technically, all you need is Always On, which is supported on the Basic and Standard tiers: you run on a dedicated resource (read: a VM, or at least a separate application pool) and it's possible to configure the start mode of your application pool to AlwaysRunning.

Now why did it work on the Free tier? Perhaps they wanted to give you a taste of it or enable you to use the Free plan for development and testing activities, and then you were supposed to move to Standard for show time. But why can't you use a perfectly valid Basic plan option when your application isn't that big and you could save a few bucks?

Using an external trigger

When you deploy from VS, the Microsoft.Web.WebJobs.Publish package sets up a free Azure Scheduler collection and adds scheduler jobs for your scheduled web jobs so they are triggered externally. In fact, you can trigger an On-Demand or a scheduled task with the following POST request (note that both On-Demand and scheduled jobs are referred to as 'triggered' in Kudu terms):

POST https://{website}.scm.azurewebsites.net/api/triggeredwebjobs/{jobname}/run HTTP/1.1
Authorization: Basic <your deployment credentials>

HTTP/1.1 202 Accepted

You have to authenticate when making requests to Kudu. When you run them in the browser it will send your cookie, but you can also use Basic authentication to run them from anywhere else. You need to use your deployment credentials which are associated with your live account. This is another twisted thing; please see this article for the explanation. The twisted part is that you define or reset your deployment credentials on one of your web apps' settings blades, but they are going to work with all apps as they are an account-wide setting.

Deployment credentials reset blade

Once you have your credentials all you need is to Base64 encode them:

var credentials = "username:password";
var bytes = Encoding.UTF8.GetBytes(credentials);
var value = Convert.ToBase64String(bytes);
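
Then you attach the encoded value to the Authorization header and fire the request (a minimal sketch using HttpClient; replace the placeholders with your values):

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", value);

// POST to the Kudu trigger endpoint; expect 202 Accepted on success
var response = await client.PostAsync("https://{website}.scm.azurewebsites.net/api/triggeredwebjobs/{jobname}/run", null);
Console.WriteLine(response.StatusCode);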

You are now ready to set up an external trigger preferably with an automation option that is supported by your chosen scheduling system. You can use external solutions or you can choose Azure Scheduler.

One important thing to note about Azure Scheduler is that there can be only one free collection per Azure subscription and it can contain up to 5 jobs. Pretty limiting, but it can still be used for your smaller applications. Another thing to note is that although Azure Scheduler supports a whole bunch of outbound authentication options for your jobs, they are not supported on the Free tier.

But how does it work when deploying from VS?

Yes, it's another twist. If you try to set up a job on the portal input controls for Basic authentication credentials will be grayed out:

Grayed out input controls for Basic credentials

But you can still define the Authorization header directly:

Custom job HTTP headers

Is it a bug? Or a temporary workaround? Whatever it is you may also want to automate your schedule creation and you can do that with PowerShell:

New-AzureSchedulerJobCollection -JobCollectionName "TestCollection" -Location "West Europe" -Plan "Free"
$headers = @{"Authorization" = "Basic {your deployment credentials}"}
New-AzureSchedulerHttpJob -JobCollectionName "TestCollection" -JobName "TestJob" -Location "West Europe" -Method "POST" -URI https://{website}.scm.azurewebsites.net/api/triggeredwebjobs/{jobname}/run -Headers $headers -Frequency "Day" -Interval 1

It's going to create a job in a free collection that will run once a day.

Presenting Application Insights at Belarus Azure Day 2015


On December 13, 2015 we held a whole-day live event dedicated to all things Azure. 8 speakers from Belarus, Ukraine and Russia presented on a wide array of topics: from working efficiently with Azure storage and Service Bus, to trendy container and microservices technologies, to usage analytics with Application Insights and Mobile Engagement.

Belarus Azure Day 2015

Here’s the full list of topics and presenters:

I was talking about Application Insights and how it can help us get a better understanding of what's happening in our applications and how they are used. I demoed different kinds of events and correlation between them, built-in and custom metrics, client-side and usage analytics, and availability tests. I also covered integration aspects and how we can send our application logs and traces to Application Insights so they become part of the whole telemetry that is gathered, analyzed and presented by the service.

Push notification flow with Azure Notification Hubs


One of the commonly expected features of mobile apps is the ability to receive push notifications, that is, notifications that do not require the apps to be up and running with an established connection to their backend. Also, if you have an app, chances are you have it for more than one platform. Whatever the platform, the general push notification flow is relatively the same:

  • A mobile app contacts its native PNS to retrieve its handle;
  • Then it sends the handle to its backend;
  • The backend persists the handle for later usage;
  • When it needs to send a push notification the backend contacts the PNS using the handle to target a specific mobile app instance;
  • The PNS forwards the notification to the device specified by the handle.

A Notification Hub stands in between your backend and the PNS. It is a broker that gives you a common API that abstracts your backend from communication details with a particular PNS. But that wouldn't be enough to make you want to use it. What makes Notification Hubs really useful is that they enable flexible addressing schemas and allow you to send messages to different platforms literally with a single call. This is achieved through maintaining a registry of subscriptions (PNS handles) and associated tags and templates.

Let's have a look at an example.

You are developing a mobile client app for a popular social network. The app should be able to notify users when people respond to their posts or comments or when someone follows them. Users want to be able to opt in or out of each type of notification. A single user usually has more than one device and she may set up different notification types on each device.

When registering apps with a Notification Hub you provide a set of tags that will allow you to target future notifications to particular app installations:

(Android device):
UserId:1
Event:Follow
Event:Comment

(Windows device):
UserId:1
Event:Comment

Let's say someone comments on a post of this user and you want to deliver this notification to all user's devices where she subscribed to this type of event:

var tagExpression = "UserId:1 && Event:Comment";
var notification = new Dictionary<string, string> { { "message", "Someone commented on your post!" } };
await hub.SendTemplateNotificationAsync(notification, tagExpression);

Notice the tagExpression where you combine a set of tags that will be evaluated by the Notification Hub in order to determine a list of native push notification services and handles to be used to dispatch the message. In our case each of the user's devices will receive a notification as registrations from both of these devices happen to have the same set of tags. You can read up more on routing with tags and tag expressions here.

What's this dictionary that we used as a notification payload? The dictionary contains values for placeholders that you define in platform specific templates.

On a Windows device a template may look something like this:

<toast><visual><binding template="ToastText01"><text id="1">$(message)</text></binding></visual></toast>

On an iOS device the template may look like this:

{"aps":{"alert":"$(message)"}}

You define templates when you register an application installation with a Notification Hub. You can read more on templates here.

PNS handle, registration ID and application ID

App registration with a Notification Hub can be done directly from the client, but I believe in all but very simple cases you will do it from the backend. The reason is that the backend needs to know the addressing schema and thus it has to control the tags that are used during registration. There are 4 distinct activities related to setting up and sending push notifications: setting up a push notification channel, subscribing to topics, handling sign-off and dispatching notifications. In the remainder of this post I'm going to describe each one of them, but before I continue I'd like to talk a little bit about app identification.

PNS handles have a limited life span and it's the responsibility of mobile apps to refresh them by requesting new handles from their native PNS. Notification Hubs should be able to distinguish between different devices, but that cannot be done with just PNS handles as they are transient. To solve this problem, Notification Hubs generate long-living registration IDs that should be persisted so that we can refer to a device's registration each time we need to update PNS handles, tags or templates.

Now the issue with registration IDs is that they are also transient. This is done on purpose to facilitate cleanup of mobile app instances that didn't properly unregister when they were uninstalled. For us it means that at some point registrations can expire and we should not use Notification Hubs as the only storage for registration details. A need for a local registry arises.

The local registry will contain all of the information we need to recreate (or update) a Notification Hub registration. This will include registration ID, PNS handle and a bunch of app specific tags.

Think of a sign-off scenario. When the user signs off you want to remove the registration so that no notifications are sent to this device anymore. When she signs back in you probably want to restore the registration. You will use a new PNS handle but you want to re-enable the user's subscriptions.

We need a constant ID for app installation so we can re-associate the app instance with its existing device record in the local registry. This application ID will be generated by the mobile app and will be unique per app installation across mobile platforms. It should be generated when the app is installed and should survive sign off/sign in activities.

The backend may also add application ID as a tag during registration with a Notification Hub. This will enable targeting a specific device by its ID.

Registering a channel

A 'channel' may sound somewhat fuzzy but what I mean here is a process to enable push notifications for an app instance. It's not about subscribing to app specific events but rather about performing all of the registration steps that are necessary to make an app instance push-capable. These steps should normally be carried out when the user signs in.

Registering for push notifications

A mobile app requests a handle from its native PNS and calls a register endpoint on its backend. Besides the handle, the app sends its application ID and a value indicating its platform type (Android, Apple, etc.). The platform type is necessary as the backend needs to use a platform-specific template when registering with a Notification Hub. In fact, this template selection 'switch' that happens during registration is the only place where we actually care about the platform.

The backend performs a registration against a Notification Hub by sending the registration ID, PNS handle, platform template and a bunch of tags if they have been found in the local registry for the given application ID. If it's the first registration, the backend can request a new registration ID from the Notification Hub. If it's a repeating registration, the backend should try to re-use the registration ID from its local registry and be ready to handle an HttpStatusCode.Gone response from the hub indicating that the registration has expired. In this case the backend should request another registration ID from the hub and retry the attempt, as shown in the sketch below.
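
Here's a rough sketch of that logic, assuming the Microsoft.Azure.NotificationHubs client library and a Windows device; localRegistry is a hypothetical persistence abstraction and IsGone is a hypothetical helper that checks the inner WebException for a 410 (Gone) status:

var hub = NotificationHubClient.CreateClientFromConnectionString(connectionString, hubName);

var device = await localRegistry.FindAsync(applicationId);
var registrationId = device?.RegistrationId ?? await hub.CreateRegistrationIdAsync();

var registration = new WindowsTemplateRegistrationDescription(pnsHandle, windowsToastTemplate)
{
    RegistrationId = registrationId,
    Tags = new HashSet<string>(device?.Tags ?? new[] { $"ApplicationId:{applicationId}" })
};

try
{
    await hub.CreateOrUpdateRegistrationAsync(registration);
}
catch (MessagingException ex) when (IsGone(ex))
{
    // the registration had expired - request a fresh registration ID and retry
    registration.RegistrationId = await hub.CreateRegistrationIdAsync();
    await hub.CreateOrUpdateRegistrationAsync(registration);
}

await localRegistry.SaveAsync(applicationId, registration.RegistrationId, pnsHandle, registration.Tags);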

The backend finally persists the new handle and possibly a new registration ID in the local registry.

This process is repeated when the mobile app needs to refresh its PNS handle or when the user re-signs into the app.

Subscribing to topics

This step is about updating the app’s registration when the user enables or disables a notification for an app specific event or topic. It should normally be done over a separate endpoint that your backend exposes.

Subscribing to a topic

An app-specific event should be represented as a tag and this tag needs to be added to the Notification Hub registration as well as persisted in the local registry. Note that there is an alternative registration procedure called Installation. It has certain advantages over the regular registration that I describe in this post, such as partial updates, automatic installationId insertion as a tag, etc. (you can find more details here). It should be noted, though, that the workflow I describe here pretty much covers everything you can achieve with installations.

Handling sign-off


The backend should provide an endpoint for the mobile app to unregister when the user signs off. All it needs to pass in is its application ID. The backend will be able to look up the app's registration ID in the local registry and remove its registration from the Notification Hub. The backend should keep the app's record in the local registry (mainly for tags) so it can re-create proper subscriptions the next time the user signs in on that device. This is totally optional though.

Dispatching notifications

When an event is detected by the backend it needs to construct a notification payload and create a tag expression to properly address the message. The Notification Hub will use the expression to look up native PNS handles in its registry and will actually dispatch the notification to appropriate native push notification services.


Application request routing in Azure Web Apps


Azure Web Apps by default enable so-called sticky sessions: subsequent requests that are made within an established session get processed by the same instance of an app that served the very first request of the session. Web Apps rely on an IIS extension called Application Request Routing (ARR) to implement this, and the idea is basically to add a cookie with a server instance identifier to the first response so that subsequent requests include the cookie and thus can indicate to ARR which server instance to route them to.

The feature is very useful when a lot of session state is loaded into memory and moving it to a distributed store is too expensive. It's also useful in scenarios when you need to quickly deploy your existing apps to Azure with little to none changes in code and/or configuration.

However, if you've built your app to be stateless, ARR actually limits the scalability of your system. Another thing to be aware of is long sessions. Think about a user who's got a tab with your app open for a long time: when he makes another request, the instance that used to serve his session may have long died.

ARR in action

Let's see how ARR works by deploying a sample application to an Azure Web App running with 2 instances. We're going to use the well-known MusicStore sample that allows users to buy music. Although it persists shopping carts in the database, it uses an in-memory session to store shopping cart identifiers. This is exactly the scenario that ARR is supposed to help with when deploying such apps to web farms without making any design or code changes.

But we will make a little change for our testing purposes. We're going to make the app include a custom header in each response containing the ID of the Azure Web App instance serving the request:

app.Use(next => async context =>
{
    context.Response.OnStarting(state =>
    {
        var ctx = (HttpContext)state;
        ctx.Response.Headers.Add("X-Instance-Id", Configuration["WEBSITE_INSTANCE_ID"]);

        return Task.FromResult(0);
    }, context);
    await next(context);
});

Now that we have a test app let's create a JMeter script (test plan) that would emulate a user's activity of selecting a genre and adding a few albums from that genre to his shopping cart.

JMeter test plan

I used JMeter's capability to record web tests. Once the basic scenario has been recorded you normally clean up the calls you are not interested in and add post request processors and controllers to fully implement the behavior that you need. You can download the completed test plan from here.

On step 1 the user navigates to the /Store/Browse path passing the ?Genre=Rock query string parameter. The CSS extractor locates the URL of each album on the page and saves them in JMeter variables that will be used by the ForEach controller on step 2. For the first 10 albums the ForEach controller first opens an album's page and then adds the album to the shopping cart. In the end we open the cart and verify that the total sum is $89.90.

Let's set the number of simultaneous users (threads) to 2 and ramp-up period to 0 or 1 second:

Thread 1:

Response headers:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Expires: -1
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
Set-Cookie: .AspNet.Session=03e33212-4650-93c0-0cc2-d1fa6d4f3a5a; path=/; httponly
X-Instance-Id: 1bcb92fe7c8bb579af8491a8a6da2bb9f589ffa9d2719f4f36a7d13e9b6359f3
X-Powered-By: ASP.NET
Set-Cookie: ARRAffinity=1bcb92fe7c8bb579af8491a8a6da2bb9f589ffa9d2719f4f36a7d13e9b6359f3;Path=/;Domain=musicstore2.azurewebsites.net
Date: Tue, 01 Mar 2016 12:40:26 GMT

Thread 2:

Response headers:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Expires: -1
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
Set-Cookie: .AspNet.Session=4854d1e5-14b8-82c4-d717-84cb954fec4d; path=/; httponly
X-Instance-Id: a58e63fe330ef44eea87d6737206e361d6d9bab12d95c822f301420c3bcf36b9
X-Powered-By: ASP.NET
Set-Cookie: ARRAffinity=a58e63fe330ef44eea87d6737206e361d6d9bab12d95c822f301420c3bcf36b9;Path=/;Domain=musicstore2.azurewebsites.net
Date: Tue, 01 Mar 2016 12:40:27 GMT

We can see that requests from each thread were processed by different instances. Upon the first request the server added two cookies, session and ARR affinity, that were then resent with each subsequent request. Note that the ARR affinity cookie values are basically the same as the instance IDs that we return in our custom X-Instance-Id header.

The test succeeded and both shopping carts contained expected number of items.

Disabling ARR

In order to prevent Azure Web Apps from adding the ARR affinity cookie we should add a special custom header to the response:

Arr-Disable-Session-Affinity: True

As MusicStore relies on in-memory session it will immediately break the shopping cart when running in a web farm. Let's demo it! First, let's update our middleware to add the disabling header:

app.Use(next => async context =>
{
    context.Response.OnStarting(state =>
    {
        var ctx = (HttpContext)state;
        ctx.Response.Headers.Add("X-Instance-Id", Configuration["WEBSITE_INSTANCE_ID"]);
	ctx.Response.Headers.Add("Arr-Disable-Session-Affinity", "True");

        return Task.FromResult(0);
    }, context);
    await next(context);
});
Thread 1:

Response headers:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Expires: -1
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
Set-Cookie: .AspNet.Session=632b8f9c-5aa1-e778-26bf-92333aa9fa49; path=/; httponly
X-Instance-Id: 1bcb92fe7c8bb579af8491a8a6da2bb9f589ffa9d2719f4f36a7d13e9b6359f3
Arr-Disable-Session-Affinity: True
X-Powered-By: ASP.NET
Date: Tue, 01 Mar 2016 12:51:10 GMT

Thread 2:

Response headers:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
Expires: -1
Vary: Accept-Encoding
Server: Microsoft-IIS/8.0
Set-Cookie: .AspNet.Session=e4f22745-8c2a-ac36-f753-3cce9c2e2469; path=/; httponly
X-Instance-Id: a58e63fe330ef44eea87d6737206e361d6d9bab12d95c822f301420c3bcf36b9
Arr-Disable-Session-Affinity: True
X-Powered-By: ASP.NET
Date: Tue, 01 Mar 2016 12:51:48 GMT

We can see that again 2 different instances are processing requests from the test threads, but there are no ARR affinity cookies any more. As a result, subsequent requests get dispatched to different instances, shopping carts get filled up in an ad-hoc manner, and in the end our test assertions fail.

JMeter failed assertions

Distributed session store to the rescue!

As we decided to scale out and disabled sticky sessions for potentially more efficient throughput, we need to switch from memory to a distributed store for our session. That's pretty easy to achieve in ASP.NET Core as the session service relies on an IDistributedCache implementation. The default one is a local cache that gets configured when you enable caching and session support in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.AddCaching();

    services.AddSession();
}

However there are packages that provide SQL Server and Redis implementations of IDistributedCache. Let's add the Redis one to the application:

"dependencies": {"Microsoft.Extensions.Caching.Redis": "1.0.0-rc1-final"
}

Now let's remove services.AddCaching() and configure the pipeline to use Redis cache instead:

public void ConfigureServices(IServiceCollection services)
{
    services.AddRedisCache();
    services.Configure<RedisCacheOptions>(Configuration.GetSection("Redis"));

    services.AddSession();
}

For this to work we also need to add a 'Redis' section to the configuration with a property called 'Configuration' as defined in RedisCacheOptions. But because the property contains a connection string to the Redis instance, we should instead add an environment variable to the Web App (or a user secret when running locally):

Redis:Configuration = <InstanceName>.redis.cache.windows.net,abortConnect=false,ssl=true,password=...

Once we have redeployed and re-run our test we can see that requests are still processed by different servers within the same session, but this time the final shopping carts contain the expected items.

Generating clients for your APIs with AutoRest


When building Web APIs it's often required to provide client adapters between various programming stacks and raw HTTP REST APIs. These 'clients' can be built manually but it's often a rather tedious task and it adds to your development efforts as you need to keep the clients in sync with your services as you evolve them.

There had to be a better way and in fact Microsoft faced this issue when they had to generate clients for various Azure REST APIs to be used in various stacks such as .NET, Node, Ruby, Java and Python. They've created and open sourced a tool called AutoRest that can generate client side code from the Swagger document describing your service. Let's have a look!

Swagger

Remember WSDL? Swagger is something that has taken its place in the RESTful world. It's a spec for the JSON document describing your REST APIs including paths (resources), operations (verbs), parameters and responses and of course representations. Currently it's at version 2.0 and is being widely adopted as it enables interoperability between various services and software stacks.

Enabling Swagger doc in ASP.NET Core

For ASP.NET Web API the most popular library that brings Swagger documentation has been Swashbuckle. It registers an endpoint that triggers generation of the document off of the running services. Internally it relies on reflection, API description services, custom attributes and filters and even XML comments. The end result is a JSON document that complies with the Swagger spec. Swashbuckle is pretty extensible and allows you to affect the way literally any portion of the document will look like so long as it's still within the spec.

There is a work-in-progress version of Swashbuckle for ASP.NET Core and its package is available through NuGet. Once you have installed the Swashbuckle.SwaggerGen package it's time to configure the generator.

public void ConfigureServices(IServiceCollection services)
{
    services.AddSwaggerGen();
    services.ConfigureSwaggerSchema(options =>
    {
        options.DescribeAllEnumsAsStrings = true;
    });

    services.ConfigureSwaggerDocument(options =>
    {
        options.SingleApiVersion(new Swashbuckle.SwaggerGen.Info
                                 {
                                     Title = "Book Fast API",
                                     Version = "v1"
                                 });
    });
}

ConfigureSwaggerSchema, among other properties, allows you to register model filters which you can use to adjust the way documentation is generated for your representations. ConfigureSwaggerDocument allows you to register operation and document filters that will fine-tune the documentation of individual operations or even the whole document. Model, operation and document filters are the main extensibility points of Swashbuckle.

In our case we just provided a short description of the API and also specified that we want enums to be documented rather than their values.

Now we have to add a Swashbuckle middleware to the request pipeline that will handle requests to a special configurable documentation endpoint:

public void Configure(IApplicationBuilder app)
{
    app.UseIISPlatformHandler();
    app.UseMvc();

    app.UseSwaggerGen("docs/{apiVersion}");
}

If we don't specify the route Swashbuckle will use the default swagger/{apiVersion}/swagger.json.

If you launch the app and hit the specified route you should get a JSON document in response. It's a valid Swagger 2.0 document, albeit not an ideal one. Things to watch out for:

  • Operation identifiers are quite ugly as they are formed by concatenating your controller and action names together with HTTP verbs and parameters. AutoRest uses operation identifiers to derive method names for your client interfaces so you want to make sure you control these identifiers.
  • All responses include the default 200 only, even though your actions may return 201 or 204 as success codes and chances are they can produce some 40x.
  • If you return IActionResult rather than an actual representation the response won't contain a reference to the corresponding schema. And you will return IActionResult from at least your POST and DELETE methods.
  • produces properties of the operations are empty and you probably want to include content types that your API supports (e.g. application/json).
  • Parameters and properties in your representations are lacking descriptions and while this may not be such an issue for you, wouldn't it be nice if those descriptions were included as XML comments in generated classes?

Here's what a POST operation from my BookingController would look like:

"/api/accommodations/{accommodationId}/bookings": {"post": {"tags": ["Booking"],"operationId": "ApiAccommodationsByAccommodationIdBookingsPost","produces": [],"parameters": [{"name": "accommodationId","in": "path","required": true,"type": "string"
		},
		{"name": "bookingData","in": "body","required": false,"schema": {"$ref": "#/definitions/BookingData"
			}
		}],"responses": {"200": {"description": "OK"
			}
		},"deprecated": false
	}
}

Let's fix these issues!

Getting better documentation with Swashbuckle attributes and filters

Remember that AddSwaggerGen call? Beyond anything else it registers default operation filters that will handle special Swashbuckle attributes that you can use to control operation identifiers and responses. The attributes are: SwaggerOperation, SwaggerResponse and SwaggerResponseRemoveDefaults.

Let's have a look at what our POST method could look like once decorated with aforementioned attributes:

[HttpPost("api/accommodations/{accommodationId}/bookings")]
[SwaggerOperation("create-booking")]
[SwaggerResponseRemoveDefaults]
[SwaggerResponse(System.Net.HttpStatusCode.Created, Type = typeof(BookingRepresentation))]
[SwaggerResponse(System.Net.HttpStatusCode.BadRequest, Description = "Invalid parameters")]
[SwaggerResponse(System.Net.HttpStatusCode.NotFound, Description = "Accommodation not found")]
public async Task<IActionResult> Create([FromRoute]Guid accommodationId, [FromBody]BookingData bookingData)
{
    try
    {
        if (ModelState.IsValid)
        {
            var booking = await service.BookAsync(accommodationId, mapper.MapFrom(bookingData));
            return CreatedAtAction("Find", mapper.MapFrom(booking));
        }

        return HttpBadRequest();
    }
    catch (AccommodationNotFoundException)
    {
        return HttpNotFound();
    }
}

Even though I've chosen a dash style for my operations identifiers (i.e. create-booking) AutoRest will actually generate a method called CreateBooking in my client interface which is very nice! I also specified that upon success the operation will return 201 and the Swagger document should include a reference to BookingRepresentation in the 201 response. I had to remove the default 200 response with SwaggerResponseRemoveDefaults attribute.

I also included a 404 response with an appropriate description. Please note that HTTP status codes are actually keys in the dictionary of responses within an operation and thus there can be only one response with a particular status code. If you have multiple 404's you will need to come up with a combined description in SwaggerResponse attribute.

So far so good but let's address the missing content type issue. One way to do that is to add a custom operation filter that will add supported content types to all of our operations:

internal class DefaultContentTypeOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        operation.Produces.Clear();
        operation.Produces.Add("application/json");
    }
}

As mentioned above, operation filters are added in ConfigureSwaggerDocument, so let's do that:

services.ConfigureSwaggerDocument(options =>
{
    options.SingleApiVersion(new Swashbuckle.SwaggerGen.Info
                             {
                                 Title = "Book Fast API",
                                 Version = "v1"
                             });
    options.OperationFilter<DefaultContentTypeOperationFilter>();
});

Getting even better documentation with XML comments

Swashbuckle can also extract XML comments that you add to your action methods as well as to your models. XML comments are extracted by default but you need to enable emission of build artifacts by going to your MVC project's Properties and selecting the 'Produce outputs on build' option on the Build page.

ASP.NET Core app build properties page

By default the artifacts (.dll, .pdb and the desired .xml) will be put into the 'artifacts' folder in your solution under the corresponding project, build configuration and framework type folders. When you publish and choose to create NuGet packages for your code, the artifacts will be in the approot\packages\{YourProjectName}\{PackageVersion}\lib\{FrameworkType} folder. Why is this important? Because you need to provide the path to the XML file to Swashbuckle, and with ASP.NET Core these paths are going to be different depending on whether you just build locally or publish.
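
The configuration I used looked roughly like this (a sketch based on the RC1-era Swashbuckle.SwaggerGen API; the XML file path is an assumption that matches a local build layout):

var xmlPath = "artifacts\\bin\\BookFast.API\\Debug\\dnx451\\BookFast.API.xml"; // assumption

services.ConfigureSwaggerSchema(options =>
{
    options.ModelFilter(new ApplyXmlTypeComments(xmlPath));
});

services.ConfigureSwaggerDocument(options =>
{
    options.OperationFilter(new ApplyXmlActionComments(xmlPath));
});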

This configuration code will work with local builds but not with published apps, so it should be used in the development environment only. Moreover, it's not compatible with RC2 bits of ASP.NET Core. But we seem to be moving away from the topic of this post.

Anyway, once we have decorated our code with nice XML comments let's have a look at the final version for the POST Booking operation documentation:

"/api/accommodations/{accommodationId}/bookings": {"post": {"tags": ["Booking"],"summary": "Book an accommodation","operationId": "create-booking","produces": ["application/json"],"parameters": [{"name": "accommodationId","in": "path","description": "Accommodation ID","required": true,"type": "string"
		},
		{"name": "bookingData","in": "body","description": "Booking details","required": false,"schema": {"$ref": "#/definitions/BookingData"
			}
		}],"responses": {"201": {"description": "Created","schema": {"$ref": "#/definitions/BookingRepresentation"
				}
			},"400": {"description": "Invalid parameters"
			},"404": {"description": "Accommodation not found"
			}
		},"deprecated": false
	}
}

Now we're talking! Much better than the initial version. Let's go generate the client!

AutoRest

You can install AutoRest with Chocolatey or simply grab a package from NuGet and unpack it somewhere. Then you need to request a Swagger document from your service and save it. Now you're ready to run AutoRest:

f:\dev\tools\AutoRest>AutoRest.exe -Namespace BookFast.Client -CodeGenerator CSharp -Modeler Swagger -Input f:\book-fast-swagger.json -PackageName BookFast.Client -AddCredentials true

The Microsoft.Rest.ClientRuntime.2.1.0 nuget package is required to compile the
generated code.
Finished generating CSharp code for f:\book-fast-swagger.json.

Here you can find complete documentation for the command line parameters. I chose the C# generator but AutoRest also supports Java, Node, Python and Ruby.

In order to build the generated code you also need to add Microsoft.Rest.ClientRuntime NuGet package that brings all the necessary plumbing.

Exploring generated client code

AutoRest generated classes for my representations together with an IBookFastAPI interface and the corresponding implementation class. All operations are declared asynchronous and I can also control Json.NET serializer settings. Let's have a look at the POST Booking contract:

/// <summary>
/// Book an accommodation
/// </summary>
/// <param name='accommodationId'>
/// Accommodation ID
/// </param>
/// <param name='bookingData'>
/// Booking details
/// </param>
/// <param name='customHeaders'>
/// The headers that will be added to request.
/// </param>
/// <param name='cancellationToken'>
/// The cancellation token.
/// </param>
Task<HttpOperationResponse<BookingRepresentation>> CreateBookingWithHttpMessagesAsync(
    string accommodationId,
    BookingData bookingData = default(BookingData),
    Dictionary<string, List<string>> customHeaders = null,
    CancellationToken cancellationToken = default(CancellationToken));

The interface allows me to provide custom headers and cancellation tokens for each operation. Nice! Also notice the XML comments, some of them (summary, API parameters) are coming from the Swagger document. XML comments are also added to generated models.

The implementation handles all the nitty-gritty details of constructing the request and handling the response. Note that it respects the response codes that we ensured are present in our Swagger doc:

// sending request is omitted

HttpStatusCode _statusCode = _httpResponse.StatusCode;
cancellationToken.ThrowIfCancellationRequested();
string _responseContent = null;

if ((int)_statusCode != 201 && (int)_statusCode != 400 && (int)_statusCode != 404)
{
    var ex = new HttpOperationException(string.Format("Operation returned an invalid status code '{0}'", _statusCode));
    ex.Request = new HttpRequestMessageWrapper(_httpRequest, _requestContent);
    ex.Response = new HttpResponseMessageWrapper(_httpResponse, _responseContent);
    if (_shouldTrace)
    {
        ServiceClientTracing.Error(_invocationId, ex);
    }
    _httpRequest.Dispose();
    if (_httpResponse != null)
    {
        _httpResponse.Dispose();
    }
    throw ex;
}

// Create Result
var _result = new HttpOperationResponse<BookingRepresentation>();
_result.Request = _httpRequest;
_result.Response = _httpResponse;

// Deserialize Response
if ((int)_statusCode == 201)
{
    _responseContent = await _httpResponse.Content.ReadAsStringAsync().ConfigureAwait(false);
    try
    {
        _result.Body = SafeJsonConvert.DeserializeObject<BookingRepresentation>(_responseContent, this.DeserializationSettings);
    }
    catch (JsonException ex)
    {
        _httpRequest.Dispose();
        if (_httpResponse != null)
        {
            _httpResponse.Dispose();
        }
        throw new SerializationException("Unable to deserialize the response.", _responseContent, ex);
    }
}

if (_shouldTrace)
{
    ServiceClientTracing.Exit(_invocationId, _result);
}

return _result;
```

If the response contains anything besides the expected 201, 400 or 404 the client will throw as the service is behaving in an undocumented way. Note that the method returns HttpOperationResponse that may or may not contain the actual payload. It is your responsibility to check the response for the documented 40x codes before using the payload.
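For instance, here's a rough usage sketch (variable names are mine; `client` and `bookingData` are assumed to exist) that checks the status code before touching the body:

```
// Inspect the documented status codes yourself before using Body
var result = await client.CreateBookingWithHttpMessagesAsync("12345", bookingData);

if (result.Response.StatusCode == HttpStatusCode.Created)
{
    var booking = result.Body; // the deserialized BookingRepresentation
}
else
{
    // one of the documented 400/404 responses; Body will be null
    Console.WriteLine($"Booking failed with {(int)result.Response.StatusCode}");
}
```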

### Authentication

Most APIs require authentication of some kind, and because we used the -AddCredentials true command line option, AutoRest generated a special version of the client that allows us to provide credentials.

```
var credentials = new TokenCredentials("<bearer token>");

var client = new BookFast.Client.BookFastAPI(new Uri("http://localhost:50960", UriKind.Absolute), credentials);
var result = await client.CreateBookingWithHttpMessagesAsync("12345", new BookFast.Client.Models.BookingData
             {
                 FromDate = DateTime.Parse("2016-05-01"),
                 ToDate = DateTime.Parse("2016-05-08")
             });
```

Microsoft.Rest.ClientRuntime provides two variants of credentials that can be passed to the constructor of our client: TokenCredentials and BasicAuthenticationCredentials. If you use a custom authentication mechanism you can create your own implementation of ServiceClientCredentials. Its job is to add the necessary details to the request object before it is sent over the wire.
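To give you an idea, here's a minimal sketch of custom credentials that stamp an API key header on each outgoing request (the header name is purely illustrative):

```
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Rest;

public class ApiKeyCredentials : ServiceClientCredentials
{
    private readonly string apiKey;

    public ApiKeyCredentials(string apiKey)
    {
        this.apiKey = apiKey;
    }

    public override Task ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Add whatever the service expects before the request goes out
        request.Headers.Add("X-Api-Key", apiKey);
        return base.ProcessHttpRequestAsync(request, cancellationToken);
    }
}
```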

Do you guys still manually write clients for your APIs?

Protecting your APIs with Azure Active Directory

When building web APIs you inevitably have to decide on your security strategy. When making this important decision you want to go with a solution that is rock solid, scales well and enables modern workflows for users accessing your APIs from a variety of devices as well as for other systems and components that may take advantage of integrating with your APIs. Azure Active Directory is a great SaaS offering that hits the spot when considering these factors.

In this post I'm going to demonstrate how you can quickly protect your ASP.NET Core based APIs with Azure AD. I won't go into much detail on AD internals and configuration tweaks to keep this post sane and in control, but I'm planning a series of posts to dive deep into these topics.

### Creating API application in Azure AD

I'm going to be using my Book Fast API sample playground app and I want to protect it with Bearer tokens issued by Azure AD.

For an application to be recognized and protected by Azure AD it needs to be registered in it as, well, an application. That is true both for your APIs and for your consuming apps. Let's go to the Active Directory section on the portal. You still get redirected to the classic portal to manage your AD tenants. On the 'Applications' tab you can choose to create a new app that 'your organization is developing'. You need to provide 4 things:

  1. App name, obviously. I'm going to use 'book-fast-api'.
  2. App type. In our case it's 'Web application and/or Web API'.
  3. Sign-on URL. This is not important for API apps.
  4. App ID URI. This is an important setting that uniquely defines your application. It will also be the value of the 'resource' that consumers will request access tokens for. It has to be a valid URI and you normally use your tenant address as part of it. My test tenant is 'devunleashed.onmicrosoft.com' so I set the app ID URI to 'https://devunleashed.onmicrosoft.com/book-fast-api'.

New Azure AD dialog

That's it. We have just created the app that can be accessed by other apps on behalf of their users. This is an important point! Azure AD by default configures apps so that they provide a delegated permission for other apps to access them on behalf of the signed in user.

See that 'Manage manifest' button at the bottom of the portal page of your application? Click it and choose to download the manifest.

"oauth2Permissions": [{"adminConsentDescription": "Allow the application to access book-fast-api on behalf of the signed-in user.","adminConsentDisplayName": "Access book-fast-api","id": "60260462-0895-4c20-91da-2b417a0bd41c","isEnabled": true,"type": "User","userConsentDescription": "Allow the application to access book-fast-api on your behalf.","userConsentDisplayName": "Access book-fast-api","value": "user_impersonation"
}]

The oauth2Permissions collection defines delegated permissions your app provides to other apps. We will get back to assigning this permission to a client application later in this post, but for now let's go to Visual Studio and enable Bearer authentication in the ASP.NET Core project containing our APIs.

### Enabling Bearer authentication in ASP.NET Core

There are a bunch of authentication middleware packages available for various scenarios and the one we need in our case is Microsoft.AspNet.Authentication.JwtBearer.

"dependencies": {"Microsoft.AspNet.Authentication.JwtBearer": "1.0.0-rc1-final"
}

Looking at the package name you have probably guessed that it understands JSON Web Tokens. In fact, the OAuth2 spec doesn't prescribe the format for access tokens:

> Access tokens can have different formats, structures, and methods of utilization (e.g., cryptographic properties) based on the resource server security requirements.

Azure AD uses JWT for its access tokens that are obtained from OAuth2 token endpoints and thus this package is exactly what we need.

Once we've added the package we need to configure the authentication middleware.

```
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<AuthenticationOptions>(configuration.GetSection("Authentication:AzureAd"));
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IOptions<AuthenticationOptions> authOptions)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseIISPlatformHandler();
    app.UseJwtBearerAuthentication(options =>
                                   {
                                       options.AutomaticAuthenticate = true;
                                       options.AutomaticChallenge = true;
                                       options.Authority = authOptions.Value.Authority;
                                       options.Audience = authOptions.Value.Audience;
                                   });
    app.UseMvc();
}
```

The AutomaticAuthenticate flag tells the middleware to look for the Bearer token in the headers of incoming requests and, if one is found, validate it. If validation is successful the middleware will populate the current ClaimsPrincipal associated with the request with claims (and potentially roles) obtained from the token. It will also mark the current identity as authenticated.

The AutomaticChallenge flag tells the middleware to modify 401 responses that are coming from further middleware (MVC) and add the appropriate challenge behavior. In the case of Bearer authentication it's about adding the following header to the response:

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer
```

The Authority option defines the tenant URL in Azure AD that issued the token. It consists of two parts: the Azure AD instance URL, in my case 'https://login.microsoftonline.com/', and the tenant ID, which is a GUID that you can look up by opening the 'View endpoints' dialog on the portal. Alternatively, you can use a domain-based tenant identifier, which is normally in the form of '<tenant>.onmicrosoft.com', but Azure AD also allows you to assign custom domains to your tenants. So in my case I could use either 'https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0' or 'https://login.microsoftonline.com/devunleashed.onmicrosoft.com'.

In order to validate the token, JwtBearerMiddleware actually relies on OpenID Connect metadata endpoints provided by the authority to get details on the signing keys and algorithms that were used to sign the token. Even though I'm trying to stay with bare bones OAuth2 in this post, it's worth mentioning that OpenID Connect solves many of the concerns that are not covered (defined) in the OAuth2 spec and the existing middleware takes advantage of it. Azure AD of course fully supports it but this is a topic for another post.

The final important option to set is Audience. When issuing access tokens Azure AD requires the callers to provide a resource name (or intended audience) that they want to access using the token. This intended audience will be included as a claim in the token and will be verified by JwtBearerMiddleware when validating the token. When we created an application for Book Fast API we provided the App ID URI (https://devunleashed.onmicrosoft.com/book-fast-api) which we will use as the resource identifier.
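For reference, the AuthenticationOptions class bound in `ConfigureServices` above isn't shown in this post; a minimal sketch of it and a matching configuration section could look like this (values are examples):

```
public class AuthenticationOptions
{
    public string Authority { get; set; }
    public string Audience { get; set; }
}

// appsettings.json (example values):
// "Authentication": {
//   "AzureAd": {
//     "Authority": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0",
//     "Audience": "https://devunleashed.onmicrosoft.com/book-fast-api"
//   }
// }
```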

That's basically it. The way you enforce authentication on your MVC controllers and/or actions is the good old AuthorizeAttribute that will return 401 if the current principal is not authenticated.
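A minimal sketch (the actual BookingController has more to it, of course):

```
using Microsoft.AspNet.Authorization;
using Microsoft.AspNet.Mvc;

[Authorize]
[Route("api/[controller]")]
public class BookingController : Controller
{
    // Requests without a valid Bearer token get a 401 which the
    // middleware turns into a challenge response
    [HttpGet]
    public IActionResult List()
    {
        return Ok(new[] { "booking1", "booking2" }); // placeholder payload
    }
}
```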

### Handling authentication errors

What should happen when an invalid or expired token has been provided? Ideally the middleware should trigger the same challenge flow as if no token was provided. The middleware allows you to handle authentication failure situations by providing an OnAuthenticationFailed callback method in the JwtBearerEvents object which is part of the JwtBearerOptions that we have just configured above.

Unfortunately, the RC1 version of Microsoft.AspNet.Authentication.JwtBearer has a bug in the way it handles the decision we make in OnAuthenticationFailed. No matter whether we choose to HandleResponse or SkipToNextMiddleware, it will try to instantiate a successful AuthenticationResult with no authentication ticket, and of course this idea is not going to work. Looking at the dev branch I see there has been some refactoring in the way the authentication events are handled, and hopefully the issue has been resolved.

In the meantime I've created a fixed version of the middleware targeting RC1 that allows you to skip to the next middleware if token validation fails which will allow the processing to hit the AuthorizeAttribute and retrigger the automatic challenge on 401:

```
var jwtBearerOptions = new JwtBearerOptions
                       {
                           AutomaticAuthenticate = true,
                           AutomaticChallenge = true,
                           Authority = authOptions.Value.Authority,
                           Audience = authOptions.Value.Audience,

                           Events = new JwtBearerEvents
                                    {
                                        OnAuthenticationFailed = ctx =>
                                                                 {
                                                                     ctx.SkipToNextMiddleware();
                                                                     return Task.FromResult(0);
                                                                 }
                                    }
                       };
app.UseMiddleware<CustomJwtBearerMiddleware>(jwtBearerOptions);
```

Alternatively, we could call ctx.HandleResponse() and construct the challenge response ourselves to avoid hitting the MVC middleware. But I prefer my version as it will allow calls with invalid tokens to endpoints that don't require authentication and/or authorization. In fact, the ultimate decision on whether the caller should be challenged or not should be made by the authorization filters.

### OAuth2 Client Credentials Grant flow

I can't finish this post without demonstrating a client application calling our protected API. The OAuth2 spec defines both interactive and non-interactive flows. Interactive flows are used in scenarios where users give their consent to client applications to access resources on their behalf, and non-interactive ones imply that the client application possesses all of the credentials it needs to access resources on its own.

First, I'm going to demonstrate the Client Credentials Grant flow that is used for server-to-server internal calls.

OAuth2 Client Credential Grant

This flow is meant to be used with confidential clients, i.e. clients that are running on the server as opposed to those running on user devices (which are often referred to as 'public clients'). Confidential clients provide their client ID and client secret in the requests for access tokens. The resources they ask tokens for are accessed from their application's context rather than from their user's (resource owner's) context. That makes perfect sense as there are no user credentials involved.

#### Provisioning a client application in Azure AD

Steps for provisioning a client app are the same as for the API app. The app type is still 'Web application and/or Web API' which indicates that we are creating a confidential client.

On the 'Configure' tab we need to create a client key (secret). Keep it safe as the portal won't display it the next time you get back to the app's page.

Hit 'Save' and let's give it a ride.

#### Testing Client Credentials Grant flow

First let's hit the API without any token to make sure it's guarded:

```
GET https://localhost:44361/api/bookings HTTP/1.1
Host: localhost:44361


HTTP/1.1 401 Unauthorized
Content-Length: 0
Server: Kestrel
WWW-Authenticate: Bearer
```

Let's request a token from Azure AD (don't forget to URL encode your client secret!):

```
POST https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
Content-Length: 197

resource=https://devunleashed.onmicrosoft.com/book-fast-api&grant_type=client_credentials&client_id=119f1731-3fd4-4c3d-acbc-2455879b0d54&client_secret=<client secret>


HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Length: 1304

{
  "token_type": "Bearer",
  "expires_in": "3599",
  "expires_on": "1461341991",
  "not_before": "1461338091",
  "resource": "https://devunleashed.onmicrosoft.com/book-fast-api",
  "access_token": "<token value>"
}
```

Note that Client Credentials Grant doesn't return a refresh token because, well, it's useless in this case as you can always use your client credentials to request a new access token.
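If you prefer code over raw HTTP, here's a rough equivalent of the token request above using HttpClient (FormUrlEncodedContent takes care of the URL encoding for you):

```
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<string> GetTokenAsync(string clientId, string clientSecret)
{
    using (var client = new HttpClient())
    {
        var response = await client.PostAsync(
            "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["resource"] = "https://devunleashed.onmicrosoft.com/book-fast-api",
                ["grant_type"] = "client_credentials",
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret
            }));

        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON containing access_token
    }
}
```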

Let's call our API with the access token:

```
GET https://localhost:44361/api/bookings HTTP/1.1
Authorization: Bearer <token value>
Host: localhost:44361


HTTP/1.1 500 Internal Server Error
Content-Length: 0
Server: Kestrel
```

Well, it failed miserably, but trust me, it's not related to the authentication part. The problem is that we are trying to get pending booking requests of a user and the application tries to get a user name from the current principal's claims. It's specifically looking for a claim of type 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name' and it can't find it. And 500 is the correct response code here because we apparently screwed up the app logic: user booking requests are expected to be queried under a user context only, not under an application context.

But don't just take my word for it. I am actually going to prove to you that authentication succeeded. Here's the debug output:

```
Microsoft.AspNet.Hosting.Internal.HostingEngine: Information: Request starting HTTP/1.1 GET http://localhost:44361/api/bookings
Microsoft.AspNet.Authentication.JwtBearer.JwtBearerMiddleware: Information: HttContext.User merged via AutomaticAuthentication from authenticationScheme: Bearer.
Microsoft.AspNet.Authorization.DefaultAuthorizationService: Information: Authorization was successful for user: .
Microsoft.AspNet.Mvc.Controllers.ControllerActionInvoker: Information: Executing action method BookFast.Api.Controllers.BookingController.List with arguments () - ModelState is Valid'
...
...
Microsoft.AspNet.Server.Kestrel: Error: An unhandled exception was thrown by the application.
System.Exception: Claim 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name' was not found.
```

There is no user! It should remind us of the intended use of the Client Credentials Grant. We will try another OAuth2 flow a bit later, but for now let's take a break, have a look at the access token, and use this opportunity to examine its content and better understand how token validation works.

### Access token validation

Remember that Azure AD access tokens are JWTs? As such, they consist of two Base64-encoded JSON parts (header and payload) plus a signature. You can easily decode them, for example, with the Text Wizard tool in Fiddler:

Azure AD access token decoded with Text Wizard

And here's the readable part:

{"typ": "JWT","alg": "RS256","x5t": "MnC_VZcATfM5pOYiJHMba9goEKY","kid": "MnC_VZcATfM5pOYiJHMba9goEKY"
}
{"aud": "https://devunleashed.onmicrosoft.com/book-fast-api","iss": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","iat": 1461338091,"nbf": 1461338091,"exp": 1461341991,"appid": "119f1731-3fd4-4c3d-acbc-2455879b0d54","appidacr": "1","idp": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","oid": "970c6d5c-e200-481c-a134-6d0287f3c406","sub": "970c6d5c-e200-481c-a134-6d0287f3c406","tid": "70005c1f-ea47-488e-8f57-c3543485f1d0","ver": "1.0"
}
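If Fiddler is not at hand, you can also decode a token in code; JwtSecurityToken from the System.IdentityModel.Tokens.Jwt package parses the header and payload without validating the signature:

```
// A quick decoding sketch; no signature validation happens here
var jwt = new JwtSecurityToken("<access token value>");
Console.WriteLine(jwt.Payload.SerializeToJson()); // prints the payload JSON shown above
```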

The aud claim contains the intended audience that this token was requested for. JwtBearerMiddleware will compare it with the Audience property that we set when enabling it and will reject tokens should they contain a different value for the audience.

Another important claim is iss that represents the issuer STS and it is also verified when validating the token. But what is it compared to? And how does JwtBearerMiddleware validate the token's signature after all?

The middleware we use takes advantage of OpenID Connect discovery to get the data it needs. If you trace/capture HTTP traffic on the API app side with Fiddler you will discover that the API app makes 2 calls to Azure AD when validating the token. The first call is to the discovery endpoint. Its URL is formed as '<authority>/.well-known/openid-configuration':

```
GET https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/.well-known/openid-configuration HTTP/1.1


HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Content-Length: 1239

{
  "authorization_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/authorize",
  "token_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token",
  "token_endpoint_auth_methods_supported": ["client_secret_post", "private_key_jwt"],
  "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
  "response_modes_supported": ["query", "fragment", "form_post"],
  "subject_types_supported": ["pairwise"],
  "id_token_signing_alg_values_supported": ["RS256"],
  "http_logout_supported": true,
  "response_types_supported": ["code", "id_token", "code id_token", "token id_token", "token"],
  "scopes_supported": ["openid"],
  "issuer": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/",
  "claims_supported": ["sub", "iss", "aud", "exp", "iat", "auth_time", "acr", "amr", "nonce", "email", "given_name", "family_name", "nickname"],
  "microsoft_multi_refresh_token": true,
  "check_session_iframe": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/checksession",
  "end_session_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/logout",
  "userinfo_endpoint": "https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/openid/userinfo"
}
```

Lots of metadata here including the issuer value and the jwks_uri endpoint address to get the keys to validate the token's signature:

```
GET https://login.microsoftonline.com/common/discovery/keys HTTP/1.1


HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Content-Length: 2932

{
  "keys": [{
    "kty": "RSA",
    "use": "sig",
    "kid": "MnC_VZcATfM5pOYiJHMba9goEKY",
    "x5t": "MnC_VZcATfM5pOYiJHMba9goEKY",
    "n": "vIqz-4-ER_vNWLON9yv8hIYV737JQ6rCl6X...",
    "e": "AQAB",
    "x5c": ["<X.509 Certificate Chain>"]
  },
  {
    "kty": "RSA",
    "use": "sig",
    "kid": "YbRAQRYcE_motWVJKHrwLBbd_9s",
    "x5t": "YbRAQRYcE_motWVJKHrwLBbd_9s",
    "n": "vbcFrj193Gm6zeo5e2_y54Jx49sIgScv-2J...",
    "e": "AQAB",
    "x5c": ["<X.509 Certificate Chain>"]
  }]
}
```

Token signing is implemented according to the JSON Web Key spec. Using the Key ID and X.509 certificate thumbprint values from the token's header (the kid and x5t parameters respectively), the middleware is able to find the appropriate public key in the obtained collection of keys to verify the signature.
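Hand-rolled, the validation the middleware performs boils down to something like this sketch (exact namespaces vary between package versions; signingKeys stands for the keys resolved from the jwks_uri document):

```
var parameters = new TokenValidationParameters
{
    ValidIssuer = "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/",
    ValidAudience = "https://devunleashed.onmicrosoft.com/book-fast-api",
    IssuerSigningKeys = signingKeys // resolved from the jwks_uri document
};

// Throws if the signature, issuer, audience or lifetime don't check out
SecurityToken validatedToken;
var principal = new JwtSecurityTokenHandler().ValidateToken("<access token value>", parameters, out validatedToken);
```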

### OAuth2 Resource Owner Password Credentials Grant flow

Let's fix our 500 issue with Book Fast API and try to get a list of booking requests under a user context. OAuth2 and OpenID Connect provide interactive flows that include secure gathering of user credentials, but to keep this post short I'm going to demonstrate a simpler flow called Resource Owner Password Credentials Grant.

When developing new applications you should not use this flow as it requires your client applications to gather user credentials. This, in turn, lays the groundwork for all kinds of bad practices like, for instance, a temptation to preserve the credentials in a usable form to be able to make internal calls on behalf of users. It also puts the burden of maintaining user credentials (password resets, two-factor auth, etc.) on your shoulders.

This flow can be used, though, in legacy applications that are being re-architected (such as adopting Azure AD and delegated access to services) as an intermediate solution.

OAuth2 Resource Owner Credentials Grant

Ok, back to the 'Configure' page of the client app! We need to give it a delegated permission to call Book Fast API. Use the 'Add application' button to find and add 'book-fast-api' to the list of apps, and then select the delegated permission.

Giving the client a delegated permission to access book-fast-api

Note that the 'Access book-fast-api' permission is coming from the oauth2Permissions collection that we saw in the API's app manifest earlier.

If you do this under your admin account you essentially provide an admin consent for the client app to call the API app on behalf of any user of the tenant. It fits the current flow perfectly as there is no way for users to provide their consent to Active Directory as they don't go to its login pages.

Requesting a token now requires user credentials and the grant type of password:

```
POST https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: login.microsoftonline.com
Content-Length: 260

resource=https://devunleashed.onmicrosoft.com/book-fast-api&grant_type=password&client_id=119f1731-3fd4-4c3d-acbc-2455879b0d54&client_secret=<client secret>&username=newfella@devunleashed.onmicrosoft.com&password=<user password>


HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Length: 2204

{
  "token_type": "Bearer",
  "scope": "user_impersonation",
  "expires_in": "3599",
  "expires_on": "1461602199",
  "not_before": "1461598299",
  "resource": "https://devunleashed.onmicrosoft.com/book-fast-api",
  "access_token": "<access token value>",
  "refresh_token": "<refresh token value>"
}
```

Same as other delegated flows, Resource Owner Password Grant also allows for an optional refresh token to be returned from the token endpoint. This token can be used by the client to ask for new access tokens without bothering the user to re-enter her credentials.

Let's have a quick glance at the access token:

{"aud": "https://devunleashed.onmicrosoft.com/book-fast-api","iss": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","iat": 1461598299,"nbf": 1461598299,"exp": 1461602199,"acr": "1","amr": ["pwd"],"appid": "119f1731-3fd4-4c3d-acbc-2455879b0d54","appidacr": "1","ipaddr": "86.57.158.18","name": "New Fella","oid": "3ea83d38-dad6-4576-9701-9f0e153c32b5","scp": "user_impersonation","sub": "Qh3Yqwk86aMN8Oos_xCEDZcV2cfGi7PTl-5uSSgF4uE","tid": "70005c1f-ea47-488e-8f57-c3543485f1d0","unique_name": "newfella@devunleashed.onmicrosoft.com","upn": "newfella@devunleashed.onmicrosoft.com","ver": "1.0"
}

Now it contains claims mentioning my 'newfella@devunleashed.onmicrosoft.com' user, and something tells me we're going to have better luck calling the Book Fast API now!

```
GET https://localhost:44361/api/bookings HTTP/1.1
Authorization: Bearer <access token>


HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Server: Kestrel
Content-Length: 663

[{
  "Id": "7e63dd0c-0910-492f-a34b-a05d995455ce",
  "AccommodationId": "2c998dc6-1b90-4ba1-9885-5169e5c83c79",
  "AccommodationName": "Queen's dream",
  "FacilityId": "c08ffa8d-87fa-4315-8a54-0e744b33e7f7",
  "FacilityName": "First facility",
  "StreetAddress": "11, Test str.",
  "FromDate": "2016-06-10T00:00:00+03:00",
  "ToDate": "2016-06-18T00:00:00+03:00"
},
{
  "Id": "4e7f165f-a1d2-48ce-9b14-d2d8d5c04750",
  "AccommodationId": "2c998dc6-1b90-4ba1-9885-5169e5c83c79",
  "AccommodationName": "Queen's dream",
  "FacilityId": "c08ffa8d-87fa-4315-8a54-0e744b33e7f7",
  "FacilityName": "First facility",
  "StreetAddress": "11, Test str.",
  "FromDate": "2016-05-22T00:00:00+03:00",
  "ToDate": "2016-05-30T00:00:00+03:00"
}]
```

Application and user permissions in Azure AD

Last time we took a tour of the experience of having your APIs protected by Azure AD. In this post I'd like to dive a little deeper into how you can better control access with roles that you can assign to users and applications.

I'm still using my BookFast API playground app and there are 2 activities that we're going to look at today:

User and application initiated activities in BookFast

  1. (shown in red) A user tries to create a new facility in the system.
  2. (shown in green) A background process tries to process a batch update request that may involve creation of new facilities and updating of the existing ones.

In both cases it makes sense to control who or what has permission to make changes to facilities. Only users who have been assigned the 'FacilityProvider' role can manage facilities, and we want only the background processes that have been specifically assigned the 'ImporterProcess' role to be able to batch import facilities.

### Implementing authorization policies in ASP.NET Core

The roles are app specific and it's the responsibility of the application to enforce them. In ASP.NET Core the authorization infrastructure comes with the Microsoft.AspNet.Authorization package. This is where you're going to find familiar authorization attributes such as `AuthorizeAttribute` and `AllowAnonymousAttribute`, and some really cool stuff called authorization policies. With authorization policies you have the flexibility to implement permission checks that better suit your applications. You can check claims, roles, user names and of course come up with your own implementations.

Let's define a 'Facility.Write' policy for BookFast API:

```
private static void RegisterAuthorizationPolicies(IServiceCollection services)
{
    services.AddAuthorization(
        options =>
        {
            options.AddPolicy("Facility.Write", config =>
                              {
                                  config.RequireRole(InteractorRole.FacilityProvider.ToString(), InteractorRole.ImporterProcess.ToString());
                              });
        });
}
```

Pretty slick, huh? We've defined a policy and added a RolesAuthorizationRequirement with two accepted roles: 'FacilityProvider' and 'ImporterProcess'. RolesAuthorizationRequirement is satisfied when the incoming request's ClaimsPrincipal contains either role (it applies 'any' logic when handling authorization). The policy is considered satisfied when all of its requirements are satisfied, and in our case there is only one requirement.
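The InteractorRole type used above isn't shown in the post; a plausible definition is a simple enum whose member names match the role values defined in Azure AD:

```
// A hypothetical definition; only the member names matter, since
// the policy registration calls ToString() on them
public enum InteractorRole
{
    FacilityProvider,
    ImporterProcess
}
```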

To enforce the policy we need to specify it when decorating our controllers and/or actions with AuthorizeAttribute:

[Authorize(Policy = "Facility.Write")]
public class FacilityController : Controller
{
    ...
}

#### A quick test

Let's get a userless token using Client Credentials Grant. I already have a client app with ID 119f1731-3fd4-4c3d-acbc-2455879b0d54 registered in Azure AD, so:

```
POST https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

resource=https://devunleashed.onmicrosoft.com/book-fast-api&grant_type=client_credentials&client_id=119f1731-3fd4-4c3d-acbc-2455879b0d54&client_secret=<client secret>


HTTP/1.1 200 OK
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Content-Length: 1296

{
  "token_type": "Bearer",
  "scope": "user_impersonation",
  "expires_in": "3599",
  "expires_on": "1461873034",
  "not_before": "1461869134",
  "resource": "https://devunleashed.onmicrosoft.com/book-fast-api",
  "access_token": "token value"
}
```

And try to invoke the protected API:

```
POST https://localhost:44361/api/facilities HTTP/1.1
Content-Type: application/json
Authorization: Bearer <access token>

{
  "Name": "test",
  "StreetAddress": "test"
}


HTTP/1.1 403 Forbidden
Content-Length: 0
```

As expected we get a cold 403 response, meaning that the token has been validated and the ClaimsPrincipal has been initialized, but according to our authorization policy the principal lacks the required roles.

### Application level roles in Azure AD

Every application in Azure AD allows you to define app-specific roles that can be assigned to users, user groups and applications. As we have already started testing the importer scenario, let's assign the 'ImporterProcess' role to the client app representing the background process. But first, the role needs to be defined in the API app itself. That is, the API app exposes a set of roles that can be assigned to consumers. Makes sense?

When you download the manifest of the BookFast API app (on the classic portal there is a button called 'Manage Manifest' at the bottom) you will see there is a collection called appRoles which is empty by default. Let's define our role:

"appRoles": [
  {"allowedMemberTypes": ["Application"
    ],"description": "Allows applications to access book-fast-api to create/update/delete facilities and accommodations","displayName": "Access book-fast-api as an importer process","id": "17a67f38-b915-40bb-bd09-228a5c8a997e","isEnabled": true,"value": "ImporterProcess"
  }
]

The properties are pretty much self-explanatory. You need to assign a unique ID to the role and decide who or what can get assigned the role. This is controlled by the allowedMemberTypes collection. In this case I want this role to only be assigned to applications, not users.

Now we need to upload the modified manifest back to the BookFast API app by using the same 'Manage manifest' button on the portal that we used to download it.

Assigning the role to the consumer app representing the importer process can be done on the portal on the 'Configure' tab of the consumer app:

Granting application level permission in Azure AD

It's worth noting that the assignment has to be done by an administrator.

#### Testing it out

Let's request a new access token and repeat the attempt to add a new facility.

```
POST https://localhost:44361/api/facilities HTTP/1.1
Content-Type: application/json
Authorization: Bearer <access token>

{
  "Name": "test",
  "StreetAddress": "test"
}


HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
Location: https://localhost:44361/api/facilities/0ad1fe14-107a-4cdf-9cc0-d882174f512a

{
  "Id": "0ad1fe14-107a-4cdf-9cc0-d882174f512a",
  "Name": "test",
  "Description": null,
  "StreetAddress": "test",
  "Longitude": null,
  "Latitude": null,
  "AccommodationCount": 0
}
```

Sweet! But how did it work? If we look at the new access token we will find out that a claim of type roles has been added by Azure AD.

{"aud": "https://devunleashed.onmicrosoft.com/book-fast-api","iss": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","iat": 1461924520,"nbf": 1461924520,"exp": 1461928420,"appid": "119f1731-3fd4-4c3d-acbc-2455879b0d54","appidacr": "1","idp": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","oid": "970c6d5c-e200-481c-a134-6d0287f3c406","roles": ["ImporterProcess"],"sub": "970c6d5c-e200-481c-a134-6d0287f3c406","tid": "70005c1f-ea47-488e-8f57-c3543485f1d0","ver": "1.0"
}

The claim contains an array of roles that apply to the context in which the access token was requested. That is, if we request a token as an application we only get roles that have been assigned to the client application, and if we request a token using a delegated flow we get roles that have been assigned to the user that the client app acts on behalf of.

As you know, we use Microsoft.AspNet.Authentication.JwtBearer package on the API side to handle the token and it ultimately relies on System.IdentityModel.Tokens.Jwt package to actually parse the token's payload. It creates an internal representation of the token (JwtSecurityToken) with claims that have their types defined by mapping shortened versions of claim types found in the token to claim types defined in the familiar System.Security.Claims.ClaimTypes class. So the roles claim gets mapped to http://schemas.microsoft.com/ws/2008/06/identity/claims/role.

When the ClaimsIdentity is being initialized it gets populated with a collection of claims (System.Security.Claims.Claim) and it's also given the claim types that should be used to look up the 'name' and 'roles' claims. By default, these are 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name' and 'http://schemas.microsoft.com/ws/2008/06/identity/claims/role' respectively, but this can be overridden with TokenValidationParameters when configuring the middleware. You want to do that when your STS uses a different convention, but with Azure AD you go with the defaults.

This all makes it possible for RolesAuthorizationRequirement to rely on the familiar IsInRole method that it calls on the principal when authorizing requests.
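That also means you can check roles imperatively if a policy feels like overkill for a particular spot (a sketch inside a controller action, reusing the hypothetical enum from above):

```
// IsInRole consults the role claims populated from the token
if (User.IsInRole(InteractorRole.ImporterProcess.ToString()))
{
    // take the batch import path
}
```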

### Azure AD application model and role assignments

Before we move on I'd like to show you how role assignments are reflected in the Azure AD application model. But that requires at least some familiarity with it and an idea of what a ServicePrincipal is.

So far we've worked with 'applications' on the Azure portal, but you should be aware that there are two distinct concepts in Azure AD: Applications and ServicePrincipals. Applications can be thought of as blueprints that define your apps, whereas ServicePrincipals are concrete representatives of the applications in particular tenants. When you define an app in a tenant it automatically gets its principal in that tenant. However, one application can have multiple principals defined in different tenants. Think, for example, of an app created in tenant A that allows users from other tenants to access it. Apps like this are called multitenant. When a user from tenant B gives her consent to the app from tenant A, a ServicePrincipal for the app is created in tenant B. This principal is a concrete representative of the app in tenant B. You can read more on these essential concepts here.

Now before we can look at role assignments we need to find the ServicePrincipal for the BookFast API app. We're going to use a convenient tool called Graph Explorer that allows us to query the Graph API of Azure AD.

Once logged in as an administrator of my 'Dev Unleashed' tenant I can run a query like this:

```
GET https://graph.windows.net/devunleashed.onmicrosoft.com/servicePrincipals?$filter=displayName+eq+'book-fast-api'

{
  "odata.metadata": "https://graph.windows.net/devunleashed.onmicrosoft.com/$metadata#directoryObjects/Microsoft.DirectoryServices.ServicePrincipal",
  "value": [
    {
      "objectType": "ServicePrincipal",
      "objectId": "f4b5edd0-82f6-4350-b01b-43ecc24f5b4a",
      "appDisplayName": "book-fast-api",
      "appId": "7a65f361-a9e2-4607-9026-c9e97d56aae6",
      "appOwnerTenantId": "70005c1f-ea47-488e-8f57-c3543485f1d0",
      "appRoleAssignmentRequired": false,
      "appRoles": [
        {
          "allowedMemberTypes": ["Application"],
          "description": "Allows applications to access book-fast-api to create/update/delete facilities and accommodations",
          "displayName": "Access book-fast-api as an importer process",
          "id": "17a67f38-b915-40bb-bd09-228a5c8a997e",
          "isEnabled": true,
          "value": "ImporterProcess"
        }
      ],
      "displayName": "book-fast-api",
      "servicePrincipalNames": [
        "https://devunleashed.onmicrosoft.com/book-fast-api",
        "7a65f361-a9e2-4607-9026-c9e97d56aae6"
      ]
    }
  ]
}
```

I omitted a lot of properties but left some of them so that you can have a sense of what it looks like. You have probably recognized our 'ImporterProcess' role that got inherited from the Application object. It makes perfect sense because the Application may be defined in another tenant and we need to be able to assign roles to apps and users from the consuming tenants, thus we need to represent the exposed roles in the principal objects.

Every object in Azure AD is identified by its unique objectId. These object IDs can be used to directly access objects as well as to glue things together.

When we assigned a role to a consuming app an assignment record was actually associated with the target (BookFast API) app's principal:

```
GET https://graph.windows.net/devunleashed.onmicrosoft.com/servicePrincipals/f4b5edd0-82f6-4350-b01b-43ecc24f5b4a/appRoleAssignedTo

{
  "odata.metadata": "https://graph.windows.net/devunleashed.onmicrosoft.com/$metadata#directoryObjects/Microsoft.DirectoryServices.AppRoleAssignment",
  "value": [
    {
      "odata.type": "Microsoft.DirectoryServices.AppRoleAssignment",
      "objectType": "AppRoleAssignment",
      "objectId": "XG0MlwDiHEihNG0Ch_PEBuRqti4hbAVGpecosriLrRY",
      "deletionTimestamp": null,
      "creationTimestamp": "2016-04-28T18:58:02.4056036Z",
      "id": "17a67f38-b915-40bb-bd09-228a5c8a997e",
      "principalDisplayName": "book-fast-internal",
      "principalId": "970c6d5c-e200-481c-a134-6d0287f3c406",
      "principalType": "ServicePrincipal",
      "resourceDisplayName": "book-fast-api",
      "resourceId": "f4b5edd0-82f6-4350-b01b-43ecc24f5b4a"
    },

    ...

  ]
}
```

Here we see an assignment of principal 'book-fast-internal' (which represents a client app for the importer process) to a resource 'book-fast-api' (which is the ServicePrincipal of BookFast API as you can tell by its objectId 'f4b5edd0-82f6-4350-b01b-43ecc24f5b4a') in the role of '17a67f38-b915-40bb-bd09-228a5c8a997e'. If you scroll up a bit you will recognize the role's ID as it's the one we used for the 'ImporterProcess'.

Notice the principalType value that indicates that the assignment was done to a ServicePrincipal, that is, to an app, not a user.

### User roles in Azure AD

Now let's enable the facility provider flow in BookFast API by defining a role that can be assigned to users and groups:

{"allowedMemberTypes": ["User"
  ],"description": "Allows users to access book-fast-api to create/update/delete facilities and accommodations","displayName": "Access book-fast-api as a facility provider","id": "d525273c-6286-4e59-873b-4b0869f71770","isEnabled": true,"value": "FacilityProvider"
}

We've created a new ID for the role and set allowedMemberTypes to 'User' as opposed to 'Application' that we used previously. When we allow a role to be assigned to 'User', it can be assigned to both users and groups.

Note that allowedMemberTypes is actually a collection and we could have reused our previous 'ImporterProcess' role to enable it for users too. However, in BookFast API these are separate roles and thus we reflect that in the AD app.

Once the updated manifest for book-fast-api has been uploaded administrators can start assigning users to it on the 'Users' tab of the API app. When assigning a user the administrator is presented with a dialog to choose a role to assign the user to:

Assigning a user to an app role in Azure AD

Using Graph API we can now see user assignments:

```
GET https://graph.windows.net/devunleashed.onmicrosoft.com/servicePrincipals/f4b5edd0-82f6-4350-b01b-43ecc24f5b4a/appRoleAssignedTo

{
  "odata.metadata": "https://graph.windows.net/devunleashed.onmicrosoft.com/$metadata#directoryObjects/Microsoft.DirectoryServices.AppRoleAssignment",
  "value": [
    {
      "odata.type": "Microsoft.DirectoryServices.AppRoleAssignment",
      "objectType": "AppRoleAssignment",
      "objectId": "OD2oPtbadkWXAZ8OFTwytWc12hWKB6NAi9bVHtwDmrw",
      "deletionTimestamp": null,
      "creationTimestamp": null,
      "id": "d525273c-6286-4e59-873b-4b0869f71770",
      "principalDisplayName": "New Fella",
      "principalId": "3ea83d38-dad6-4576-9701-9f0e153c32b5",
      "principalType": "User",
      "resourceDisplayName": "book-fast-api",
      "resourceId": "f4b5edd0-82f6-4350-b01b-43ecc24f5b4a"
    },

    ...
  ]
}
```

Spot the difference? Yes, principalType is now 'User'. Another possible value is 'Group' if roles are assigned to directory groups, but it's supported only in the paid version of Azure AD.

One more thing I'd like to mention before I wind up this post. By default, user assignments are not required, meaning that any user of the tenant can request access to an app. The roles claim in their tokens will be missing, and your API will reject requests with these tokens if your authorization policy requires a certain role to be present in the token. In some scenarios you may want all users to be qualified by roles and enforce user assignments for your apps. There is a special option on the 'Configure' tab to enable mandatory user assignments, which is reflected in the appRoleAssignmentRequired property of the ServicePrincipal object. Why the principal? Because it's a tenant-specific setting: some tenants may require it, some may not.

Accessing Azure AD protected resources using OAuth2 Authorization Code Grant

OAuth2 Authorization Code Grant is an interactive authorization flow that enables users to give their consent for client applications to access their resources. It's meant to be used with confidential clients which are the clients that are able to keep their credentials safe. A traditional server-side web application is a confidential client. The flow requires a user agent (a browser or a web view) to handle redirections.

OAuth2 Authorization Code Grant

At the end of the post we're going to look closer at actual messages that are being exchanged between participants but for now I'd like to point out a few important aspects of the flow:

  • Client applications do not gather user credentials. This is great for client applications as they don't have to manage user credentials and keep them safe. They don't have to handle password resets, multi-factor auth and so on. They can rely on identity providers to implement those tasks in compliance with the industry's best practices. And it's great for the users as the flow enables a choice of identity providers which enables reuse of their existing identities.
  • Users provide their explicit consent to access their resources. This is probably the most important part as clients can only access what they were permitted to access and this information is embedded within an access token.
  • The client has to authenticate with its client_id and secret when redeeming the authorization code. This is what makes this flow somewhat irrelevant for public clients such as mobile apps running on user devices or JavaScript web applications (SPAs) as they can't be considered able to reliably store their secrets.

Contrast this flow to the Resource Owner Password Credentials Grant where client applications collect user credentials. In that flow there is no way for users to provide their consent and thus the so-called 'admin' consent is required to enable client applications to access protected resources. In Azure AD admin consent is given when the tenant administrator assigns a delegated permission to a client app. This automatically registers a consent for all users within the tenant. Alternatively, the admin consent can be given in an interactive flow such as the one we are looking at in this post. When calling the authorization endpoint you can append the prompt=admin_consent parameter, which will require the user to be a tenant administrator, and once she's given her consent it will apply to all users in the organization.

There is one more point to mention before we move on. Although the flow implies authentication of a user by an identity provider, it's not well suited as a mechanism that provides the user's identity to the client. The client gets an access token which is completely opaque from the client's perspective and can only be used as is when making requests to a protected resource. Sometimes this gets worked around by exposing some sort of 'who am I' endpoint from the resource or identity provider, but that requires explicit coding on the client side to consume that endpoint. And of course implementations vary and OAuth2 does not prescribe anything in this respect. It's an authorization framework that is to be used to authorize clients, and this is its primary intent.

OpenID Connect is another specification that is being widely adopted and is there to address this concern. It's an extension to OAuth2 and you will most likely use it when you need user identity on the client side but in this post we're going to focus on the bare bones OAuth2 flow.

### Setting up applications in Azure AD

I'm using BookFast API as the protected API app. I've got a corresponding app in Azure AD that represents it and when I download its manifest there is already one OAuth2 permission that this app exposes:

"oauth2Permissions": [
  {"adminConsentDescription": "Allow the application to access book-fast-api on behalf of the signed-in user.","adminConsentDisplayName": "Access book-fast-api","id": "25f8afcd-0b1a-417d-9d32-c738736c63a0","isEnabled": true,"type": "User","userConsentDescription": "Allow the application to access book-fast-api on your behalf.","userConsentDisplayName": "Access book-fast-api","value": "user_impersonation"
  }
]

It's added by default when you provision an app in Azure AD, and of course you are free to add your own permissions that make sense for your app. When a consuming app is configured to access your protected API app, you will be able to select just the permissions you want to enable for that particular consuming app, and these will be the permissions that will be presented to a user on the consent page.

I'm going to leave this single default permission as is. The value is pretty much arbitrary, and it will be added as part of the scope claim to the access token. This enables fine-grained control on the API side where you can check if, for instance, this particular client has been assigned a particular permission. Also note the type parameter that specifies that the consent can be given by a regular user. The other option is 'Admin' which will require an administrator user.
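For example, a scope check on the API side might look like this sketch (the helper name is mine); the JWT handler maps the short 'scp' claim to the longer claim type by default:

```
using System.Linq;
using System.Security.Claims;

public static class ScopeChecks
{
    // Returns true if the token's scope claim contains the given value,
    // e.g. HasScope(User, "user_impersonation")
    public static bool HasScope(ClaimsPrincipal user, string scope)
    {
        var claim = user.FindFirst("http://schemas.microsoft.com/identity/claims/scope");
        return claim != null && claim.Value.Split(' ').Contains(scope);
    }
}
```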

I've also got a consuming app provisioned in Azure AD. I've generated a client secret for it as it is required as part of the Authorization Code Grant flow. As I want users to be presented with a consent page upon their first login, I need to assign the user_impersonation permission to the consuming app under a non-admin user. Otherwise it will be considered an admin consent, all tenant users will immediately be considered as 'having agreed', and they won't be presented with a consent page.

In order to give a non-admin user from my tenant access to the Azure portal I need to add him as a co-administrator to my subscription. This is somewhat inconvenient but hopefully will go away as the AD team is working on the v2 endpoints and the new app registration portal.

Adding a delegated permission in Azure AD as a non-admin user

### Configuring OAuth2 in ASP.NET Core client app

OAuth2 is a universal spec that defines a bunch of authorization flows for common scenarios, but it doesn't prevent implementers from adding their specifics to the flows, nor does it specify things like token format and validation, user information endpoints, metadata endpoints and so on.

There is a pretty generic package called Microsoft.AspNet.Authentication.OAuth that provides a middleware to handle the Authorization Code Grant. You can find its source code in the ASP.NET Security repo and there you will also find packages that target specific identity providers such as Facebook or Twitter. These packages actually inherit from components found in Microsoft.AspNet.Authentication.OAuth and implement various details specific to their authorities.

There is no package for Azure AD, and in fact it's recommended to take advantage of OpenID Connect and the ADAL library instead; I'll write about them later. But in this post I'm staying focused on OAuth2.

#### Customizing the OAuth2 middleware

If we want to use Microsoft.AspNet.Authentication.OAuth with Azure AD we still need to customize it. If you remember from my previous posts, Azure AD requires you to specify the resource parameter when requesting access tokens. Hence, we need to extend OAuthOptions:

```
public class AzureADOptions : OAuthOptions
{
    public string Resource { get; set; }
}
```

And provide a new overload for the method that is responsible for redeeming the code:

```
internal class AzureADHandler : OAuthHandler<AzureADOptions>
{
    public AzureADHandler(HttpClient backchannel) : base(backchannel)
    {
    }

    protected override async Task<OAuthTokenResponse> ExchangeCodeAsync(string code, string redirectUri)
    {
        var tokenRequestParameters = new Dictionary<string, string>()
        {
            { "client_id", Options.ClientId },
            { "redirect_uri", redirectUri },
            { "client_secret", Options.ClientSecret },
            { "code", code },
            { "grant_type", "authorization_code" },
            { "resource", Options.Resource }
        };

        var requestContent = new FormUrlEncodedContent(tokenRequestParameters);

        var requestMessage = new HttpRequestMessage(HttpMethod.Post, Options.TokenEndpoint);
        requestMessage.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
        requestMessage.Content = requestContent;
        var response = await Backchannel.SendAsync(requestMessage, Context.RequestAborted);
        response.EnsureSuccessStatusCode();
        var payload = JObject.Parse(await response.Content.ReadAsStringAsync());
        return new OAuthTokenResponse(payload);
    }
}
```

You can find source code for my test client solution here.

#### Cookies

In traditional web applications a successful authentication normally results in a cookie being added to the response so that subsequent requests don't require the user to go through the authentication process all over again. In our case it's not really authentication from the client app's perspective but rather an authorization to access a protected API. However, we still want to drop a cookie to identify an authorized session. We will also use the cookie as the storage mechanism for access and refresh tokens. Thus, we need another middleware from the same security repo: Microsoft.AspNet.Authentication.Cookies.

#### Configuring the middleware

We are ready to configure both the cookie and our custom OAuth2 middleware now.

```
app.UseCookieAuthentication(options => options.AutomaticAuthenticate = true);
app.UseAzureAD(options =>
               {
                   options.AuthenticationScheme = "AzureAD";
                   options.AutomaticChallenge = true;

                   options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;

                   options.AuthorizationEndpoint = authOptions.Value.AuthorizationEndpoint;
                   options.TokenEndpoint = authOptions.Value.TokenEndpoint;
                   options.ClientId = authOptions.Value.ClientId;
                   options.ClientSecret = authOptions.Value.ClientSecret;
                   options.CallbackPath = new Microsoft.AspNet.Http.PathString("/oauth");

                   options.Resource = authOptions.Value.Resource;

                   options.SaveTokensAsClaims = true;
               });
```

Again, if anything looks unclear you can always check out the whole source code here.

The AutomaticAuthenticate option makes the cookie middleware initialize the ClaimsPrincipal when it finds a valid cookie in the request.

Any middleware that's capable of performing any kind of authentication should be identified by a scheme so that it can be selected later by that scheme. Remember that we have inherited from the generic Microsoft.AspNet.Authentication.OAuth components and we need to provide some scheme name, e.g. 'AzureAD'. Also notice the SignInScheme parameter. When we obtain an access token and create a ClaimsPrincipal we want to actually perform a sign-in, and we select the cookie middleware by its scheme to do the job. The cookie middleware will serialize all claims from the principal and put them in the cookie payload that it's going to add to the response. And because we set SaveTokensAsClaims to true, our access and refresh tokens are going to end up in the cookie payload as well. This will increase the cookie size, of course, but it's the simplest way to implement token persistence.

There are a bunch of Azure AD specific settings. You can obtain the AuthorizationEndpoint and TokenEndpoint addresses for your tenant from the portal. ClientId and ClientSecret of your client app are self-explanatory. CallbackPath is the relative address that Azure AD will post the authorization code to. We don't have to provide an existing route for it; the middleware will take care of handling it. However, we do have to properly configure the reply URL for the client app in Azure AD, e.g.:

Reply URL option for the client app

The resource parameter is the App ID URI of BookFast API, i.e. 'https://devunleashed.onmicrosoft.com/book-fast-api'.

The AutomaticChallenge option will make the OAuth middleware kick in when a 401 is flowing back from MVC and start the flow. If you don't enable automatic challenge you're going to have to initiate it explicitly through the AuthenticationManager, selecting the desired authentication scheme. This is what you normally do when your application provides multiple authentication options to the users.

#### Handling sign-out

We also want to be able to clear that auth cookie, in other words, to implement a 'sign-out' experience. We can add a simple controller to do that:

```
public class AuthController : Controller
{
    public async Task<IActionResult> SignOut()
    {
        await HttpContext.Authentication.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
        return RedirectToAction("Index", "Home");
    }
}
```

### Calling the protected API

I've added a simple action to query pending booking requests of the current user:

```
[Authorize]
public async Task<IActionResult> Bookings()
{
    ViewBag.Title = "Hello, stranger!";
    var bookings = await proxy.LoadBookingsAsync(User.FindFirst("access_token").Value);

    return View(bookings);
}
```

The access token is retrieved from the current principal's claims. There is also a refresh token, and you should take care of implementing proper refresh logic in your apps. One great option is taking advantage of the ADAL library to manage tokens for you, but then you have to think about token persistence as the library by default stores them in memory. This is a great topic to explore but it's slightly out of this post's scope.

The proxy is basically a simple adapter over HttpClient and you can check it out on GitHub if you like.
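A minimal sketch of such a proxy (the type name and route are assumed): it attaches the Bearer token and deserializes the response with Json.NET:

```
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class BookingProxy
{
    private readonly HttpClient client =
        new HttpClient { BaseAddress = new Uri("https://localhost:44361/") };

    public async Task<List<BookingRepresentation>> LoadBookingsAsync(string accessToken)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, "api/bookings");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        var response = await client.SendAsync(request);
        response.EnsureSuccessStatusCode();

        return JsonConvert.DeserializeObject<List<BookingRepresentation>>(
            await response.Content.ReadAsStringAsync());
    }
}
```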

And notice the 'Hello, stranger!' title value that I'm passing to the view. This is to emphasize the point that even though the user authenticates against Azure AD and authorizes the client to call the API on her behalf, the client itself has no idea who the user is unless you implement a way for the client to find out, and that has nothing to do with OAuth.

### Handling requests in Azure AD protected API

I've actually covered what's happening on the API side in detail in one of my previous posts, so I'll just point you to it. I've described the receiving middleware, the token format and validation, and I also showed a couple of other OAuth2 flows to obtain access tokens.

### Testing out the flow

It all starts with navigating to the protected Bookings action:

```
GET https://localhost:44378/Home/Bookings HTTP/1.1


HTTP/1.1 302 Found
Content-Length: 0
Location: https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/authorize?client_id=8de1855f-f70d-4e3d-a05b-e2490cdee480&scope=&response_type=code&redirect_uri=https%3A%2F%2Flocalhost%3A44378%2Foauth&state=CfDJ8FBErJFK6EtMkyn6lrdZyOoGGm-1uAX89fdLJW80PGnPG2p7RyuYyETsMGN8XkGPsAhpBCIfBRCadCKz0JDX0mm7vMvk0oKXNi7PaeVxzwwn6AfRVXajCEPts8kUGCO0dOJSw5zZVBJvakHFCs3iEumMcwK0pq34iBTiugzBSIbuxM1sxAJeBXvO7jmE4PVIeAsxECKgDYK_CJkDULTyT6WIHl1J9ZLoZOjPr4a6YY0HXoxjvbpA7mz7_jMEv68VfwpleEHusVh-xQVnoi3Nz9pyjfWsj-9c9cfx-erpPiIx
Server: Kestrel
Set-Cookie: .AspNet.Correlation.AzureAD=crEHiJ3ZBw7x-oqul30R00lPCf6d50OfocRK9Xip5Fo; path=/; secure; httponly
```

The automatic challenge kicked in and redirected the user agent to the Azure AD authorization endpoint. We see the expected parameters such as client_id, response_type=code and redirect_uri. Notice the state parameter that is encrypted by the middleware and the .AspNet.Correlation.AzureAD cookie. This is the way the middleware implements CSRF protection. It is important to ensure that the authorization code that will be delivered back to us has actually been retrieved as a result of an authorization action triggered by a legitimate user in our client application. The authority (Azure AD in our case) is required to return the same value of the state parameter that it has been given, and the value of the .AspNet.Correlation.AzureAD cookie is included as part of the state. The middleware will compare both values before redeeming the code.

Once redirected to Azure AD the user experience depends on various factors: whether she's already signed in with Azure AD, whether she had already provided her consent before or the administrator had provided consent for the whole tenant.

If none of the above is true the user will be asked for her credentials and consent:

Azure AD consent page

You recognize the 'Access book-fast-api' permission that we added to the API app and assigned to the client app. There is also the 'Sign you in and read your profile' permission that comes from the 'Azure Active Directory' app and is added automatically to every application that you provision in Azure AD. This permission gives your client app read access to the user's profile and is required for any app that needs to authenticate users.

Once user authentication has been successful and the consent has been received, Azure AD redirects the user agent to redirect_uri together with an authorization code:

HTTP/1.1 302 Found
Location: https://localhost:44378/oauth?code=<authorization code value>&state=CfDJ8FBErJFK6EtMkyn6lrdZyOoGGm-1uAX89fdLJW80PGnPG2p7RyuYyETsMGN8XkGPsAhpBCIfBRCadCKz0JDX0mm7vMvk0oKXNi7PaeVxzwwn6AfRVXajCEPts8kUGCO0dOJSw5zZVBJvakHFCs3iEumMcwK0pq34iBTiugzBSIbuxM1sxAJeBXvO7jmE4PVIeAsxECKgDYK_CJkDULTyT6WIHl1J9ZLoZOjPr4a6YY0HXoxjvbpA7mz7_jMEv68VfwpleEHusVh-xQVnoi3Nz9pyjfWsj-9c9cfx-erpPiIx&session_state=ee505bf3-c0b5-43ea-80c9-f9110d3993a1

The middleware handles requests to the '/oauth' route as we configured it earlier:

GET https://localhost:44378/oauth?code=<authorization code value>&state=CfDJ8FBErJFK6EtMkyn6lrdZyOoGGm-1uAX89fdLJW80PGnPG2p7RyuYyETsMGN8XkGPsAhpBCIfBRCadCKz0JDX0mm7vMvk0oKXNi7PaeVxzwwn6AfRVXajCEPts8kUGCO0dOJSw5zZVBJvakHFCs3iEumMcwK0pq34iBTiugzBSIbuxM1sxAJeBXvO7jmE4PVIeAsxECKgDYK_CJkDULTyT6WIHl1J9ZLoZOjPr4a6YY0HXoxjvbpA7mz7_jMEv68VfwpleEHusVh-xQVnoi3Nz9pyjfWsj-9c9cfx-erpPiIx&session_state=ee505bf3-c0b5-43ea-80c9-f9110d3993a1 HTTP/1.1
Cookie: .AspNet.Correlation.AzureAD=crEHiJ3ZBw7x-oqul30R00lPCf6d50OfocRK9Xip5Fo


HTTP/1.1 302 Found
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Expires: -1
Location: https://localhost:44378/Home/Bookings
Server: Kestrel
Set-Cookie: .AspNet.Correlation.AzureAD=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/
Set-Cookie: .AspNet.Cookies=<auth cookie value>; path=/; secure; httponly

Wait, something has to be missing here! Before issuing that redirect response to 'https://localhost:44378/Home/Bookings' the middleware validated the state and redeemed the authorization code:

POST https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token HTTP/1.1
Accept: application/json
User-Agent: Microsoft ASP.NET OAuth middleware

client_id=8de1855f-f70d-4e3d-a05b-e2490cdee480&redirect_uri=https%3A%2F%2Flocalhost%3A44378%2Foauth&client_secret=<client secret>&code=<authorization code value>&grant_type=authorization_code&resource=https%3A%2F%2Fdevunleashed.onmicrosoft.com%2Fbook-fast-api


HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 2932

{
  "token_type": "Bearer",
  "scope": "user_impersonation",
  "expires_in": "3599",
  "expires_on": "1463157915",
  "not_before": "1463154015",
  "resource": "https://devunleashed.onmicrosoft.com/book-fast-api",
  "access_token": "<access token value>",
  "refresh_token": "<refresh token value>",
  "id_token": "<id token value>"
}

The obtained tokens have been added as claims to the ClaimsPrincipal and serialized into the '.AspNet.Cookies' cookie that was set up in the final redirect to 'https://localhost:44378/Home/Bookings'.

What's the id_token that we see in the response from the token endpoint? Azure AD tries to make a client app developer's life easier and includes an OpenID Connect ID token in the response. ID tokens are meant to be consumed (i.e. parsed) by clients to obtain identity information about their users. Should we use it here? Well, it's not part of the OAuth2 spec and thus we probably can't rely on this behavior of Azure AD being preserved in the future. Besides, if we want ID tokens we should start talking OpenID Connect to Azure AD in the first place!

Accessing Azure AD protected resources using OpenID Connect


Last time we had a look at the canonical OAuth2 Authorization Grant and tested it with ASP.NET Core based API and web applications. We identified key characteristics of the flow and emphasized the authorization nature of it and of the OAuth2 protocol in general. This time let's have a look at the user identity side of the story and the OpenID Connect protocol that reveals the identity to client applications.

Pretty much everything related to setting applications up in Azure Active Directory that I described in the earlier post applies here as well so I am not going to repeat it. Configuring the API application middleware to handle JWT tokens stays the same too so in this post we're mostly going to focus on the client application.

### OpenID Connect

But first let's have a quick introduction to OpenID Connect and the scenarios it supports. As defined in the specification:

OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It enables Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.

It brings an additional artifact to the game called 'ID token' that is client side parsable and verifiable and contains information about the user identity. There are 3 flows in the protocol that make it applicable in various scenarios:

#### Authorization Code flow

This flow is very similar to the OAuth2 Authorization Code Grant and we get the ID token when we redeem the authorization code against the token endpoint.

OpenID Connect Authorization Code flow

We identify the flow we want with the response_type parameter in the request to the authorization endpoint. For the current flow we just want to get a code back, so we set it to code and get the ID token later from the token endpoint. This is somewhat more secure as the ID token is retrieved as a result of a server-to-server call. Also note the scope parameter that must include the openid value to indicate to the authorization server that we are talking OpenID Connect.

Normally there will also be state and nonce parameters that are used to prevent forgery. state can be used together with browser cookies to prevent CSRF, and in fact state is also recommended in vanilla OAuth2. nonce is generated by the client and ties the authentication session to the issued ID token: the authorization server will include it in the token and the client can verify it by comparing the value from the token with the value it stored somewhere, e.g. in a cookie.
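
To make it concrete, a code flow authorization request might look like this (line breaks added for readability; the tenant and parameter values are placeholders):

```
GET https://login.microsoftonline.com/{tenant}/oauth2/authorize
    ?client_id={client_id}
    &response_type=code
    &scope=openid
    &redirect_uri={redirect_uri}
    &state={state}
    &nonce={nonce}
```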

Even though the response from the authorization server is presented as a 302 redirect on the diagram, it really depends on yet another parameter called response_mode. There are a couple of additional specifications (this and this) that define 3 possible values in total: query, fragment and form_post. The last one is interesting and is the default used by the OpenID Connect middleware for ASP.NET Core that we're going to configure in a moment. Instead of redirecting back to the client app, it makes Azure AD return a form containing the requested artifacts (in our case code) that auto-posts itself to the client app with a little help from some embedded JavaScript. On the diagram we can see the case when query was used as the response mode, and this may be somewhat less optimal as the artifacts may get saved in the browser history for a while.

#### Implicit flow

This flow is pretty close to the OAuth2 Implicit Grant and is to be used by non-confidential clients such as JavaScript or native applications. The idea is that access and ID tokens are returned directly from the authorization endpoint and clients are not authenticated.

OpenID Connect Implicit flow

In OpenID Connect the response_type should be set to either id_token or id_token token to enable the flow. If we don't need to call other services and we just want to perform a federated authentication, we can request just 'id_token' from the endpoint.

Tokens are returned in the URI fragment (notice the # sign) and thus remain visible on the client side only. The user agent performs the requested redirect and the client app returns a page with embedded JavaScript that is able to retrieve the tokens from the fragment.

#### Hybrid flow

As the name implies this flow allows us to decide when we want to return any of the artifacts. The acceptable values for the response_type parameter are code id_token, code token or code id_token token.

OpenID Connect Hybrid flow

Sometimes we want to get the ID token earlier with a response from the authorization endpoint to be able to set up a security context in our client applications before we call the token endpoint.

#### UserInfo and metadata endpoints

There are a couple more things I'd like to mention before I wrap up this quick introduction to OpenID Connect. While the ID token provides minimal viable information about the user identity to the client application, the authority also exposes a UserInfo endpoint (/userinfo) that we can use to obtain additional claims about the user.

Also, as part of OpenID Connect discovery, the authority implements a metadata endpoint that relying parties can use to get the metadata necessary to validate tokens (both ID and access ones). The metadata endpoint is normally exposed at the following address: <authority URL>/.well-known/openid-configuration and contains, among other things, the URLs of the authorization and token endpoints, the issuer, the UserInfo endpoint and the JSON Web Key Set with public keys we should use to verify the signatures of JWT tokens.

### Configure OpenID Connect middleware in ASP.NET Core

Now that we have a general understanding of the protocol let's see how we can configure the OpenID Connect middleware in an ASP.NET Core web application to work with Azure Active Directory.

We actually need to add a couple of middleware to the pipeline:

"dependencies": {"Microsoft.AspNetCore.Authentication.Cookies": "1.0.0-rc2-final","Microsoft.AspNetCore.Authentication.OpenIdConnect": "1.0.0-rc2-final","Microsoft.IdentityModel.Clients.ActiveDirectory": "3.10.305231913"
}

We need the cookies middleware too as normally in web applications a successful authentication results in a cookie being added to the response so that subsequent requests wouldn't require the user to go through the authentication process all over again.

#### Active Directory Authentication Library (ADAL)

The OpenID Connect middleware is not Azure AD specific and can work with just about any identity provider that implements the protocol. Now the problem is that Azure AD has its own dialect as it requires a resource parameter to be added to requests to its token endpoints. And while the OpenID Connect middleware is able to redeem the authorization code on its own, it won't work here because it won't add this parameter to the request. Note that I'm talking about v1 endpoints of Azure AD; things are different in the v2 endpoints which are currently in preview.

Besides support for the Azure AD dialect we should also take care of token persistence and handling token refresh. In my previous post I used cookies to store tokens, which has its pros and cons, but when combined with the task of handling token refresh, server side persistence starts looking like the preferable option.

We are going to use the Active Directory Authentication Library (aka ADAL) to help us with all of these issues: dialect, token persistence and refresh. There are versions of the library for various platforms and languages and its source is open and available on GitHub.

ADAL provides an in-memory token cache; however, it has extensibility points that allow us to persist the serialized cache to, and load it from, the storage of our choice as needed. You can read more about the cache and its model on Vittorio Bertocci's blog.
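
To give you an idea of what those extensibility points look like, here's a rough sketch of a cache that persists its serialized state through the BeforeAccess/AfterAccess notifications; the IStateStore abstraction is made up for the example and could be backed by Redis, a database, etc.:

```
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public interface IStateStore
{
    byte[] Load(string key);
    void Save(string key, byte[] state);
}

public class PersistedTokenCache : TokenCache
{
    public PersistedTokenCache(IStateStore store, string key)
    {
        // Called before ADAL reads the cache - load the latest persisted state
        BeforeAccess = args => Deserialize(store.Load(key));

        // Called after ADAL has used the cache - persist it if it has changed
        AfterAccess = args =>
        {
            if (HasStateChanged)
            {
                store.Save(key, Serialize());
                HasStateChanged = false;
            }
        };
    }
}
```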

What about refreshing tokens? ADAL handles that too. In fact, v3 of the library stopped exposing refresh tokens from the AuthenticationResult object that we get after calling token endpoints. When you request an access token from ADAL (the cache, to be exact) and it finds out that the token has expired or is about to expire and there is a valid refresh token in the cache, ADAL will issue a request to the token endpoint with the refresh token, put the new tokens in the cache and notify you to persist the updated cache. Of course, it will return the newly obtained access token for you to use.

This is really handy and it frees us from writing this somewhat tedious infrastructural logic.

#### Back to configuring the middleware

public void Configure(IApplicationBuilder app,
    IHostingEnvironment env,
    ILoggerFactory loggerFactory,
    IOptions<Infrastructure.Authentication.AuthenticationOptions> authOptions)
{
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AutomaticAuthenticate = true
    });

    app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
    {
        AutomaticChallenge = true,

        Authority = authOptions.Authority,
        ClientId = authOptions.ClientId,
        ClientSecret = authOptions.ClientSecret,

        ResponseType = OpenIdConnectResponseTypes.CodeIdToken,

        SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme,
        PostLogoutRedirectUri = authOptions.PostLogoutRedirectUri,

        Events = CreateOpenIdConnectEventHandlers(authOptions)
    });
}

private static IOpenIdConnectEvents CreateOpenIdConnectEventHandlers(AuthenticationOptions authOptions)
{
    return new OpenIdConnectEvents
    {
        OnAuthorizationCodeReceived = async context =>
        {
            var clientCredential = new ClientCredential(authOptions.ClientId, authOptions.ClientSecret);
            var authenticationContext = new AuthenticationContext(authOptions.Authority);
            await authenticationContext.AcquireTokenByAuthorizationCodeAsync(context.TokenEndpointRequest.Code,
                new Uri(context.TokenEndpointRequest.RedirectUri, UriKind.RelativeOrAbsolute),
                    clientCredential, authOptions.ApiResource);

            context.HandleCodeRedemption();
        }
    };
}

I can't help but direct you again to my post on configuring the OAuth2 middleware for the Authorization Code Grant because most of the options have already been explained there.

You've probably noticed that we set response_type to code id_token which effectively enables the hybrid flow and we get the ID token when we get the control back from the authorization endpoint.

And we also intercept the OnAuthorizationCodeReceived event to take advantage of ADAL to redeem the code and cache tokens. We make sure to notify the OpenID Connect middleware that we've handled this part by calling context.HandleCodeRedemption() so it doesn't try to redeem the code on its own.

### Requesting an access token from ADAL

When we need to make a call to a protected resource we should first get the access token from ADAL and then add it to our request to the resource. Here's what it would normally look like in a web application:

public static async Task<string> AcquireAccessTokenAsync(AuthenticationOptions authOptions)
{
    var clientCredential = new ClientCredential(authOptions.ClientId, authOptions.ClientSecret);
    var authenticationContext = new AuthenticationContext(authOptions.Authority);

    try
    {
        var authenticationResult = await authenticationContext.AcquireTokenSilentAsync(authOptions.ApiResource,
            clientCredential, UserIdentifier.AnyUser);

        return authenticationResult.AccessToken;
    }
    catch (AdalSilentTokenAcquisitionException)
    {
        // TODO: log it or do whatever makes sense for your app
        return null;
    }
}

That AcquireTokenSilentAsync is where all the ADAL magic happens. It will also take advantage of the multi-resource nature of Azure AD refresh tokens should we request an access token for a resource that is different from the one we initially used in the OnAuthorizationCodeReceived event handler (if the token for that resource hasn't been cached yet).
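
For instance, assuming the client app has also been given delegated access to a second API, the very same cache can silently produce a token for it (the resource URI below is made up for the example):

```
// ADAL will use the cached multi-resource refresh token to obtain
// an access token for a resource we haven't requested one for yet
var result = await authenticationContext.AcquireTokenSilentAsync(
    "https://devunleashed.onmicrosoft.com/another-api",
    new ClientCredential(authOptions.ClientId, authOptions.ClientSecret),
    UserIdentifier.AnyUser);

var accessToken = result.AccessToken;
```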

June, 2016 meet-up of Belarus Azure User Group


Last week, on June 21, we had another meet-up of our local Azure User Group where people interested in learning how Azure can help them build the great solutions of tomorrow share their experiences and adventures. This time around we talked about Azure Active Directory and efficient file storage.

Belarus Azure User Group Meet-up June, 2016

I labeled my talk "Azure AD for developers" as I was trying to demonstrate how relatively easy it is for developers to take advantage of Azure AD when implementing security requirements that are common in today's applications. I used my playground app BookFast that I've already mentioned in various posts. We had a look at federated sign-in for corporate users through v1 endpoints, glanced at customer sign-in using the v2 endpoints of Azure AD B2C, and also learned how we can delegate access to remote services. By building on top of a robust, secure and scalable identity infrastructure, developers have more time to focus on delivering astonishing user experience and features in their applications, and at the same time can sleep well knowing that hundreds of security experts at Microsoft are taking care of their identity story. This is what 'standing on the shoulders of giants', as Scott Hanselman often puts it, really is and this is what keeps me thrilled as I look forward.

Dzmitry Durasau shared some unique experience of his and his team building reliable and efficient file storage solutions on Azure. He reviewed several options such as File Storage, StorSimple and home-grown IaaS-based solutions. All have their usage scenarios and little pitfalls that you only get to know once you have spent some time with them and picked up some bruises. What I personally took away from Dzmitry's talk is that technology and architecture that seem to work well on premises may fall short in the public cloud, and it once again confirmed that migration to the cloud isn't just moving old stuff from your basement to a VM. A mindset change, re-design and dropping old patterns and ways of doing things are essential parts of a successful migration. And Azure allows you to build efficient hybrid solutions and keep things where they work best. But I seem to have sailed away from Dzmitry's talk a bit, lol!

Overall, it was a fantastic event and I'm looking forward to meeting with our community again!


Setting up your ASP.NET Core apps and services for Azure AD B2C


So far we've been looking at corporate or organizational accounts in context of working with Azure AD. But for customer facing applications it's important to provide a way for users to register themselves and use their existing accounts in various well-known services to authenticate with your applications. Today we're going to look at Azure AD B2C, the service designed specifically to serve individuals consuming your apps, and how to configure it in your ASP.NET Core web applications.

### Setting up a directory

You still create B2C directories just like the regular directories on the classic portal.

Creating an Azure AD B2C directory

Even though you get some tabs here and may try to configure the directory from this page, chances are it's not going to work. Things are pretty much in flux and remember that Azure AD B2C is itself in preview. So head straight to the new portal to manage your newly created directory. Just make sure to select your B2C directory in the upper right corner.

B2C app settings

The apps have pretty standard settings that you've probably gotten used to: Application (client) ID, secret (key), return URLs, etc. One important thing to note is that if you want to implement a delegated access scenario where you have a client app and some remote services, both the app and the services will share the same application in Azure AD B2C. I'm not sure if it will stay this way but for now this is the way things are.

I'm going to use a sample solution of two ASP.NET Core applications that you can clone from GitHub.

### Configuring the API application

Azure AD B2C uses v2 endpoints and its tokens are still in JWT format, same as with the organizational v1 endpoints. That means we need the Microsoft.AspNetCore.Authentication.JwtBearer middleware, and its configuration looks almost identical to the one we used for the classic directory.

public void Configure(IApplicationBuilder app, IOptions<AuthenticationOptions> authOptions)
{
    app.UseJwtBearerAuthentication(new JwtBearerOptions
    {
        AutomaticAuthenticate = true,
        AutomaticChallenge = true,

        MetadataAddress =
            $"{authOptions.Value.Authority}/.well-known/openid-configuration?p={authOptions.Value.SignInOrSignUpPolicy}",
        Audience = authOptions.Value.Audience,

        Events = new JwtBearerEvents
                 {
                     OnAuthenticationFailed = ctx =>
                                              {
                                                  ctx.SkipToNextMiddleware();
                                                  return Task.FromResult(0);
                                              }
                 }
    });

    app.UseMvc();
}

Except for one thing: instead of setting the authority, which is our tenant's URL in the classic directory, we specify the full URL of the OpenID Connect metadata endpoint. And the reason for that is the query string parameter, called p, that identifies a policy. In Azure AD B2C policies define the end user experience and enable much greater customization options than the ones available in the classic directory. The official documentation covers policies and other concepts in great detail so I suggest you have a look at it.

In Azure AD B2C the policy is a required parameter in requests to authorization and token endpoints. For instance, if we query the metadata endpoint we get the following output:

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/8.5
Set-Cookie: x-ms-cpim-slice=001-000; domain=microsoftonline.com; path=/; secure; HttpOnly
Set-Cookie: x-ms-cpim-trans=; expires=Mon, 04-Jul-2016 20:02:00 GMT; path=/; secure; HttpOnly
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Set-Cookie: x-ms-gateway-slice=001-000; path=/; secure; HttpOnly
Set-Cookie: stsservicecookie=cpim_te; path=/; secure; HttpOnly
X-Powered-By: ASP.NET
Date: Tue, 05 Jul 2016 20:01:58 GMT
Content-Length: 1208

{
  "issuer": "https://login.microsoftonline.com/bc2fb659-725b-48d8-b571-7420094e41cc/v2.0/",
  "authorization_endpoint": "https://login.microsoftonline.com/devunleashedb2c.onmicrosoft.com/oauth2/v2.0/authorize?p=b2c_1_testsignupandsigninpolicy",
  "token_endpoint": "https://login.microsoftonline.com/devunleashedb2c.onmicrosoft.com/oauth2/v2.0/token?p=b2c_1_testsignupandsigninpolicy",
  "end_session_endpoint": "https://login.microsoftonline.com/devunleashedb2c.onmicrosoft.com/oauth2/v2.0/logout?p=b2c_1_testsignupandsigninpolicy",
  "jwks_uri": "https://login.microsoftonline.com/devunleashedb2c.onmicrosoft.com/discovery/v2.0/keys?p=b2c_1_testsignupandsigninpolicy",
  "response_modes_supported": ["query", "fragment", "form_post"],
  "response_types_supported": ["code", "id_token", "code id_token"],
  "scopes_supported": ["openid"],
  "subject_types_supported": ["pairwise"],
  "id_token_signing_alg_values_supported": ["RS256"],
  "token_endpoint_auth_methods_supported": ["client_secret_post"],
  "claims_supported": ["oid", "newUser", "idp", "emails", "name", "sub"]
}

Not only does it provide policy specific endpoints, it also gives information about the claims that I configured to be included in tokens for this specific policy.

### Configuring a web client application

In the ASP.NET Core web client we use the same pair of cookies and OpenID Connect middleware that we used before, and we are also going to take advantage of the ADAL library to help us with token management. As you probably remember, one of the reasons we use ADAL is to support Azure AD specific requirements. In the case of the classic directory that was the resource parameter; however, it's not relevant anymore in Azure AD B2C. Here we have another required parameter, p, and thus we need a special version of ADAL that supports it.

Another special case with Azure AD B2C is that its token endpoints do not issue access tokens. Yup, they will give you ID tokens with optional refresh tokens, and you're supposed to use ID tokens as access tokens when calling your API apps. The standard OpenID Connect middleware cannot be used to redeem the authorization code as it will fail to find an access token in the response.

"dependencies": {"Microsoft.AspNetCore.Authentication.Cookies": "1.0.0","Microsoft.AspNetCore.Authentication.OpenIdConnect": "1.0.0","Microsoft.Experimental.IdentityModel.Clients.ActiveDirectory": "4.0.209160138-alpha"
}

It's called 'experimental' and the Azure AD team is likely going to switch its efforts to the new library called MSAL for all v2 endpoints including B2C. So you definitely want to keep an eye on that, but meanwhile we're going to use the experimental ADAL package.

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AutomaticAuthenticate = true
});

var openIdConnectOptions = new OpenIdConnectOptions
{
    AuthenticationScheme = Constants.OpenIdConnectAuthenticationScheme,
    AutomaticChallenge = true,

    Authority = authOptions.Value.Authority,
    ClientId = authOptions.Value.ClientId,
    ClientSecret = authOptions.Value.ClientSecret,
    PostLogoutRedirectUri = authOptions.Value.PostLogoutRedirectUri,

    ConfigurationManager = new PolicyConfigurationManager(authOptions.Value.Authority,
        new[] { b2cPolicies.Value.SignInOrSignUpPolicy, b2cPolicies.Value.EditProfilePolicy }),
    Events = CreateOpenIdConnectEventHandlers(authOptions.Value, b2cPolicies.Value),

    ResponseType = OpenIdConnectResponseType.CodeIdToken,
    TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name"
    },

    SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme
};

openIdConnectOptions.Scope.Add("offline_access");

If you've been following my posts on working with Azure AD or have been playing with it on your own, most of the parameters should be familiar to you, so I will just describe the settings that are unique to Azure AD B2C. If we want to get refresh tokens we need to add a special scope called offline_access, and we also need to implement a configuration manager that takes policies into account when making requests to metadata endpoints. Remember that the default behavior is to simply append .well-known/openid-configuration to the authority parameter, and that's not enough in this case.

A possible implementation of the PolicyConfigurationManager can be found in official samples and here you can find the one I used in my demo solution.
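
Conceptually, such a configuration manager can simply maintain one standard ConfigurationManager per policy. Here's a stripped-down sketch of the idea (the real implementations linked above handle refresh and edge cases more carefully):

```
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Protocols;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;

public class PolicyConfigurationManager : IConfigurationManager<OpenIdConnectConfiguration>
{
    private readonly Dictionary<string, ConfigurationManager<OpenIdConnectConfiguration>> managers;

    public PolicyConfigurationManager(string authority, IEnumerable<string> policies)
    {
        // One standard configuration manager per policy-specific metadata address
        managers = policies.ToDictionary(
            policy => policy,
            policy => new ConfigurationManager<OpenIdConnectConfiguration>(
                $"{authority}/.well-known/openid-configuration?p={policy}",
                new OpenIdConnectConfigurationRetriever()));
    }

    public Task<OpenIdConnectConfiguration> GetConfigurationByPolicyAsync(CancellationToken cancel, string policy)
        => managers[policy].GetConfigurationAsync(cancel);

    // Members required by IConfigurationManager; fall back to the first policy
    public Task<OpenIdConnectConfiguration> GetConfigurationAsync(CancellationToken cancel)
        => managers.Values.First().GetConfigurationAsync(cancel);

    public void RequestRefresh()
    {
        foreach (var manager in managers.Values)
        {
            manager.RequestRefresh();
        }
    }
}
```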

CreateOpenIdConnectEventHandlers allows us to intercept the flow by subscribing to various events:

private static IOpenIdConnectEvents CreateOpenIdConnectEventHandlers(B2CAuthenticationOptions authOptions, B2CPolicies policies)
{
    return new OpenIdConnectEvents
    {
        OnRedirectToIdentityProvider = context => SetIssuerAddressAsync(context, policies.SignInOrSignUpPolicy),
        OnRedirectToIdentityProviderForSignOut = context => SetIssuerAddressForSignOutAsync(context, policies.SignInOrSignUpPolicy),
        OnAuthorizationCodeReceived = async context =>
        {
          var credential = new ClientCredential(authOptions.ClientId, authOptions.ClientSecret);
          var authenticationContext = new AuthenticationContext(authOptions.Authority);
          var result = await authenticationContext.AcquireTokenByAuthorizationCodeAsync(context.TokenEndpointRequest.Code,
                             new Uri(context.TokenEndpointRequest.RedirectUri, UriKind.RelativeOrAbsolute), credential,
                             new[] { authOptions.ClientId }, context.Ticket.Principal.FindFirst(Constants.AcrClaimType).Value);

          context.HandleCodeRedemption();
        },
        OnAuthenticationFailed = context =>
        {
            context.HandleResponse();
            context.Response.Redirect("/home/error");
            return Task.FromResult(0);
        }
    };
}

private static async Task SetIssuerAddressAsync(RedirectContext context, string defaultPolicy)
{
    var configuration = await GetOpenIdConnectConfigurationAsync(context, defaultPolicy);
    context.ProtocolMessage.IssuerAddress = configuration.AuthorizationEndpoint;
}

private static async Task SetIssuerAddressForSignOutAsync(RedirectContext context, string defaultPolicy)
{
    var configuration = await GetOpenIdConnectConfigurationAsync(context, defaultPolicy);
    context.ProtocolMessage.IssuerAddress = configuration.EndSessionEndpoint;
}

private static async Task<OpenIdConnectConfiguration> GetOpenIdConnectConfigurationAsync(RedirectContext context, string defaultPolicy)
{
    var manager = (PolicyConfigurationManager)context.Options.ConfigurationManager;
    var policy = context.Properties.Items.ContainsKey(Constants.B2CPolicy) ? context.Properties.Items[Constants.B2CPolicy] : defaultPolicy;
    var configuration = await manager.GetConfigurationByPolicyAsync(CancellationToken.None, policy);
    return configuration;
}

As you can see we need to set the correct endpoint addresses including the policy parameter when the user gets redirected to the authorization and sign-out pages. We use our custom PolicyConfigurationManager to determine the correct endpoints based on the Constants.B2CPolicy property that is set by the AccountController in response to appropriate actions: sign in, sign up, edit profile or sign out. Please check out the code to get a better picture of how things work.

OnAuthorizationCodeReceived is where we redeem the authorization code using the experimental version of ADAL. We need a policy parameter, and Constants.AcrClaimType corresponds to the http://schemas.microsoft.com/claims/authnclassreference claim that is present in ID tokens issued by Azure AD B2C and contains the name of the active policy. Finally, we notify the OpenID Connect middleware that we've managed code redemption ourselves by calling context.HandleCodeRedemption().

Enabling multitenant support in your Azure AD protected applications


Azure AD is a multitenant directory and it comes as no surprise that it supports scenarios where applications defined in one tenant are accessible by users from other tenants (directories). In this post we're going to look at how to enable our client and API applications to be multitenant and what common pitfalls or errors you may encounter when doing this. I'm going to keep using my Book Fast and Book Fast API sample ASP.NET Core applications which I've recently updated to support multitenancy.

### Enabling multitenant sign-in

One of the key properties you set when configuring OpenID Connect middleware is the Authority which is basically the address to be used to retrieve the necessary metadata about the identity provider. In single tenant applications you set it to something like 'https://login.microsoftonline.com/{tenantId}' where tenantId is either a Guid or a domain identifier of your tenant, e.g. 'https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0' or 'https://login.microsoftonline.com/devunleashed.onmicrosoft.com'. When you're dealing with multitenant applications you can't use a specific tenant as an authority and Azure AD provides you with a reserved word that you use when defining the authority: common. Note that it's not a tenant identifier but rather a special endpoint that implements multitenant support.
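
In terms of the OpenID Connect middleware configuration from the previous posts, this is literally a one-line change; a minimal sketch (the rest of the options stay as before):

```
app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    // The 'common' endpoint enables sign-in of users from any tenant
    Authority = "https://login.microsoftonline.com/common",

    ClientId = authOptions.ClientId,
    ClientSecret = authOptions.ClientSecret,
    ResponseType = OpenIdConnectResponseTypes.CodeIdToken,
    SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme
});
```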

When a user is redirected to the common authorization endpoint like this:

GET https://login.microsoftonline.com/common/oauth2/authorize?client_id={...}&redirect_uri={...}&response_type=code%20id_token&scope=openid%20profile&response_mode=form_post&nonce={...}&state={...}

Azure AD collects the email address as the login and tries to figure out which tenant should handle the credentials (based on the domain used in the email address). 'newfella@devunleashed.onmicrosoft.com' will be handled by 'Dev Unleashed' tenant and 'testuser@wildmonkeys.onmicrosoft.com' will be directed to 'Wild Monkeys'.

Now, if you try to sign in to an application from 'Dev Unleashed' with a user from 'Wild Monkeys' you're going to get the following error:

AADSTS70001: Application with identifier '48c8741b-fc13-4a02-bb8f-4bf4df1b3c78' was not found in the directory wildmonkeys.onmicrosoft.com

Any application that can sign in users requires access to the user's profile. This delegated permission is added by default to any application you create in Azure AD. Now, of course, an application from tenant A cannot access user profiles from tenant B by default, but Azure AD allows you to enable this by setting the availableToOtherTenants property in the application's manifest to true (there is also a corresponding setting on the portal).

When the application is configured as multitenant and a user from another tenant tries to sign in, she is presented with a consent page. Once she has given her consent for the app from another tenant to access whatever resources it declares it requires, Azure AD takes care of provisioning a ServicePrincipal for the app in the user's tenant and registering the permissions.

Multitenant consent page

A ServicePrincipal is a representative of an application in a particular tenant. The application itself can be defined in the same or another tenant, but if consent was given to it to access resources in a particular tenant, Azure AD creates a ServicePrincipal there (if one does not exist yet) and registers the permission(s) given. You can read about Applications and ServicePrincipals here.

Using a tool such as Graph Explorer you can check out the granted permissions in the target tenant:

GET https://graph.windows.net/wildmonkeys.onmicrosoft.com/oauth2PermissionGrants

{
  "odata.metadata": "https://graph.windows.net/wildmonkeys.onmicrosoft.com/$metadata#oauth2PermissionGrants",
  "value": [
    {
      "clientId": "c4cb11ab-c343-4a39-8d84-b6c600e0a324",
      "consentType": "Principal",
      "expiryTime": "2017-01-22T11:34:21.7387474",
      "objectId": "qxHLxEPDOUqNhLbGAOCjJBmeQVa8Y7ZAr7te6ViVxJn-JPVnmKcOR5SPsH9SXnQv",
      "principalId": "67f524fe-a798-470e-948f-b07f525e742f",
      "resourceId": "56419e19-63bc-40b6-afbb-5ee95895c499",
      "scope": "User.Read",
      "startTime": "0001-01-01T00:00:00"
    },
    ...
  ]
}

Let's decipher this. All objects in Azure AD have their unique object identifiers (objectId) and these are the Guids you see here. What the record above says is that an application's ServicePrincipal c4cb11ab-c343-4a39-8d84-b6c600e0a324 (Book Fast) was given a permission to access a target resource's ServicePrincipal 56419e19-63bc-40b6-afbb-5ee95895c499 (Azure Active Directory) on behalf of user 67f524fe-a798-470e-948f-b07f525e742f. And the scope of this access is limited to the permission called 'User.Read' that is defined by the Azure Active Directory application (in its manifest, but it's also copied to its ServicePrincipal in the 'Wild Monkeys' tenant which you can check out at https://graph.windows.net/wildmonkeys.onmicrosoft.com/servicePrincipals/56419e19-63bc-40b6-afbb-5ee95895c499).

### Issuer validation

If you try to sign in again you're going to face another error, this time coming from the middleware:

Microsoft.IdentityModel.Tokens.SecurityTokenInvalidIssuerException: IDX10205: Issuer validation failed. Issuer: 'https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/'. Did not match: validationParameters.ValidIssuer: 'https://sts.windows.net/{tenantid}/' or validationParameters.ValidIssuers: 'null'.

When the OpenID Connect middleware gets returned an ID token it tries to validate it and part of the validation procedure is the verification of the issuer (hey, if we want to trust the token we need to make sure it has been issued by the authority we trust). The middleware requests the necessary metadata from the common authority as we have configured it:

GET https://login.microsoftonline.com/common/.well-known/openid-configuration HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Content-Length: 1134

{
  "authorization_endpoint": "https://login.microsoftonline.com/common/oauth2/authorize",
  "token_endpoint": "https://login.microsoftonline.com/common/oauth2/token",
  "token_endpoint_auth_methods_supported": ["client_secret_post", "private_key_jwt"],
  "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
  "response_modes_supported": ["query", "fragment", "form_post"],
  "subject_types_supported": ["pairwise"],
  "id_token_signing_alg_values_supported": ["RS256"],
  "http_logout_supported": true,
  "response_types_supported": ["code", "id_token", "code id_token", "token id_token", "token"],
  "scopes_supported": ["openid"],
  "issuer": "https://sts.windows.net/{tenantid}/",
  "claims_supported": ["sub", "iss", "aud", "exp", "iat", "auth_time", "acr", "amr", "nonce", "email", "given_name", "family_name", "nickname"],
  "microsoft_multi_refresh_token": true,
  "check_session_iframe": "https://login.microsoftonline.com/common/oauth2/checksession",
  "end_session_endpoint": "https://login.microsoftonline.com/common/oauth2/logout",
  "userinfo_endpoint": "https://login.microsoftonline.com/common/openid/userinfo",
  "tenant_region_scope": null,
  "cloud_instance_name": "microsoftonline.com"
}

Boom! Look at the issuer: 'https://sts.windows.net/{tenantid}/'. The common metadata endpoint has no idea about the tenant, so the middleware tried to fall back on the pre-configured issuer(s) in the validation parameters but could not find any.

There are a few things you can do about it:

  • Specify a set of well-known issuers. Set the ValidIssuers property on Microsoft.IdentityModel.Tokens.TokenValidationParameters to an array of issuers you support (use the full URI like 'https://sts.windows.net/c478e084-357e-4b68-9275-6744b7d71d10/').
  • Implement a custom issuer validator and assign it to the IssuerValidator property; see the sketch after this list. This is useful when you can't specify a predefined list of issuers in configuration and need some runtime logic to determine whether you trust the issuer presented in the token.
  • Disable issuer validation by setting the ValidateIssuer property to false. I seriously think you should not resort to this unless you have your reasons.
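
Here's a rough sketch of the first two options applied to TokenValidationParameters (the issuer list uses my two test tenants, and IsKnownTenant is a hypothetical lookup, e.g. against your database of onboarded tenants):

```
TokenValidationParameters = new TokenValidationParameters
{
    // Option 1: a fixed list of tenants you trust
    ValidIssuers = new[]
    {
        "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/",
        "https://sts.windows.net/c478e084-357e-4b68-9275-6744b7d71d10/"
    },

    // Option 2: custom runtime logic (takes precedence over the list above)
    IssuerValidator = (issuer, token, parameters) =>
    {
        // IsKnownTenant is a made-up helper for this example
        if (IsKnownTenant(issuer))
        {
            return issuer;
        }

        throw new SecurityTokenInvalidIssuerException($"Unknown issuer: {issuer}");
    }
}
```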

One more thing to note is that signing keys are common for all tenants. Whether you request the metadata from a tenant specific endpoint or the common endpoint you always get the 'https://login.microsoftonline.com/common/discovery/keys' URL to get the keys.

### Configuring your API apps to be multitenant

When your multitenant applications rely on your custom API apps you need to make sure the API apps are multitenant enabled as well, as they need to be able to validate access tokens issued by different tenants. On the portal (or in the app's manifest) it's still the same 'Application is multitenant' property that needs to be set to true.

If you had previously defined a dependency in your client app on the API app (by enabling certain delegated permissions) and now try to sign in with another tenant's user, you may get the following error:

OpenIdConnectProtocolException: Message contains error: 'access_denied', error_description: 'AADSTS50000: There was an error issuing a token. AADSTS65005: The application needs access to a service that your organization Wild Monkeys has not subscribed to. Please contact your Administrator to review the configuration of your service subscriptions.

Azure AD automatically provisions a ServicePrincipal for the client app (given it has been multitenant enabled and the user has provided her consent) but it needs a little help to provision the downstream API app(s). In the manifest of the API app there is a property called knownClientApplications where you should provide a list of the clients you support (the property takes an array of client IDs).
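
In Book Fast API's manifest that would look something like this (using the client app ID we saw in the earlier error message):

```
"knownClientApplications": ["48c8741b-fc13-4a02-bb8f-4bf4df1b3c78"]
```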

Now the consent should be processed successfully and the delegated permission should be registered:

GET https://graph.windows.net/wildmonkeys.onmicrosoft.com/oauth2PermissionGrants

{
  "odata.metadata": "https://graph.windows.net/wildmonkeys.onmicrosoft.com/$metadata#oauth2PermissionGrants",
  "value": [
    {
      "clientId": "c4cb11ab-c343-4a39-8d84-b6c600e0a324",
      "consentType": "Principal",
      "expiryTime": "2017-01-22T11:34:21.7387474",
      "objectId": "qxHLxEPDOUqNhLbGAOCjJBmeQVa8Y7ZAr7te6ViVxJn-JPVnmKcOR5SPsH9SXnQv",
      "principalId": "67f524fe-a798-470e-948f-b07f525e742f",
      "resourceId": "56419e19-63bc-40b6-afbb-5ee95895c499",
      "scope": "User.Read",
      "startTime": "0001-01-01T00:00:00"
    },
    {
      "clientId": "c4cb11ab-c343-4a39-8d84-b6c600e0a324",
      "consentType": "Principal",
      "expiryTime": "2017-01-22T11:34:21.7387474",
      "objectId": "qxHLxEPDOUqNhLbGAOCjJP-nroHHIWVKj6aM72kNWzj-JPVnmKcOR5SPsH9SXnQv",
      "principalId": "67f524fe-a798-470e-948f-b07f525e742f",
      "resourceId": "81aea7ff-21c7-4a65-8fa6-8cef690d5b38",
      "scope": "user_impersonation",
      "startTime": "0001-01-01T00:00:00"
    },
    ...
  ]
}

We've already seen the first permission, and now we get the second one saying that the client app (c4cb11ab-c343-4a39-8d84-b6c600e0a324) was given access to the API app (81aea7ff-21c7-4a65-8fa6-8cef690d5b38) on behalf of user 67f524fe-a798-470e-948f-b07f525e742f. The actual permission is 'user_impersonation' as defined in the API's app manifest, and if there were more permissions assigned to the client app there would be more oauth2PermissionGrants records.

We haven't changed anything in our middleware configuration yet, and most likely you're going to run into the following error:

Microsoft.IdentityModel.Tokens.SecurityTokenSignatureKeyNotFoundException: IDX10501: Signature validation failed. Unable to match 'kid': 'MnC_VZcATfM5pOYiJHMba9goEKY', token: '{"alg":"RS256","typ":"JWT","x5t":"MnC_VZcATfM5pOYiJHMba9goEKY","kid":"MnC_VZcATfM5pOYiJHMba9goEKY"}.{"aud":"https://devunleashed.onmicrosoft.com/book-fast-api","iss":"https://sts.windows.net/c478e084-357e-4b68-9275-6744b7d71d10/","iat":1469560722,"nbf":1469560722,"exp":1469564622,"acr":"1","amr":["pwd"],"appid":"48c8741b-fc13-4a02-bb8f-4bf4df1b3c78","appidacr":"1","family_name":"Doe","given_name":"John","ipaddr":"178.121.218.148","name":"John Doe","oid":"67f524fe-a798-470e-948f-b07f525e742f","scp":"user_impersonation","sub":"m4pX16RPYFN3kAgWtaEIuVNxL6xb0PZ86twh9sTFJgo","tid":"c478e084-357e-4b68-9275-6744b7d71d10","unique_name":"testuser@wildmonkeys.onmicrosoft.com","upn":"testuser@wildmonkeys.onmicrosoft.com","ver":"1.0"}'

This is weird and I still have no explanation for it because, as we saw earlier, signing keys are common for all tenants. Now, if you change the authority in your API app's middleware configuration to common it should fix the error. Don't ask.

And, of course, the implications of issuer validation hold true here as well.

### Application roles

If you rely on application roles it's good to know they work fine in multitenant apps. Administrators of the target tenants can assign their users to roles defined in your applications and this information will be available in the issued tokens. For example, here's the 'FacilityProvider' role I have in my Book Fast app:

"appRoles": [
  {"allowedMemberTypes": ["User"
    ],"description": "Allows users to access book-fast to create/update/delete facilities and accommodations","displayName": "Access book-fast as a facility provider","id": "1be7d8b0-d7bf-4fe8-8537-0099f5a896da","isEnabled": true,"value": "FacilityProvider"
  }
]

Now if we check out the role assignments in the 'guest' Wild Monkeys tenant we should see this:

GET https://graph.windows.net/wildmonkeys.onmicrosoft.com/servicePrincipals/c4cb11ab-c343-4a39-8d84-b6c600e0a324/appRoleAssignedTo

{
  "odata.metadata": "https://graph.windows.net/wildmonkeys.onmicrosoft.com/$metadata#directoryObjects/Microsoft.DirectoryServices.AppRoleAssignment",
  "value": [
    {
      "odata.type": "Microsoft.DirectoryServices.AppRoleAssignment",
      "objectType": "AppRoleAssignment",
      "objectId": "_iT1Z5inDkeUj7B_Ul50LzolkH1mMHpLoyFG_UtLjzg",
      "deletionTimestamp": null,
      "creationTimestamp": null,
      "id": "1be7d8b0-d7bf-4fe8-8537-0099f5a896da",
      "principalDisplayName": "John Doe",
      "principalId": "67f524fe-a798-470e-948f-b07f525e742f",
      "principalType": "User",
      "resourceDisplayName": "book-fast",
      "resourceId": "c4cb11ab-c343-4a39-8d84-b6c600e0a324"
    },
    ...
  ]
}

c4cb11ab-c343-4a39-8d84-b6c600e0a324 is the ServicePrincipal of the client app in the Wild Monkeys realm and 'John Doe' has been assigned the 'Facility Provider' role (notice the role's ID 1be7d8b0-d7bf-4fe8-8537-0099f5a896da from the app's manifest). If you want to learn more about application roles in Azure AD, I suggest you have a look at my post on the topic.

Using the on-behalf-of flow in your ASP.NET Core services protected by Azure AD


We've seen how various OAuth2 flows allow clients to get delegated access to resources on behalf of the users who own the resources. Modern software is built more and more with distributed architecture in mind, service to service communication is a common scenario, and when it comes to security we want to know our options.

OAuth2 already describes one flow specifically dedicated to service to service scenarios, called the Client Credentials Grant, that boils down to the following: the client (a calling service) sends its credentials to the token endpoint of the identity provider (authority) and receives a token back that it includes with a call to a target service. Pretty straightforward and there are a lot of uses for it. However, it has one drawback: we lose the security context in which the calling service was invoked originally.
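
With ADAL the Client Credentials Grant boils down to a single call once you have the authentication context and credentials set up; a minimal sketch (the resource URI is an example):

```
var authenticationContext = new AuthenticationContext(authOptions.Authority);
var clientCredential = new ClientCredential(authOptions.ClientId, authOptions.ClientSecret);

// No user assertion here - the token represents the calling service itself
var result = await authenticationContext.AcquireTokenAsync(
    "https://devunleashed.onmicrosoft.com/TestServiceB", clientCredential);

var accessToken = result.AccessToken;
```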

Well, in many cases this may not be an issue at all. For instance, internal tasks processing data, calculating stats, etc. that should not be bound to the security context of a particular user. But there are other tasks that result in data changes triggered by someone's deliberate action or maybe report generation tasks where we often want to apply security constraints to guarantee that the data gets modified or exposed within the allowed policy. In other words, we would like to preserve the security context of the caller who initiated the operation.

This is where the on-behalf-of flow defined by the OAuth2 Token Exchange extensions can be really handy.

On-behalf-Of flow

Service A accepts an access token obtained as a result of some OAuth2 or OpenID Connect dance on the web client and uses it as a user assertion when it makes a call to the authority (in our case Azure AD) to obtain its own access token (*) for the downstream service B. This new access token will carry the same security context as the original one but it will be issued specifically for Service A to call service B.

I've created an ASP.NET Core test solution that reproduces the scenario described on the diagram. Please check it out on your own and I will just highlight the important bits related to the on-behalf-of flow.

### Authentication middleware configuration

I won't touch the web client, it uses the OpenID Connect middleware and you can read lots of details about how to configure it for example here. Service A is our focal point today. It has a pretty standard configuration of the JWT bearer middleware:

app.UseJwtBearerAuthentication(new JwtBearerOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,

    Authority = authOptions.Value.Authority,
    Audience = authOptions.Value.Audience,

    SaveToken = true,

    Events = new JwtBearerEvents
    {
        OnAuthenticationFailed = ctx =>
        {
            ctx.SkipToNextMiddleware();
            return Task.FromResult(0);
        }
    }
});

The important property we should pay attention to is SaveToken which allows us to save the original access token in the AuthenticationProperties so that we can re-use it later as a user assertion.

The proxy code that calls the downstream Service B relies on ADAL to request a new access token from Azure AD:

public async Task<ClaimSet> GetClaimSetAsync()
{
    var client = new HttpClient { BaseAddress = new Uri(serviceOptions.BaseUrl, UriKind.Absolute) };
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", await GetAccessTokenAsync());

    var payload = await client.GetStringAsync("api/claims");
    return JsonConvert.DeserializeObject<ClaimSet>(payload);
}

private async Task<string> GetAccessTokenAsync()
{
    var credential = new ClientCredential(authOptions.ClientId, authOptions.ClientSecret);
    var authenticationContext = new AuthenticationContext(authOptions.Authority);

    var originalToken = await httpContextAccessor.HttpContext.Authentication.GetTokenAsync("access_token");
    var userName = httpContextAccessor.HttpContext.User.FindFirst(ClaimTypes.Upn)?.Value ??
        httpContextAccessor.HttpContext.User.FindFirst(ClaimTypes.Name)?.Value;

    var userAssertion = new UserAssertion(originalToken,
        "urn:ietf:params:oauth:grant-type:jwt-bearer", userName);

    var result = await authenticationContext.AcquireTokenAsync(serviceOptions.Resource,
        credential, userAssertion);

    return result.AccessToken;
}

Notice the urn:ietf:params:oauth:grant-type:jwt-bearer assertion type and the way we get the original token using the AuthenticationManager. We use IHttpContextAccessor to get access to HttpContext in ASP.NET Core (there is no static Current property anymore) and we access the AuthenticationManager from the context.

In order to be able to inject IHttpContextAccessor make sure to register it with the DI container:

services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();

### Setting delegated permission in Azure AD

The on-behalf-of flow is supported by v1 endpoints in Azure AD at the time of writing. On the classic portal we need to configure the delegated permission both on the web app to access Service A:

Granting web application delegated access to Service A

As well as on Service A to access Service B:

Granting Service A delegated access to Service B

By default all applications in Azure AD have a 'user_impersonation' delegated permission (defined in their manifests) that can be assigned to other applications. You can define your own permissions, of course.

### Calling the token endpoint

Let's have a closer look at the actual call to the token endpoint.

POST https://login.microsoftonline.com/70005c1f-ea47-488e-8f57-c3543485f1d0/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

resource=https://devunleashed.onmicrosoft.com/TestServiceB
&client_id=b13f8976-d003-4478-b9d2-a9ff0ee8b382&client_secret=<ServiceA client secret>&grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<original access token>&requested_token_use=on_behalf_of&scope=openid

The original access token claims:

{"aud": "https://devunleashed.onmicrosoft.com/TestServiceA","iss": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","iat": 1471948858,"nbf": 1471948858,"exp": 1471952758,"acr": "1","amr": ["pwd"],"appid": "ffb2de30-44ee-4e4b-92a0-9ad0d841c03f","appidacr": "1","e_exp": 10800,"ipaddr": "37.44.92.69","name": "New Fella","oid": "3ea83d38-dad6-4576-9701-9f0e153c32b5","scp": "user_impersonation","sub": "Pb4IS12ipzA4hH7qswpepAQrOTj7CB5BKFoIvejgEmQ","tid": "70005c1f-ea47-488e-8f57-c3543485f1d0","unique_name": "newfella@devunleashed.onmicrosoft.com","upn": "newfella@devunleashed.onmicrosoft.com","ver": "1.0"
}

Notice the value of the aud claim. It indicates the target audience of the original token. The appid claim contains the client ID of the web application.

Now here's the response from the token endpoint:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8

{
  "token_type": "Bearer",
  "scope": "user_impersonation",
  "expires_in": "3886",
  "ext_expires_in": "11086",
  "expires_on": "1471953058",
  "not_before": "1471948871",
  "resource": "https://devunleashed.onmicrosoft.com/TestServiceB",
  "access_token": "<token value>",
  "refresh_token": "<token value>",
  "id_token": "<token value>"
}

user_impersonation corresponds to the delegated permission that we granted on the portal. If we look inside the new access token:

{"aud": "https://devunleashed.onmicrosoft.com/TestServiceB","iss": "https://sts.windows.net/70005c1f-ea47-488e-8f57-c3543485f1d0/","iat": 1471948871,"nbf": 1471948871,"exp": 1471953058,"acr": "1","amr": ["pwd"],"appid": "b13f8976-d003-4478-b9d2-a9ff0ee8b382","appidacr": "1","e_exp": 11086,"ipaddr": "37.44.92.69","name": "New Fella","oid": "3ea83d38-dad6-4576-9701-9f0e153c32b5","scp": "user_impersonation","sub": "8s5_qJg4r0APO1EdJ3eJlSZkR58qJi-5wv6DMtXs04Y","tid": "70005c1f-ea47-488e-8f57-c3543485f1d0","unique_name": "newfella@devunleashed.onmicrosoft.com","upn": "newfella@devunleashed.onmicrosoft.com","ver": "1.0"
}

We see that the aud and appid claim values have changed. 'b13f8976-d003-4478-b9d2-a9ff0ee8b382' is the client ID of Service A.

What's new in Experimental Tools 0.6


It's been a couple of months since I introduced the Experimental Tools extension for Visual Studio 2015 and above. While the 2017 version is being cooked, I've decided to make a quick tour of the features that have been added to it over this time.

First of all, there are a couple of features that help you organize files within your projects:

  • Update file name to match type name (and vice versa)
  • Namespace does not match file path analyzer

The extension provides an analyzer that checks whether a top level type name matches the name of the file where it is declared and displays a warning if it doesn't:

Type and file name analyzer

It also offers to either rename the type to match the file name or rename the file to match the type name.

Type and file name analyzer

Please note that Visual Studio 2017 provides the same code fixes out of the box so they will be disabled when running inside 2017. However, the analyzer will still work and will enable you to quickly locate places where you have inconsistencies.

By the way, if you haven't already, I recommend that you try out the 'Solution Error Visualizer' feature of the Productivity Power Tools extension. With this feature enabled you can quickly glance at and navigate to analysis issues of Error and Warning severity throughout the solution.

Experimental Tools also gives you an analyzer that checks if a top level namespace matches the path of the file where it is declared and displays a warning if not:

Namespace and file path analyzer

It assumes the assembly name as the root namespace as it's currently problematic to get the default namespace from within analyzers. At the moment it's an analyzer-only feature but a code fix is definitely on the road map.

Often when you're refactoring and moving code around you find yourself pasting code from existing types into new types. I hope you're going to like this little time saver for when the pasted code includes a constructor:

Make it a constructor

It actually reacts to the standard CS1520 compiler error that gets registered for all methods that don't have a return type. If there is no constructor with the same set of parameters, the extension will offer to turn the offending method into a constructor.

There is a standard command in Solution Explorer called 'Sync with Active Document'. People coming from ReSharper will appreciate its equivalent:

Locate in Solution Explorer

The command is available in the code editor either from the context menu or as a familiar Shift+Alt+L shortcut.

If you're a fan of xUnit data driven tests this one's going to be a little time saver for you. You can scaffold MemberData:

Scaffold xUnit MemberData

As well as InlineData:

Scaffold xUnit MemberData

If your InlineData contains acceptable parameters they will be respected, unless the test method already defines parameters (in which case neither of the scaffolding refactorings will work).

Note that this feature works with xUnit 2.x only.

I totally realize that folks have their own preferences and may not like certain features. That's why all of them can be individually turned on or off:

Type and file name analyzer

I guess this is it for now. Download the extension and give it a try, report issues if you find any and if you have ideas you're welcome to contribute (or write your own extension, it's fun, I promise)!

Using code package environment variables in Service Fabric


In my previous post on configuring ASP.NET Core applications in Service Fabric using configuration packages, per environment overrides and a custom configuration provider I gave an example of how you could set a correct web host environment which allows you to adjust configuration and behavior of various components based on the current environment (staging, production, etc).

While everything from that post still stands, there is a better way to set the host environment as code packages also support environment variables, which are set for the host process and can be overridden with per-environment values similar to configuration packages.

Service Fabric configuration

So if we consider the host environment example again, the first thing you need to do is add ASPNETCORE_ENVIRONMENT to your service manifest:

<CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <ExeHost>
        <Program>BookFast.Facility.exe</Program>
        <WorkingFolder>CodePackage</WorkingFolder>
      </ExeHost>
    </EntryPoint>
    <EnvironmentVariables>
      <EnvironmentVariable Name="ASPNETCORE_ENVIRONMENT" Value="" />
    </EnvironmentVariables>
  </CodePackage>

Then make sure to override it in the application manifest:

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="BookFast.FacilityPkg" ServiceManifestVersion="1.0.0" />
  <ConfigOverrides>
    ...
  </ConfigOverrides>
  <EnvironmentOverrides CodePackageRef="Code">
    <EnvironmentVariable Name="ASPNETCORE_ENVIRONMENT" Value="[environment]" />
  </EnvironmentOverrides>
</ServiceManifestImport>

And finally define the environment parameter in the application manifest and provide its values in per-environment settings files.
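
For reference, that might look like this; the parameter name matches the override above, and the file names are the defaults generated by the tooling:

```
<!-- ApplicationManifest.xml -->
<Parameters>
  <Parameter Name="environment" DefaultValue="Development" />
</Parameters>

<!-- ApplicationParameters/Cloud.xml -->
<Parameters>
  <Parameter Name="environment" Value="Production" />
</Parameters>
```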

You don't need to manually extract this setting from the configuration and provide it to WebHostBuilder anymore as it will be picked up from the environment variables by the framework.

If you have more environment variables and you want to make them available through the standard configuration infrastructure just make sure you add the configuration provider from Microsoft.Extensions.Configuration.EnvironmentVariables package:

public Startup(StatelessServiceContext serviceContext)
{
    var builder = new ConfigurationBuilder()
        .AddServiceFabricConfiguration(serviceContext)
        .AddEnvironmentVariables();

    Configuration = builder.Build();
}

AddServiceFabricConfiguration is the extension that adds a custom configuration provider that reads from Service Fabric configuration packages as explained in the previous post.
