Channel: WCF Data Services Team Blog

OData and Authentication – Part 1


Here on the Data Services team we hear many people ask about authentication. Questions like:

  • How do you ‘tunnel’ authentication over the OData protocol?
  • What hooks should I use in the WCF Data Services client and server libraries?

The answer to these questions depends a lot upon the scenario; in fact each authentication scenario presents unique challenges:

  • How does an OData Consumer logon to an OData Producer?
  • How does a WCF Data Service impersonate the OData Consumer so database queries run under context of the consumer?
  • How do you integrate an OData Consumer connecting with an OAuth aware OData Producer?
  • How do you federate a corporate domain with an OData Producer hosted in the cloud, so apps running under a corporate account can access the OData Producer seamlessly?

As you can see, there are lots of questions – and a real risk that people will get the answers wrong.

How we plan to help

So over the next month or so we – the Data Services team - are going to write a series of blog posts detailing our findings as we investigate common OData Authentication scenarios.

It’s hard to know exactly where this series will take us, because it will probably evolve as we explore the space. We’ll learn as we go – and hopefully you will too – as we document the key distinctions and lessons that we learn along the way.

And then finally when we are done we will publish a whitepaper (or three) summarizing our findings and recommendations.

So stay tuned…

Oh and please let us know if you have any Auth scenarios you want us to explore.

Alex James
Program Manager
Data Services Team
Microsoft.


OData and Authentication – Part 2 – Windows Authentication


Imagine you have an OData Service installed on your domain somewhere, probably using the .NET Data Services producer libraries, and you want to authenticate clients against your corporate active directory.

How do you do this?

On the Server Side

First, on the IIS box hosting your Data Service, you need to turn on integrated security, and you may want to turn off anonymous access too.

[Screenshot: turning on Integrated Windows Authentication in IIS]

Now all unauthenticated requests to the website hosting your data service will be issued an HTTP 401 Challenge.

For Windows Authentication the 401 response will include these headers:
WWW-Authenticate: NTLM
WWW-Authenticate: Negotiate

The NTLM header means you need to use Windows Authentication.

The Negotiate header means that the client can try to negotiate the use of Kerberos to authenticate. But that is only possible if both the client and server have access to a shared Kerberos Key Distribution Centre (KDC).

If, for whatever reason, a KDC isn’t available, a standard NTLM handshake will occur.

Detour: Basic Authentication

While we are looking at Windows Authentication it is worth quickly covering Basic Auth too, because the process is very similar.

When you configure IIS to use Basic Auth the 401 will have a different header:
WWW-Authenticate: Basic realm="mydomain.com"

This tells the client to do a Basic Auth handshake to provide credentials for 'mydomain.com'. The ‘handshake’ in Basic Auth is very simple – and very insecure – unless you are also using https.

On the Client Side

Now that a NTLM challenge has been made, what happens next?

In the browser:

Most browsers will present a logon dialog when they receive a 401 challenge. Assuming the user provides valid credentials, they are then typically free to start browsing the rest of the site and, by extension, the OData service.

Occasionally the browser and the website can "Negotiate" and agree to use Kerberos, in which case the authentication can happen automatically without any user input.

The key takeaway though is that in a browser it is pretty easy to authenticate against a Data Service that is secured by Windows Authentication.

.NET Client Application:

In a .NET client application using the standard .NET Data Services client – or for that matter the open source version – you need to tell Data Services your credentials.

Which you do like this:

MyDataContext ctx = new MyDataContext(uri);
ctx.Credentials = System.Net.CredentialCache.DefaultCredentials; 

The example above makes sense if your client application is running under a Windows account that has access to the server. If not, however, you will have to create a new NetworkCredential and use that instead.

ctx.Credentials = new NetworkCredential(
    "username",
    "password",
    "domain");

As you can see, pretty simple.
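If you only want those credentials sent to one particular service – rather than with every request the process makes – you can scope them with a CredentialCache, the same way we do later in this series for Basic Auth. A minimal sketch (the service URI is a placeholder for your own address):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        // Placeholder address – substitute your own service URI.
        var serviceUri = new Uri("http://myserver/MyService.svc");

        // Register the credentials for NTLM against just this URI.
        var cache = new CredentialCache();
        cache.Add(serviceUri, "NTLM",
            new NetworkCredential("username", "password", "domain"));

        // On the DataServiceContext you would then set:
        // ctx.Credentials = cache;
        Console.WriteLine(cache.GetCredential(serviceUri, "NTLM") != null);
    }
}
```

This way, other hosts the application talks to never see the credentials.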

Silverlight Client Application:

Silverlight on the other hand is a little different.

If it is running in the browser – the only option in SL2 & 3 - then by default the Data Services client will re-use the cookies and authentication headers already established by the browser.

Silverlight 2 & 3:

In fact in Silverlight 2 & 3 that is all it can do. The Silverlight client library doesn’t have a Credentials property so there is no way to use different credentials.

Typically if your SL app is hosted by a site that requires Windows Authentication, you don’t have a problem – because in order to download the Silverlight app, you need to authenticate in the browser first.

Which means from the perspective of the Data Service you are already authenticated.

Warning: While it is possible in Silverlight to make cross-domain calls – so long as the other domain has a correctly configured ClientAccessPolicy.xml file – if the other domain needs you to log on, there is no way to provide your credentials.

Silverlight 4:

Silverlight 4 is significantly more flexible, because it adds a Credentials property to the DataServiceContext, which you can use to provide a different set of credentials if required.

In fact, if you think about it, because SL4 can run 'out of browser' the ability to set credentials directly is absolutely vital.

Despite this new feature in SL4 there are still some differences between .NET and SL4.

In SL4 there is no CredentialCache, so you can’t re-use the DefaultCredentials from the client. However, we added a very handy property instead:

ctx.UseDefaultCredentials = true;

Summary:

As you can see using Windows Authentication with OData is pretty simple, especially if you are using the Data Services libraries.

But even if you can’t, the principles are easy enough, so clients designed for other platforms should be able to authenticate without too much trouble.

Next time out we’ll cover a more complicated scenario involving OAuth.

OData and Authentication - Part 3 - ClientSide Hooks


So far in this series we’ve looked at Windows Authentication.

For both Windows and Basic Authentication, Data Services does the authentication handshake and subsequent sending of authentication headers – all without you directly setting an HTTP header.

It can do this because there is a higher level abstraction – the Credentials property – that hides the implementation details from you.

All this works great for Windows and Basic Authentication. However, if you are using a different authentication scheme – for argument’s sake, OAuth WRAP – the Credentials property is of no use. You have to get back down to the request and massage the headers directly.

You might need to set the request headers for all sorts of reasons, but probably the most common is Claims Based Authentication.

So before we look into how to set the headers, a little background…

Claims based Authentication 101

The basic idea of Claims Based Auth is that you authenticate against an Identity Provider and request an ‘access token’, which you can then use to make requests against a server of protected resources.

The server – essentially the gatekeeper – looks at the ‘access token’ and verifies it was issued by an Identity Provider it trusts, and if so, allows access to the resources.

Keeping the claim private

Any time you send a token on the wire it should be encrypted, so that it can’t be discovered and re-used by others. This means claims based authentication – at least in the REST world – generally uses HTTPS.

Many Authentication Topologies

While the final step – setting the access token – is very simple, the process of getting an authentication token can get much more complicated, with things like federated trust relationships, delegated access rights etc.

In fact, there are probably hundreds of ‘authentication topologies’ that can leverage claims based authentication. And each topology will involve a different process to acquire a valid ‘access token’.

We’re not quite ready to cover these complexities yet, but we will revisit the specifics in a later post.

Still, at the end of the day, the client application simply needs to pass a valid access token to the server to gain access.

Example Scenario: OAuth WRAP

So if for example you have an OData service that uses OAuth WRAP for authentication the client would need to send a request like this:

GET /OData.svc/Products(1)
Authorization: WRAP access_token="123456789"

And the server would need to look at the Authorization header and decide if the provided access_token is valid or not.

Sounds simple enough.

The real question for today is: how do you set these headers?

Client Side Hooks

Before making requests

On the client-side adding the necessary headers is pretty simple.

Both the Silverlight and the .NET DataServiceContext have an event called SendingRequest that you can hook up to. And in your event handler you can add the appropriate headers.

For OAuth WRAP your event handler might look something like this:

void OnSendingRequest(object sender, SendingRequestEventArgs e)
{
    e.RequestHeaders.Add("Authorization","WRAP access_token=\"123456789\"");
}
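For the handler to run it has to be attached to the context. The sketch below shows the attachment plus a tiny helper that builds the header value – the token itself is a placeholder; in practice it comes from your identity provider:

```csharp
using System;

class Program
{
    // Builds the OAuth WRAP Authorization header value used above.
    static string BuildWrapHeader(string accessToken)
    {
        return "WRAP access_token=\"" + accessToken + "\"";
    }

    static void Main()
    {
        // On a real DataServiceContext you would attach the handler once:
        // ctx.SendingRequest += OnSendingRequest;

        Console.WriteLine(BuildWrapHeader("123456789"));
        // WRAP access_token="123456789"
    }
}
```

Keeping the header construction in one helper means only one place to change when the token is refreshed.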

What if the Access Token is invalid or missing?

If the Access Token is missing or invalid the server will respond with a 401 unauthorized response, which your code will need to handle.

Unfortunately in the Data Services client today there is no place to hook into the HttpResponse directly, so you have to wait for an Exception.

You will get either a DataServiceQueryException or DataServiceRequestException depending on what you were doing, and both of those have a Response property (which is *not* a HttpResponse) that you can interrogate like this:

try
{
    foreach (Product p in ctx.Products)
        Console.WriteLine(p.Name);
}
catch (DataServiceQueryException ex)
{
    var scheme = ex.Response.Headers["WWW-Authenticate"];
    var code = ex.Response.StatusCode;
    if (code == 401 && scheme == "WRAP")
        DoSomethingToGetAnOAuthAccessToken();
}

The problem with acquiring an access token only when challenged like this, rather than up front before you make the request, is that you now have to go and get a valid token while somehow maintaining context too, so the user isn’t forced to start again.

It is also a little unfortunate that you can’t easily centralize this authentication failure code.

Summary

As you can see it is relatively easy to add custom headers to requests, which is ideal for integrating with various auth schemes on the client side.

It is however harder to look at the response headers. You can do it, but only if you put try / catch blocks into your code, and write code to handle the UI flow interruption.

So our recommendation is that you get the necessary tokens / claims etc – for example an OAuth access_token – before allowing users to even try to interact with the service.

In Part 4 we will look at the Server-Side hooks…

Alex James
Program Manager
Data Services Team

OData and Authentication – Part 4 – Server Side Hooks


If you secure an OData Service using Windows authentication – see Part 2 to learn how – everything works as expected out of the box.

But what if you need a different authentication scheme?

Well, the answer, as always, depends upon your scenario.

Broadly speaking what you need to do depends upon how your Data Service is hosted. You have three options:

  1. Hosted by IIS
  2. Hosted by WCF
  3. Hosted in a custom host

But by far the most common scenario is…

Hosted by IIS

This is what you get when you deploy your WebApplication project – containing a Data Service – to IIS.

At this point you have two realistic options:

  • Create a custom HttpModule.
  • Hook up to the DataServices ProcessingPipeline.

Which is best?

Undoubtedly the ProcessingPipeline option is easier to understand and has fewer moving parts, which makes it an ideal solution for simple scenarios.

But the ProcessingPipeline is only an option if it makes sense to allow anonymous access to the rest of the website – which is pretty unlikely unless the web application exists only to host the Data Service.

Using ProcessingPipeline.ProcessingRequest

Nevertheless the ProcessingPipeline approach is informative, and most of the code involved can be reused if you ever need to upgrade to a fully fledged HttpModule.

So how do you use the ProcessingPipeline?

Well the first step is to enable anonymous access to your site in IIS:

[Screenshot: enabling anonymous authentication in IIS]

Next you hook up to the ProcessingPipeline.ProcessingRequest event:

public class ProductService : DataService<Context>
{
    public ProductService()
    {
        this.ProcessingPipeline.ProcessingRequest +=
            new EventHandler<DataServiceProcessingPipelineEventArgs>(OnRequest);
    }
    // ... rest of the service ...
}

Then you need some code in the OnRequest event handler to do the authentication:

void OnRequest(object sender,
               DataServiceProcessingPipelineEventArgs e)
{
    if (!CustomAuthenticationProvider.Authenticate(HttpContext.Current))
        throw new DataServiceException(401, "401 Unauthorized");
}

In this code we call into a CustomAuthenticationProvider.Authenticate() method.

If everything is okay – and what that means depends upon the authentication scheme - the request is allowed to continue.

If not we throw a DataServiceException which ends up as a 401 Unauthorized response on the client.

Because we are hosted in IIS our Authenticate() method has access to the current Request via the HttpContext.Current.Request.

My pseudo-code, which assumes some sort of claims based security, looks like this:

public static bool Authenticate(HttpContext context)
{
    if (!context.Request.Headers.AllKeys.Contains("Authorization"))
        return false;

    // Remember: claims based security should only be
    // used over HTTPS
    if (!context.Request.IsSecureConnection)
        return false;

    string authHeader = context.Request.Headers["Authorization"];

    IPrincipal principal = null;
    if (TryGetPrincipal(authHeader, out principal))
    {
        context.User = principal;
        return true;
    }
    return false;
}

What happens in TryGetPrincipal() is completely dependent upon your auth scheme.

Because this post is about server hooks, not concrete scenarios, our TryGetPrincipal implementation is clearly NOT meant for production (!):

private static bool TryGetPrincipal(
   string authHeader,
   out IPrincipal principal)
{
    //
    // WARNING:
    // Our naive – easily misled – authentication scheme
    // blindly trusts the caller.
    // A header that looks like this:
    //     ADMIN username
    // will result in someone being authenticated as an
    // administrator with an identity of ‘username’
    // i.e. not exactly secure!!!
    //

    var protocolParts = authHeader.Split(' ');
    if (protocolParts.Length != 2)
    {
        principal = null;
        return false;
    }
    else if (protocolParts[0] == "ADMIN")
    {
        principal = new CustomPrincipal(
           protocolParts[1],
           "Administrator", "User"
        );
        return true;
    }
    else if (protocolParts[0] == "USER")
    {
        principal = new CustomPrincipal(
           protocolParts[1],
           "User"
        );
        return true;
    }
    else
    {
        principal = null;
        return false;
    }
}

Don’t worry though – as this series progresses we will look at enabling real schemes like Custom Basic Auth, OAuth WRAP, OAuth 2.0 and OpenID.

Creating a custom Principal and Identity

Strictly speaking you don’t need to set HttpContext.Current.User; you could just allow or reject the request. But we want to access the user and their roles (or claims) for authorization purposes, so our TryGetPrincipal code needs an implementation of IPrincipal and IIdentity:

public class CustomPrincipal: IPrincipal
{
    string[] _roles;
    IIdentity _identity;

    public CustomPrincipal(string name, params string[] roles)
    {
        this._roles = roles;
        this._identity = new CustomIdentity(name);
    }

    public IIdentity Identity
    {
        get { return _identity; }
    }

    public bool IsInRole(string role)
    {
        return _roles.Contains(role);
    }
}

public class CustomIdentity: IIdentity
{
    string _name;

    public CustomIdentity(string name)
    {
        this._name = name;
    }

    string IIdentity.AuthenticationType
    {
        get { return "Custom SCHEME"; }
    }

    bool IIdentity.IsAuthenticated
    {
        get { return true; }
    }

    string IIdentity.Name
    {
        get { return _name; }
    }
}

Now my authorization logic only has to worry about authenticated users, and can implement fine grained access control.

For example if only Administrators can see products, we can enforce that in a QueryInterceptor like this:

[QueryInterceptor("Products")]
public Expression<Func<Product, bool>> OnQueryProducts()
{
    var user = HttpContext.Current.User;
    if (user.IsInRole("Administrator"))
        return (Product p) => true;
    else
        return (Product p) => false;
}

Summary

In this post you saw how to add custom authentication logic *inside* the Data Service using the ProcessingPipeline.ProcessingRequest event.

Generally though when you want to integrate security across your website and your Data Service, you should put your authentication logic *under* the Data Service, in a HttpModule.

More on that next time…

Alex James
Program Manager
Microsoft

OData and Authentication – Part 5 – Custom HttpModules


In the last post we saw how to add custom authentication inside your Data Service using the ProcessingRequest event.

Unfortunately that approach means authentication is not integrated or shared with the rest of your website.

Which means for all but the simplest scenarios a better approach is needed: HttpModules.

HttpModules can do all sorts of things, including authentication, and have the ability to intercept all requests to the website, essentially sitting under your Data Service.

This means you can remove all authentication logic from your Data Service and create a HttpModule to protect everything on your website – including your Data Service.

Built-in Authentication Modules:

Thankfully IIS ships with a number of Authentication HttpModules:

  • Windows Authentication
  • Forms Authentication
  • Basic Authentication

You just need to enable the correct one and IIS will do the rest.

So by the time your request hits your Data Service the user will be authenticated.

Creating a Custom Authentication Module:

If, however, you need another authentication scheme, you need to create and register a custom HttpModule.

So let’s take our – incredibly naive – authentication logic from Part 4 and turn it into a HttpModule.

First we need a class that implements IHttpModule and hooks up to the AuthenticateRequest event, something like this:

public class CustomAuthenticationModule: IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.AuthenticateRequest +=
           new EventHandler(context_AuthenticateRequest);
    }
    void context_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        if (!CustomAuthenticationProvider.Authenticate(app.Context))
        {
            app.Context.Response.Status = "401 Unauthorized";
            app.Context.Response.StatusCode = 401;
            app.Context.Response.End();
        }
    }
    public void Dispose() { }
}

We rely on the CustomAuthenticationProvider.Authenticate(..) method that we wrote in Part 4 to provide the actual authentication logic.

Finally we need to tell IIS to load our HttpModule, by adding this to our web.config:

<system.webServer>
  <modules>
    <add name="CustomAuthenticationModule"
         type="SimpleService.CustomAuthenticationModule"/>
  </modules>
</system.webServer>

Now when we try to access our Data Service - and the rest of the website – it should be protected by our HttpModule. 

NOTE: If this doesn’t work, you might have IIS 6 or IIS 7 running in classic mode, which requires slightly different configuration.
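In classic mode, modules are registered under system.web rather than system.webServer; the equivalent entry (assuming the same SimpleService namespace as above) would look like this:

```xml
<system.web>
  <httpModules>
    <add name="CustomAuthenticationModule"
         type="SimpleService.CustomAuthenticationModule"/>
  </httpModules>
</system.web>
```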

Summary

In Part 2 we looked at using Windows Authentication.
And in Parts 3, 4 and 5 we covered all the hooks available to authentication logic in Data Services, and discovered that pretty much everything you need to do is possible.

Great.

Next we’ll focus on real-world scenarios, starting with Custom Basic Authentication.

Alex James
Program Manager
Microsoft

OData and Authentication – Part 6 – Custom Basic Authentication


You might remember, from Part 5, that Basic Authentication is built-in to IIS.

So why do we need ‘Custom’ Basic Authentication?

Well, if you are happy using Windows users and passwords, you don’t.

That’s because the built-in Basic Authentication uses the Basic Authentication protocol to authenticate against the Windows user database.

If, however, you have a custom user/password database – perhaps it’s part of your application database – then you need ‘Custom’ Basic Authentication.

How does basic auth work?

Basic authentication is a very simple authentication scheme that should only be used in conjunction with SSL, or in scenarios where security isn’t paramount.

If you look at how a basic authentication header is constructed, you can see why it is NOT secure by itself:

var creds = "user" + ":" + "password";
var bcreds = Encoding.ASCII.GetBytes(creds);
var base64Creds = Convert.ToBase64String(bcreds);
authorizationHeader = "Basic " + base64Creds;

Yes, that’s right: the username and password are simply Base64 encoded and shipped on the wire for the whole world to see – unless, of course, you are also using SSL for transport-level security.
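To drive the point home, here is how trivially anyone who captures the header can reverse it; the header value below encodes the sample credentials from the snippet above:

```csharp
using System;
using System.Text;

class Program
{
    static void Main()
    {
        // A captured Basic auth header for the "user"/"password" pair.
        var header = "Basic dXNlcjpwYXNzd29yZA==";

        // Strip the scheme prefix, then Base64-decode – no key required.
        var base64Creds = header.Substring("Basic ".Length);
        var creds = Encoding.ASCII.GetString(
            Convert.FromBase64String(base64Creds));

        Console.WriteLine(creds);   // user:password
    }
}
```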

Nevertheless many systems use basic authentication. So it’s worth adding to your arsenal.

Server Code:

Creating a Custom Basic Authentication Module:

Creating a Custom Basic Authentication module should be no harder than cracking Basic Auth, i.e. it should be child’s play.

We can use our HttpModule from Part 5 as a starting point:

public class BasicAuthenticationModule: IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.AuthenticateRequest +=
            new EventHandler(context_AuthenticateRequest);
    }
    void context_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        if (!BasicAuthenticationProvider.Authenticate(application.Context))
        {
            application.Context.Response.Status = "401 Unauthorized";
            application.Context.Response.StatusCode = 401;
            application.Context.Response.AddHeader("WWW-Authenticate", "Basic");
            application.CompleteRequest();
        }
    }
    public void Dispose() { }
}

The only differences from Part 5 are:

  • We’ve changed the name to BasicAuthenticationModule.
  • We use a new BasicAuthenticationProvider to do the authentication.
  • And if the logon fails we challenge using the “WWW-Authenticate” header.

The final step is vital, because without it clients that don’t send credentials by default – like HttpWebRequest and, by extension, DataServiceContext – won’t know to retry with credentials when their first attempt fails.

Implementing the BasicAuthenticationProvider:

The Authenticate method is unchanged from our example in Part 5:

public static bool Authenticate(HttpContext context)
{
    // Basic Auth sends credentials in the clear, so insist on HTTPS.
    if (!context.Request.IsSecureConnection)
        return false;

    if (!context.Request.Headers.AllKeys.Contains("Authorization"))
        return false;

    string authHeader = context.Request.Headers["Authorization"];

    IPrincipal principal;
    if (TryGetPrincipal(authHeader, out principal))
    {
        context.User = principal;
        return true;
    }
    return false;
}

Our new TryGetPrincipal method looks like this:

private static bool TryGetPrincipal(string authHeader, out IPrincipal principal)
{
    var creds = ParseAuthHeader(authHeader);
    if (creds != null && TryGetPrincipal(creds, out principal))
        return true;

    principal = null;
    return false;
}

As you can see it uses ParseAuthHeader to extract the credentials from the authHeader – so they can be checked against our custom user database in the other TryGetPrincipal overload:

private static string[] ParseAuthHeader(string authHeader)
{
    // Check this is a Basic Auth header
    if (
        authHeader == null ||
        authHeader.Length == 0 ||
        !authHeader.StartsWith("Basic")
    ) return null;

    // Pull out the credentials, which are separated by ':' and Base64 encoded
    string base64Credentials = authHeader.Substring(6);
    string[] credentials = Encoding.ASCII.GetString(
          Convert.FromBase64String(base64Credentials)
    ).Split(new char[] { ':' });

    if (credentials.Length != 2 ||
        string.IsNullOrEmpty(credentials[0]) ||
        string.IsNullOrEmpty(credentials[1])
    ) return null;

    // Okay, these are the credentials
    return credentials;
}

First this code checks that this is indeed a Basic auth header, and then it attempts to extract the Base64 encoded credentials from the header.

If everything goes according to plan the array returned will have two elements: the username and the password.

Next we check our ‘custom’ user database to see if those credentials are valid.

In this toy example it is completely hard-coded:

private static bool TryGetPrincipal(string[] creds,out IPrincipal principal)
{
    if (creds[0] == "Administrator" && creds[1] == "SecurePassword")
    {
        principal = new GenericPrincipal(
           new GenericIdentity("Administrator"),
           new string[] {"Administrator", "User"}
        );
        return true;
    }
    else if (creds[0] == "JoeBlogs" && creds[1] == "Password")
    {
        principal = new GenericPrincipal(
           new GenericIdentity("JoeBlogs"), 
           new string[] {"User"}
        );
        return true;
    }
    else
    {
        principal = null;
        return false;
    }
}

You’d probably want to check a database somewhere, but as you can see that should be pretty easy – all you need to do is replace this method with whatever code you want.
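As a halfway house between the hard-coded version and a real database lookup, here is a sketch where a dictionary stands in for the user table. The names and the plain-text comparison are purely illustrative – production code should look users up in your store and compare salted password hashes:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Principal;

static class UserStore
{
    // Stand-in for a real user table, keyed by username.
    static readonly Dictionary<string, string> _passwords =
        new Dictionary<string, string>
        {
            { "Administrator", "SecurePassword" },
            { "JoeBlogs", "Password" }
        };

    public static bool TryGetPrincipal(string[] creds, out IPrincipal principal)
    {
        string expected;
        if (_passwords.TryGetValue(creds[0], out expected)
            && expected == creds[1])
        {
            // Everyone is a User; only the Administrator gets the extra role.
            var roles = creds[0] == "Administrator"
                ? new[] { "Administrator", "User" }
                : new[] { "User" };

            principal = new GenericPrincipal(
                new GenericIdentity(creds[0]), roles);
            return true;
        }
        principal = null;
        return false;
    }
}
```

Swapping the dictionary for a database query leaves the rest of the module untouched.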

Registering our BasicAuthenticationModule:

Finally, you just need to add this to your web.config:

<system.webServer>
  <modules>
    <add name="BasicAuthenticationModule"
         type="SimpleService.BasicAuthenticationModule"/>
  </modules>
</system.webServer>

Allowing unauthenticated access:

If you want to allow some unauthenticated access to your Data Service, you could change your BasicAuthenticationModule so it doesn’t ‘401’ when Authenticate() returns false.

Then, if certain queries or updates actually require authentication or authorization, you could check HttpContext.Current.Request.IsAuthenticated or HttpContext.Current.User in QueryInterceptors and ChangeInterceptors as necessary.

This approach allows you to mix and match your level of security.

See part 4 for more on QueryInterceptors.

Client Code:

When you try to connect to an OData service protected with Basic Authentication (Custom or built-in) you have two options:

Using the DataServiceContext.Credentials:

You can use a CredentialCache like this:

var serviceCreds = new NetworkCredential("Administrator", "SecurePassword");
var cache = new CredentialCache();
var serviceUri = new Uri("http://localhost/SimpleService");
cache.Add(serviceUri, "Basic", serviceCreds);
ctx.Credentials = cache;

When you do this, the first time Data Services attempts to connect to the service the credentials aren’t sent – so a 401 is received.

However, so long as the service challenges using the "WWW-Authenticate" response header, it will seamlessly retry with the credentials under the hood.

Using the request headers directly:

Another option is to just create and send the authentication header yourself.

1) Hook up to the DataServiceContext’s SendingRequest Event:

ctx.SendingRequest += new EventHandler<SendingRequestEventArgs>(OnSendingRequest);

2) Add the Basic Authentication Header to the request:

static void OnSendingRequest(object sender, SendingRequestEventArgs e)
{
  var creds = "user" + ":" + "password";
  var bcreds = Encoding.ASCII.GetBytes(creds);
  var base64Creds = Convert.ToBase64String(bcreds);
  e.RequestHeaders.Add("Authorization", "Basic " + base64Creds);
}

As you can see this is pretty simple, and it has the advantage that it will work even if the server doesn’t respond with a challenge (i.e. a WWW-Authenticate header).

Summary:

You now know how to implement Basic Authentication over a custom credentials database and how to interact with a Basic Authentication protected service using the Data Service Client.

Next up we’ll look at Forms Authentication in depth.

Alex James
Program Manager
Microsoft.

OData and Authentication – Part 7 – Forms Authentication


Our goal in this post is to re-use the Forms Authentication already in a website to secure a new Data Service.

To bootstrap this we need a website that uses Forms Auth.

Turns out the MVC Music Store Sample is perfect for our purposes because:

  • It uses Forms Authentication – for example, when you purchase an album.
  • It has an Entity Framework model that is clearly separated into two types of entities:
    • Those that anyone should be able to browse (Albums, Artists, Genres).
    • Those that are more sensitive (Orders, OrderDetails, Carts).

The rest of this post assumes you’ve downloaded and installed the MVC Music Store sample.

Enabling Forms Authentication:

The MVC Music Store sample already has Forms Authentication enabled in the web.config like this:

<authentication mode="Forms">
  <forms loginUrl="~/Account/LogOn" timeout="2880" />
</authentication>

With this in place any services we add to this application will also be protected.

Adding a Music Data Service:

If you double click the StoreDB.edmx file inside the Models folder you’ll see something like this:

[Diagram: the MVC Music Store entity model]

This is what we want to expose, so the first step is to click ‘Add New Item’ and then select a new WCF Data Service:

[Screenshot: adding a new WCF Data Service item]

Next modify your MusicStoreService to look like this:

public class MusicStoreService : DataService<MusicStoreEntities>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Carts", EntitySetRights.None);
        config.SetEntitySetAccessRule("OrderDetails", EntitySetRights.ReadSingle);
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.SetEntitySetPageSize("*", 50);

        config.DataServiceBehavior.MaxProtocolVersion =
             DataServiceProtocolVersion.V2;
    }
}

The page size is there to enforce Server Driven Paging, which is an OData best practice – we don’t like to show samples that skip this… :)

Taking the three EntitySetAccessRules in turn, they:

  • Hide the Carts entity set – our service shouldn’t expose it.
  • Allow OrderDetails to be retrieved by key, but not queried arbitrarily.
  • Allow all other sets to be queried but not modified – in this case we want the service to be read-only.

Next we need to secure our ‘sensitive data’, which means making sure only appropriate people can see Orders and OrderDetails, by adding two QueryInterceptors to our MusicStoreService:

[QueryInterceptor("Orders")]
public Expression<Func<Order, bool>> OrdersFilter()
{        
    if (!HttpContext.Current.Request.IsAuthenticated)
        return (Order o) => false;

    var username = HttpContext.Current.User.Identity.Name;
    if (username == "Administrator")
        return (Order o) => true;
    else
        return (Order o) => o.Username == username;
}

[QueryInterceptor("OrderDetails")]
public Expression<Func<OrderDetail, bool>> OrderDetailsFilter()
{
    if (!HttpContext.Current.Request.IsAuthenticated)
        return (OrderDetail od) => false;

    var username = HttpContext.Current.User.Identity.Name;
    if (username == "Administrator")
        return (OrderDetail od) => true;
    else
        return (OrderDetail od) => od.Order.Username == username;
}

These interceptors filter out all Orders and OrderDetails if the request is unauthenticated.

They allow the administrator to see all Orders and OrderDetails, but everyone else can only see Orders / OrderDetails that they created.

That’s it - our service is ready to go.

NOTE: if you have a read-write service and you want to authorize updates you need ChangeInterceptors.
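For example, a ChangeInterceptor protecting Orders in a read/write variant of this service might look like this (a sketch only – our service above is read-only):

```csharp
// Sketch of a ChangeInterceptor enforcing the same ownership rules on writes.
[ChangeInterceptor("Orders")]
public void OnChangeOrders(Order order, UpdateOperations operations)
{
    if (!HttpContext.Current.Request.IsAuthenticated)
        throw new DataServiceException(401, "You must be logged on to modify Orders.");

    var username = HttpContext.Current.User.Identity.Name;
    if (username != "Administrator" && order.Username != username)
        throw new DataServiceException(403, "You can only modify your own Orders.");
}
```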

Trying it out in the Browser:

The easiest way to logon is to add something to your cart and buy it:

ShoppingCart

Which prompts you to logon or register:

LogonOrRegister

The first time through you’ll need to register, which will also log you on, and then once you are logged on you’ll need to retry checking out.

This has the added advantage of testing our security. Because at the end of the checkout process you will be logged in as the user you just registered, meaning if you browse to your Data Service’s Orders feed you should see the order you just created:

OrdersAuthenticated

If however you logoff, or restart the browser, and try again you’ll see an empty feed like this:

OrdersUnauthenticated

Perfect. Our query interceptors are working as intended.

This all works because Forms Authentication is essentially just an HttpModule that sits under our Data Service and relies on the browser (or client) passing around a cookie once it has logged on.

By the time the request gets to the DataService the HttpContext.Current.Request.User is set.

Which in turn means our query interceptors can enforce our custom Authorization logic.

Enabling Active Clients:

In authentication terms a browser is a passive client: it basically does what it is told. A server can redirect it to a logon page, which can redirect it back again on success; it can tell it to include a cookie in each request; and so on...

Often however it is active clients – things like custom applications and generic data browsers – that want to access the OData Service.

How do they authenticate?

They could mimic the browser, by responding to redirects and programmatically posting the logon form to acquire the cookie. But no one wants to re-implement HTML form handling just to log on.

Thankfully there is a much easier way.

You can enable a standard authentication endpoint, by adding this to your web.config:

<system.web.extensions>
  <scripting>
    <webServices>
      <authenticationService enabled="true" requireSSL="false"/>
    </webServices>
  </scripting>
</system.web.extensions>

The endpoint (Authentication_JSON_AppService.axd) makes it much easier to logon programmatically.

Connecting from an Active Client:

Now that we’ve enabled the authentication endpoint, let’s see how we use it. Essentially, for forms authentication to work, the DataServiceContext must include a valid cookie with every request.

A cookie is just a http header and, as we saw in part 3, it is very easy to add a custom header with every request.

Using Client Application Services:

But before we get down to setting cookies, in some scenarios there is an even easier way: using Client Application Services. These services are not available in the .NET Client Profile (or Silverlight) so you may need to change your Target Framework to use them:

ClientProfile

Once you’ve done that you enable Client Application Services like this:

ClientApplicationServices

NOTE: the Authentication Services Location should be set to the root of the website that has Authentication Services enabled.

Next you add a reference to System.Web to gain access to System.Web.Security.Membership.

Once you’ve done this you simply need to logon once:

System.Web.Security.Membership.ValidateUser("Alex", "password");

This logs on and stores the resulting cookie on the current thread.

Next, assuming you already have a Service Reference to your Data Service – see this to learn how – you can extend your custom DataServiceContext, in our example called MusicStoreEntities, to automatically send the cookie with each request:

public partial class MusicStoreEntities
{
    partial void OnContextCreated()
    {
        this.SendingRequest +=
           new EventHandler<SendingRequestEventArgs>(OnSendingRequest); 
    }
    void OnSendingRequest(object sender, SendingRequestEventArgs e)
    {
        ((HttpWebRequest)e.Request).CookieContainer =
            ((ClientFormsIdentity)Thread.CurrentPrincipal.Identity).AuthenticationCookies;
    }
}

This works by adding the partial OnContextCreated method, which is called in the MusicStoreEntities constructor, and hooking up to the SendingRequest event, to set the cookie for each request.

That’s it, pretty easy.

Manually setting the Cookie:

If however using Client Application Services is not an option – for example you’re in Silverlight or you can only use the Client Profile – you will have to manually get and set the cookie.

To do this change the example above to look like this instead:

public partial class MusicStoreEntities
{
    partial void OnContextCreated()
    {
        this.SendingRequest +=
           new EventHandler<SendingRequestEventArgs>(OnSendingRequest); 
    }
    public void OnSendingRequest(object sender, SendingRequestEventArgs e)
    {
        e.RequestHeaders.Add("Cookie", GetCookie("Alex", "password"));
    }
    string _cookie;
    string GetCookie(string userName, string password)
    {
        if (_cookie == null)
        {
            string loginUri = string.Format("{0}/{1}/{2}",
                "http://localhost:1397",
                "Authentication_JSON_AppService.axd",
                "Login");
            WebRequest request = HttpWebRequest.Create(loginUri);
            request.Method = "POST";
            request.ContentType = "application/json";

            string authBody = String.Format(
                "{{ \"userName\": \"{0}\", \"password\": \"{1}\", \"createPersistentCookie\":false}}",
                userName,
                password);
            request.ContentLength = authBody.Length;

            StreamWriter w = new StreamWriter(request.GetRequestStream());
            w.Write(authBody);
            w.Close();

            WebResponse res = request.GetResponse();
            if (res.Headers["Set-Cookie"] != null)
            {
                _cookie = res.Headers["Set-Cookie"];
            }
            else
            {
                throw new SecurityException("Invalid username and password");
            }
        }
        return _cookie;
    }
}

This code is admittedly a little more involved. But if you break it down it all makes sense.

The code adds the cookie to the headers whenever a request is issued.

The hardest part is actually acquiring the cookie. The GetCookie() method checks whether we already have a cookie; if not, it creates a request to the Authentication endpoint, passing the username and password in a JSON body.

If authentication is successful the response will include a ‘Set-Cookie’ header, that contains the cookie.

Summary:

We’ve just walked through using Forms Authentication with an OData service.

That included: integrating security with an existing website, enabling both browser and active clients – based on DataServiceContext – and authenticating from any .NET client.

Next up we’ll start looking at things like OAuth and OAuthWrap…

Alex James
Program Manager
Microsoft.

OData and Authentication – Part 8 – OAuth WRAP


OAuth WRAP is a claims based authentication protocol supported by the AppFabric Access Control (ACS) which is part of Windows Azure.

But most importantly it is REST (and thus OData) friendly too.

The idea is that you authenticate against an ACS server and acquire a Simple Web Token or SWT – which contains signed claims about identity / roles / rights etc – and then embed the SWT in requests to a resource server that trusts the ACS server.

The resource server then looks for and verifies the SWT by checking it is correctly signed, before allowing access based on the claims made in the SWT.

If you want to learn more about OAuth WRAP itself here’s the spec.

Goal

Now we know the principles behind OAuth WRAP it’s time to map those into the OData world.

Our goal is simple. We want an OData service that uses OAuth WRAP for authorization and a client to test it end to end.

Why OAuth WRAP?

You might be wondering why this post covers OAuth WRAP and not OAuth 2.0.

OAuth 2.0 essentially combines the best features of OAuth 1.0 and OAuth WRAP.

Unfortunately OAuth 2.0 is not yet a ratified standard, so ACS doesn’t support it yet. On the other hand OAuth 1.0 is cumbersome for RESTful protocols like OData. So that leaves OAuth WRAP.

However once it is ratified OAuth 2.0 will essentially deprecate OAuth WRAP, and ACS will rev to support it. When that happens you can expect to see a new post in this Authentication Series.

Strategy

First we’ll provision an ACS server to act as our identity server.

Next we’ll configure our identity server with appropriate roles, scopes and claim transformation rules etc.

Then we’ll create a HttpModule (see part 5) to intercept all requests to the server, which will crack open the SWT, convert it into an IPrincipal and store it in HttpContext.Current.Request.User. This way it can be accessed later for authorization purposes inside the Data Service.

Then we’ll create a simple OData service using WCF Data Services and protect it with a custom HttpModule.

Finally we’ll write client code to authenticate against the ACS server and acquire a SWT token. We’ll use the techniques you saw in part 3 to send the SWT as part of every request to our OData services.

Step 1 – Provisioning an ACS server

First you’ll need a Windows Azure account and a running AppFabric namespace.

Once your namespace is running you also have a running ACS server.

Step 2 – Configuring the ACS server

To correctly configure the ACS server you’ll need to Install the Windows Azure Platform AppFabric SDK which you can find here.

ACM.exe is a command line tool that ships as part of the AppFabric SDK, and that allows you to create Issuers, TokenPolicies, Scopes and Rules.

For an introduction to ACM.exe and ACS look no further than this excellent guide by Keith Brown.

To simplify our acm commands you should edit your ACM.exe.config file to include information about your ACS like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="host" value="accesscontrol.windows.net"/>
    <add key="service" value="{Your service namespace goes here}"/>
    <add key="mgmtkey" value="{Your Windows Azure Management Key goes here}"/>
  </appSettings>
</configuration>

Doing this saves you from having to re-enter this information every time you run ACM.

Very handy.

Claims Transformation

Before we start configuring our ACS we need to know a few principles…

Generally claims authentication is used to translate a set of input claims into a signed set of output claims.

Sometimes this extends to Federation, which allows trust relationships to be established between identity providers, such that a user on one system can gain access to resources on another system.

However in this blog post we are going to keep it simple and skip federation. 

Don’t worry though we’ll add federation in the next post.

Issuers

In ACS terms an Issuer represents a security principal. And whether we want federation or not our first step is to create a new issuer like this:

> acm create issuer
    -name: partner
    -issuername: partner
    -autogeneratekey
 

This will generate a key which you can retrieve by issuing this command:

> acm getall issuer
     
Count: 1

         id: iss_89f12a7ed023c3b7b0a85f32dff96fed2014ad0a
       name: partner
issuername: partner
        key: 9QKoZgtxxU4ABv8uiuvaR+k0cOmUxfEOE0qfPK2lCJY=
previouskey: 9QKoZgtxxU4ABv8uiuvaR+k0cOmUxfEOE0qfPK2lCJY=
  algorithm: Symmetric256BitKey

Our clients are going to need to know this key, so make a note of it for later.

Token Policy

Next we need a token policy. Token Policies specify a timeout indicating how long a new Simple Web Token (or SWT) should be valid, or put another way, how long before the SWT expires.

When creating a token policy you need to balance security versus ease of use and convenience. The shorter the timeout the more likely it is to be based on up to date Identity and Role information, but that comes at the cost of frequent refreshes, which have performance and convenience implications.

For our purposes a timeout of 1 hour is probably about right. So we create a new policy like this:

> acm create tokenpolicy
    -name: odata-service-policy
    -timeout: 3600
    -autogeneratekey

Where 3600 is the number of seconds in an hour. To see what you created issue this command:

> acm getall tokenpolicy
  Count: 1

     id: tp_aaf3fd9ca64d4471a5c7b5c572c087fb
   name: odata-service-policy
timeout: 3600
    key: WRwJkQ9PgbhnIUgKuuovw/6yVAo/Dh0qrb7rqQWnsBk=

We’ll need both the id and key later.

This key is what we share with our resource servers, so that they can check SWTs are correctly signed. We’ll come back to that later.

Scope

A service may have multiple ‘scopes’ each with a different set of access rules and rights.

Scopes are linked to a token policy, which tells ACS how long SWTs should remain valid and how to sign them. Scopes also contain a set of rules that tell ACS how to translate incoming claims into claims embedded in the SWT.

When requesting a SWT a client must include an ‘applies_to’ parameter, which tells ACS for which scope they need a SWT, and consequently which token policy and rules should apply when constructing the SWT.

Here are just some of the reasons you might need multiple scopes:

  • A multi-tenant resource server would probably need different rules per tenant.
  • A single-tenant resource server with distinct sets of independently protected resources.

But for our purposes one scope is enough.

> acm create scope
    -name: odata-service-scope
    -appliesto:http://odata.mydomain.com
    -tokenpolicyid:tp_aaf3fd9ca64d4471a5c7b5c572c087fb

For ‘appliesto’ I chose the URL for our planned OData service. Notice too that we bind the scope to the token policy we just created via its id.

You can retrieve this scope by executing this:

> acm getall scope
             Count: 1

                id: scp_c028015be790fb5d3ead59307bb3e537d586eac0
              name: odata-service-scope
         appliesto: http://odata.mydomain.com
     tokenpolicyid: tp_aaf3fd9ca64d4471a5c7b5c572c087fb

You’ll need to know the scopeid to add Rules to the scope.

Rules

ACS has one real job, which you could sum up with these four words: “Claims in, claims out”. Essentially ACS is just a claims transformation engine, and the transformation is achieved by applying a series of rules.

The rules are associated with a scope, and tell ACS how to transform input claims for the target scope (via applies_to) into signed output claims.

In our simple example, all we really want to say is this: ‘If you know the key of my issuer, we’ll sign a claim that you are a User’.

To do that we need this rule:

> acm create rule
-name:partner-is-user
-scopeid:scp_c028015be790fb5d3ead59307bb3e537d586eac0
-inclaimissuerid:iss_89f12a7ed023c3b7b0a85f32dff96fed2014ad0a
-inclaimtype:Issuer
-inclaimvalue:partner
-outclaimtype:Roles
-outclaimvalue:User

"Issuer" is a special kind of input claim type (normally the input claim type is just a string that needs to be found in an incoming SWT) that says anyone who demonstrates direct knowledge of the issuer key will receive a SWT that includes the output claim specified in the rule*.

So this particular rule means anyone who issues an OAuth WRAP request with the Issuer name as the wrap_name and the Issuer key as the wrap_password will receive a signed SWT that claims their "Roles=User".

*NOTE: there are other ways that this particular rule can match, but they are outside the scope of this blog post, check out this excellent guide by Keith Brown for more.

To test that our rule is working try this:

WebClient client = new WebClient();
client.BaseAddress = "https://{your-namespace-goes-here}.accesscontrol.windows.net";

NameValueCollection values = new NameValueCollection();
values.Add("wrap_name", "partner");
values.Add("wrap_password", "9QKoZgtxxU4ABv8uiuvaR+k0cOmUxfEOE0qfPK2lCJY=");
values.Add("wrap_scope", "http://odata.mydomain.com");

byte[] responseBytes = client.UploadValues("WRAPv0.9", "POST", values);

string response = Encoding.UTF8.GetString(responseBytes);
string token = response.Split('&')
    .Single(value => value.StartsWith("wrap_access_token="))
    .Split('=')[1];

Console.WriteLine(token);

When I run that code I get this:

Roles%3dUser%26Issuer%3dhttps%253a%252f%252ffabrikamjets.accesscontrol.windows.net%252f%26Audience%3dhttp%253a%252f%252fodata.mydomain.com%26ExpiresOn%3d1282071821%26HMACSHA256%3d%252bc2ZiBpm74Etw%252bAkXY1jNwme8acHfIYd9AAtGMckoss%253d

As you can see the Roles%3dUser is simply a UrlEncoded version of Roles=User, so assuming this is a correctly signed SWT (more on that in Step 3) our rule appears to be working.
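If you want to inspect all the claims programmatically, you can decode the token and split it into name/value pairs. This is just an illustrative helper (not part of the ACS samples), assuming the '&'-delimited name=value format shown above:

```csharp
// Illustrative helper: decode a SWT from its wire form and split it
// into name/value claim pairs. Each value is URL-encoded once more
// inside the token, so it is decoded a second time.
static Dictionary<string, string> ParseSwt(string urlEncodedToken)
{
    string token = HttpUtility.UrlDecode(urlEncodedToken);
    return token
        .Split('&')
        .Select(pair => pair.Split(new[] { '=' }, 2))
        .ToDictionary(
            parts => parts[0],
            parts => parts.Length == 2 ? HttpUtility.UrlDecode(parts[1]) : "");
}
```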

Step 3 – Creating the OAuth WRAP HttpModule

Now we have our ACS server correctly configured, the next step is to create a HttpModule to crack open SWTs and map them into principals for use inside Data Services.

Let’s just take the code we wrote in parts 4 & 5 and rework it for OAuth WRAP, first by creating an OAuthWrapAuthenticationModule that looks like this:

public class OAuthWrapAuthenticationModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.AuthenticateRequest +=
           new EventHandler(context_AuthenticateRequest);
    }
    void context_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        if (!OAuthWrapAuthenticationProvider.Authenticate(application.Context))
        {
            Unauthenticated(application);
        }

    }
    void Unauthenticated(HttpApplication application)
    {
        // you could ignore this and rely on authorization logic to
        // intercept requests etc. But in this example we fail early.
        application.Context.Response.Status = "401 Unauthorized";
        application.Context.Response.StatusCode = 401;
        application.Context.Response.AddHeader("WWW-Authenticate", "WRAP");
        application.CompleteRequest();
    }
    public void Dispose() { }
}

As you can see this relies on an OAuthWrapAuthenticationProvider which looks like this:

public class OAuthWrapAuthenticationProvider
{
    static TokenValidator _validator = CreateValidator();

    static TokenValidator CreateValidator()
    {
        string acsHostname =
            ConfigurationManager.AppSettings["acsHostname"];
        string serviceNamespace =
            ConfigurationManager.AppSettings["serviceNamespace"];
        string trustedAudience =
            ConfigurationManager.AppSettings["trustedAudience"];
        string trustedSigningKey = 
            ConfigurationManager.AppSettings["trustedSigningKey"];

        return new TokenValidator(
           acsHostname,
           serviceNamespace,
           trustedAudience,
           trustedSigningKey
        );
    }
    public static TokenValidator Validator
    {
        get { return _validator; }
    }

    public static bool Authenticate(HttpContext context)
    {
        if (!HttpContext.Current.Request.IsSecureConnection) 
            return false;

        if (!HttpContext.Current.Request.Headers.AllKeys.Contains("Authorization"))
            return false;

        string authHeader = HttpContext.Current.Request.Headers["Authorization"];

        // check that it starts with 'WRAP'
        if (!authHeader.StartsWith("WRAP "))
        {
            return false;
        }
        // the header should be in the form 'WRAP access_token="{token}"'
        // so lets get the {token}
        string[] nameValuePair = authHeader
                                    .Substring("WRAP ".Length)
                                    .Split(new char[] { '=' }, 2);

        if (nameValuePair.Length != 2 ||
            nameValuePair[0] != "access_token" ||
            !nameValuePair[1].StartsWith("\"") ||
            !nameValuePair[1].EndsWith("\""))
        {
            return false;
        }

        // trim off the leading and trailing double-quotes
        string token = nameValuePair[1].Substring(1, nameValuePair[1].Length - 2);

        if (!Validator.Validate(token))
            return false;

        var roles = GetRoles(Validator.GetNameValues(token));

        HttpContext.Current.User = new GenericPrincipal(
            new GenericIdentity("partner"),
            roles
        );               
        return true;
    }
    static string[] GetRoles(Dictionary<string, string> nameValues)
    {
        if (!nameValues.ContainsKey("Roles"))
            return new string[] { };
        else
            return nameValues["Roles"].Split(',');
    }
}

As you can see the Authenticate method does a number of things:

  • Verifies we are using HTTPS because it would be insecure to pass SWT tokens around over straight HTTP.
  • Verifies that the authorization header exists and it is a WRAP header.
  • Extracts the SWT token from the authorization header.
  • Asks a TokenValidator to validate the token. More on this in a second.
  • Then extracts the Roles claims from the token (it assumes there is a Roles claim that contains a ',' delimited list of roles).
  • Finally, if every check passes, it constructs a GenericPrincipal with a hard-coded identity of ‘partner’ and the list of roles found in the SWT, and assigns it to HttpContext.Current.User.

In our example the identity itself is hard-coded because currently our ACS rules don’t make any claims about the username; they only make role claims. If we added more ACS rules we could include a username claim too.

The TokenValidator used in the code above is lifted from Windows Azure AppFabric v1.0 C# samples, which you can find here. If you download and unzip these samples you’ll find the TokenValidator here:

~\AccessControl\GettingStarted\ASPNETStringReverser\CS35\Service\App_Code\TokenValidator.cs

Our CreateValidator() method creates a shared instance of the TokenValidator, and as you can see we are pulling these settings from web.config:

<configuration>
  …
  <appSettings>
     <add key="acsHostName" value="accesscontrol.windows.net"/>
     <add key="serviceNamespace" value="{your namespace goes here}"/>
     <add key="trustedAudience" value="http://odata.mydomain.com"/>
     <add key="trustedSigningKey" value="{your token policy key goes here}"/>
   </appSettings>
   …
</configuration>

The most interesting one is the trustedSigningKey. 

This is a key shared between ACS and the resource server (in our case our HttpModule). It is the key from the token policy we created in step 2.

The ACS server uses the token policy key to create a hash of the claims (or HMACSHA256) which gets appended to the claims to complete the SWT. Then to verify that the SWT and its claims are valid the resource server simply re-computes the hash and compares.
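The signature check the TokenValidator performs can be sketched like this, assuming the SWT format shown earlier where the signature is the final HMACSHA256 pair (the real logic lives in the sample’s TokenValidator class):

```csharp
// Sketch of the HMACSHA256 verification: re-compute the hash over the
// signed portion of the SWT using the token policy key and compare.
static bool IsSignatureValid(string swt, string tokenPolicyKeyBase64)
{
    const string marker = "&HMACSHA256=";
    int index = swt.LastIndexOf(marker, StringComparison.Ordinal);
    if (index < 0)
        return false;

    string signedPart = swt.Substring(0, index);
    string signature = HttpUtility.UrlDecode(swt.Substring(index + marker.Length));

    using (var hmac = new HMACSHA256(Convert.FromBase64String(tokenPolicyKeyBase64)))
    {
        string expected = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.ASCII.GetBytes(signedPart)));
        return expected == signature;
    }
}
```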

Now that we’ve got our module we simply need to register it with IIS via the web.config like this:

<configuration>
  …
  <system.webServer>
     <modules>
       <add name="OAuthWrapAuthenticationModule"
                 type="SimpleService.OAuthWrapAuthenticationModule"/>
     </modules>
   </system.webServer>

</configuration>

Step 4 – Creating an OData Service

Next we need to add (if you haven’t already) an OData Service.

There are lots of ways to create an OData Service using WCF Data Services. But by far the easiest way to create a read/write service is using the Entity Framework like this.

Now, because we’ve converted the OAuth WRAP SWT into a GenericPrincipal by the time requests hit our Data Service, all the authorization techniques we already know – QueryInterceptors and ChangeInterceptors – are still applicable.

So you could easily write code like this:

[QueryInterceptor("Orders")]
public Expression<Func<Order, bool>> OrdersFilter()
{        
    if (!HttpContext.Current.Request.IsAuthenticated)
        return (Order o) => false;
   
    var user = HttpContext.Current.User;
    if (user.IsInRole("User"))
        return (Order o) => true;
    else
        return (Order o) => false; 
}

And of course you can rework the HttpModule and interceptors as needed if your claims get more involved.

Step 5 – Acquiring and using a SWT Token

The final step is to write a client that will send a valid SWT with each OData request.

In part 3 we explored the available client-side hooks. So we know that we can hook up to the DataServiceContext.SendingRequest like this:

ctx.SendingRequest +=new EventHandler<SendingRequestEventArgs>(OnSendingRequest);

And in our event handler we can add headers to the outgoing request. For OAuth WRAP we need to add an Authorization header in the form:

Authorization:WRAP access_token="{YOUR SWT GOES HERE}"

NOTE: the double quotes (") are actually part of the format, but the curly brackets ({}) are not. See the string.Format call below if you have any doubts.

So our OnSendingRequest event handler looks like this:

static void OnSendingRequest(object sender, SendingRequestEventArgs e)
{
    e.RequestHeaders.Add(
        "Authorization",
        string.Format("WRAP access_token=\"{0}\"", GetToken())
    );
}

As you can see this uses GetToken() to acquire the actual SWT:

static string GetToken()
{
    if (_token == null){
       WebClient client = new WebClient();
       client.BaseAddress =
           "https://{your-namespace-goes-here}.accesscontrol.windows.net";
       NameValueCollection values = new NameValueCollection();
       values.Add("wrap_name", "partner");
       values.Add("wrap_password", "{Issuer Key goes here}");
       values.Add("wrap_scope", "http://odata.mydomain.com");
       byte[] responseBytes = client.UploadValues("WRAPv0.9", "POST", values);
       string response = Encoding.UTF8.GetString(responseBytes);
       string token = response.Split('&')
        .Single(value => value.StartsWith("wrap_access_token="))
        .Split('=')[1];

      _token = HttpUtility.UrlDecode(token); 
   }
   return _token;
}
static string _token = null;

As you can see we acquire the SWT once (by demonstrating knowledge of the Issuer key) and, assuming that is successful, we cache it for later reuse.

Finally, if we issue queries like this:

try
{
    foreach (Order order in ctx.Orders)
        Console.WriteLine(order.Number);
}
catch (DataServiceQueryException ex)
{
    //var scheme = ex.Response.Headers["WWW-Authenticate"];
    var code = ex.Response.StatusCode;
    if (code == 401)
        _token = null;
}

If our token has expired, as it will after 60 minutes, an exception will occur; we can then null out the cached SWT so any retry forces our code to acquire a new one.
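One way to package that retry, building on the _token cache and GetToken() above (the Container context type name is illustrative):

```csharp
// Illustrative retry wrapper: on a 401, drop the cached SWT and retry once,
// which forces GetToken() to acquire a fresh token on the next request.
static List<Order> GetOrdersWithRetry(Container ctx)
{
    try
    {
        return ctx.Orders.ToList();
    }
    catch (DataServiceQueryException ex)
    {
        if (ex.Response.StatusCode != 401)
            throw;
        _token = null;              // expired: clear the cached SWT
        return ctx.Orders.ToList(); // retry once with a fresh token
    }
}
```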

Summary

In this post we’ve come a long way. We’ve now got a simple OData and OAuth WRAP authentication scenario working end to end.

It is a good foundation to build upon. But there are a few things we can do to make it better.

We could:

  • Configure our ACS to federate identities across domains, and configure our client code to do SWT exchange to go from one domain to another.
  • Create an expiring cache of Principals so that we don’t need to re-validate every time a new request is received.
  • Upgrade our Principal object so it can handle more general claims rather than just User/Roles.

We’ll address these issues in Part 9.

Alex James
Program Manager
Microsoft


OData and OAuth – protecting an OData Service using OAuth 2.0


In this post you will learn how to create an OData service that is protected using OAuth 2.0, which is the OData team’s official recommendation in these scenarios:

  • Delegation: In a delegation scenario a third party (generally an application) is granted access to a user’s resources without the user disclosing their credentials (username and password) to the third party.
  • Federation: In a federation scenario a user’s credentials on one domain (perhaps their corporate network) imply access to resources on a resource domain (say a data provider). The key though is that the credentials used (if any) on the resource domain are not disclosed to the end users, and the user never discloses their credentials to the resource domain either.

So if your scenario is one of the above or some slight variation, we recommend that you use OAuth 2.0 to protect your service; it provides the utmost flexibility and power.

To explore this scenario we are going to walkthrough a real-world scenario, from end to end.

The Scenario

We’re going to create an OData service based on this Entity Framework model for managing a user’s Favorite Uris:

image

As you can see this is a pretty simple model with just Users and Favorites.

Our service should not require its own username and password, which is a sure way to annoy users today. Instead it will rely on well-known third parties like Google and Yahoo, to provide the users identity. We’ll use AppFabric Access Control Services (aka ACS) because it provides an easy way to bridge these third parties claims and rewrite them as a signed OAuth 2.0 Simple Web Token or SWT.

The idea is that we will trust email-address claims issued by our ACS service via a SWT in the Authorization header of the request. We’ll then use a HttpModule to convert that SWT into a WIF ClaimsPrincipal.

Then our service’s job will be to map the EmailAddress in the incoming claim to a User entity in the database via the User’s EmailAddress property, and use that to enforce Business Rules.
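That mapping might look something like this sketch (the claim-to-Name mapping and the auto-create behavior follow the business rules below; method and property names are taken from the model above):

```csharp
// Sketch: find the User row matching the authenticated email-address claim,
// creating one the first time an unknown email-address hits the system.
User GetOrCreateCurrentUser(FavoritesModelContainer db)
{
    // Assumes the HttpModule mapped the email-address claim to Identity.Name.
    string email = HttpContext.Current.User.Identity.Name;

    User user = db.Users.SingleOrDefault(u => u.EmailAddress == email);
    if (user == null)
    {
        user = new User { EmailAddress = email };
        db.Users.AddObject(user);
        db.SaveChanges();
    }
    return user;
}
```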

Business Rules

We need our Data Service to:

  • Automatically create a new user whenever someone with an unknown email-address hits the system.
  • Allow only administrators to query, create, update or delete users.
  • Allow Administrators to see all favorites.
  • Allow Administrators to update and delete all favorites.
  • Allow Users to see public favorites and their private favorites.
  • Allow Users to create new favorites. But the OwnerId, CreateDate and ‘Public’ values should be set for them, i.e. what the user sends on the wire will be ignored.
  • Allow Users to edit and delete only their favorites.
  • Allow un-authenticated requests to query only public favorites.
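As a preview of the implementation, the visibility rules above could be expressed as a QueryInterceptor along these lines (a sketch; GetCurrentUser and the Administrator flag are assumptions):

```csharp
// Sketch of the favorites visibility rules as a QueryInterceptor.
[QueryInterceptor("Favorites")]
public Expression<Func<Favorite, bool>> FavoritesFilter()
{
    // Un-authenticated requests see only public favorites.
    if (!HttpContext.Current.Request.IsAuthenticated)
        return f => f.Public;

    User user = GetCurrentUser(); // hypothetical lookup via the email claim
    if (user.Administrator)
        return f => true;         // administrators see everything

    // Users see public favorites plus their own private ones.
    return f => f.Public || f.OwnerId == user.Id;
}
```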

Implementation

Prerequisites

  • Windows Server 2008 R2 or Windows 7
  • Visual Studio 2010
  • Internet Information Services (IIS) enabled with IIS Metabase and IIS6 Configuration Compatibility
  • Windows Identity Foundation (WIF) (http://go.microsoft.com/fwlink/?LinkId=204657)
  • An existing Data Service Project that you want to protect.

Creating our Data Service

First we add a DataService that exposes our Entity Framework model like this:

public class Favorites : DataService<FavoritesModelContainer>
{
   // This method is called only once to initialize service-wide policies.
   public static void InitializeService(DataServiceConfiguration config)
   {
      config.SetEntitySetAccessRule("*", EntitySetRights.All);
      config.SetEntitySetPageSize("*", 100);
      config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
   }
}
 

Authentication

Configuring ACS

You can use https://portal.appfabriclabs.com/ to create an AppFabric project, which will allow you to trial Access Control Services (or ACS) for free. The steps involved are: 

  1. Sign in with your LiveId (or create a new one). Once you’ve logged on you’ll see something like this:

    image
  2. Click the ‘create a project’ link and choose a name for your project.

    image
  3. Click on your new project:

    image
  4. Click ‘Add Service Namespace’ and choose a service namespace that is available:

    image

    Then you will see this:

    image
  5. You’ll have to wait about 20 seconds for Azure to provision your namespace. Once it is active click on the ‘Access Control’ link:

    image
  6. Click on ‘Identity Providers’ which will allow you to configure ACS to accept identities from Google, Yahoo and Live, by clicking ‘Add Identity Provider’ on the screen below:

    image
  7. Once you’ve added Google and Yahoo click on ‘Return to Access Control Service’
  8. Click on ‘Relying Party Applications’:

    image

    NOTE: As you can see there is already a ‘Relying Party Application’ called AccessControlManagement. That is the application we are currently using that manages our ACS instance. It trusts our ACS to make claims about the current user’s identity.

    As you can see this management application thinks I am an administrator (top right corner). This is because I logged on to AppFabric using LiveId as odatademo@hotmail.com who is the owner of this Service Namespace.

    Now we can create a relying party – i.e. something to represent our OData favorites service – which will ‘rely’ on ACS to make claims about who is making the request, to do this:
  9. Click on ‘Add Relying Party Application’.

    image
    image
  10. Fill in the form like this and then click ‘Save’

    Name: Choose a name that represents your Application
    Realm: Choose the ‘domain’ that you intend to host your application at. This will work even if you are testing on localhost first, so long as the web.config settings that configure your OAuth security module match.
    Return URL: Choose some url relative to your domain, like in the above sample. Note this is not needed by the server; it is only needed when we write a client – which we will do in the next blog post. You *will* need to change this value as you move from testing to live deployment, because your clients will actually follow this link.
    Error URL: Leave this blank
    Token format: Choose SWT (i.e. a Simple Web Token which can be embedded in request headers).
    Token lifetime (secs): Leave at the default.
    Identity providers: Leave the default.
    Rule groups: Leave the default.
    Token signing key: Click ‘Generate’ to produce a key or paste an existing key in.
    Effective date: Leave the default.
    Expiration date: Leave the default.
  11. Click on ‘Return to Access Control Service’.
  12. Click on ‘Rule Groups’
  13. Click on ‘Default Rule Group for [your relying party]’

    image
  14. Click on ‘Generate Rules’
  15. Leave all Identity Providers checked and click the ‘Generate’ button.

    You should see these rules get generated automatically:

    image

This set of rules will take claims from Google, Yahoo and Windows Live Id, and pass them through untouched, signing them with the Token Signing Key we generated earlier.


Notice that LiveId claims don’t include an ‘emailaddress’ or ‘name’, so if we want to support LiveId our OAuth module on the server will need to figure out a way to convert a ‘nameidentifier’ claim into a ‘name’ and ‘emailaddress’ which is beyond the scope of this blog post.

At this point we’ve finished configuring ACS, and we can configure our OData Service to trust it.
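For reference, the body of the SWT that ACS issues is nothing more than a form-encoded set of claims with an HMAC signature appended. A purely illustrative token (line breaks added, claim values made up) looks like this:

http%3a%2f%2fschemas.xmlsoap.org%2fws%2f2005%2f05%2fidentity%2fclaims%2femailaddress=someone%40example.com
&http%3a%2f%2fschemas.xmlsoap.org%2fws%2f2005%2f05%2fidentity%2fclaims%2fname=Someone
&http%3a%2f%2fschemas.microsoft.com%2faccesscontrolservice%2f2010%2f07%2fclaims%2fidentityprovider=Google
&Audience=http%3a%2f%2ffavorites.odata.org%2f
&ExpiresOn=1286743200
&Issuer=https%3a%2f%2fodatafavorites.accesscontrol.appfabriclabs.com%2f
&HMACSHA256=<url-encoded signature>

The Audience matches the Realm we configured for the relying party, and the signature is computed with the Token Signing Key we generated – which is how our service will know the claims really came from ACS.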

Server Building Blocks

We will rely on a sample the WIF team recently released that includes a lot of useful OAuth 2.0 helper code. This code builds on WIF, adding some very useful extensions.

The most useful code for our purposes is a class called OAuthProtectionModule. This is an HttpModule that converts claims made via a Simple Web Token (SWT) in the incoming request’s Authorization header into a ClaimsPrincipal, which it then assigns to HttpContext.Current.User.

If you’ve been following the OData and Authentication series, this general approach will be familiar to you. It means that by the time calls get to your OData service the HttpContext.Current.User has the current user (if any) and can be used to make decisions about whether to authorize the request.
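Under the covers, the heart of what a module like this has to do is re-compute the HMAC over the token body and compare it with the signature ACS appended. Here is a simplified sketch of just that check – the method shown is illustrative, not the sample’s actual API, and the key is the Base64-decoded RelyingPartySigningKey from our configuration:

// Illustrative sketch only – the sample’s real code also validates the
// Issuer, Audience and ExpiresOn values before trusting any claims.
private static bool IsHmacValid(string swt, byte[] signingKey)
{
   // An SWT is form-encoded; the final pair is HMACSHA256=<url-encoded signature>.
   const string marker = "&HMACSHA256=";
   int index = swt.LastIndexOf(marker, StringComparison.Ordinal);
   if (index < 0) return false;

   string signedContent = swt.Substring(0, index);
   string signature = HttpUtility.UrlDecode(swt.Substring(index + marker.Length));

   using (var hmac = new HMACSHA256(signingKey))
   {
      byte[] computed = hmac.ComputeHash(Encoding.ASCII.GetBytes(signedContent));
      return Convert.ToBase64String(computed) == signature;
   }
}

If the signature checks out, the module parses the remaining name/value pairs into claims and assigns the resulting ClaimsPrincipal to HttpContext.Current.User.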

Configuration

There is a lot of code in the WIF sample that we don’t need. All you really need is the OAuthProtectionModule, so my suggestion is you pull that out into a separate project and grab classes from the sample as required. When I did that I moved things around a little and ended up with something that looked like this:

image

You might want to simplify the SamplesConfiguration class too, to remove unnecessary configuration information. I also decided to move the actual configuration into the web.config. When you make those changes you should end up with something like this:

public static class SamplesConfiguration
{
   public static string ServiceNamespace
   {   
      get
      {
         return ConfigurationManager.AppSettings["ServiceNamespace"];
      }
   }

   public static string RelyingPartyRealm
   {
      get
      {
         return ConfigurationManager.AppSettings["RelyingPartyRealm"];
      }
   }

   public static string RelyingPartySigningKey
   {
      get
      {
         return ConfigurationManager.AppSettings["RelyingPartySigningKey"];
      }
   }

   public static string AcsHostUrl
   {
      get
      {
         return ConfigurationManager.AppSettings["AcsHostUrl"];
      }
   }
}

Then you need to add your configuration information to your web.config:

<!-- this is the Relying Party signing key we generated earlier, i.e. the key ACS will use to sign the SWT – 
     which our module can verify by re-signing and comparing -->
<add key="RelyingPartySigningKey" value="cx3SesVUdDE0yGYD+86BLzyffu0xPBRGUYR4wKPpklc="/>
<!-- the dns name of the SWT issuer -->
<add key="AcsHostUrl" value="accesscontrol.appfabriclabs.com"/>
<!-- this is the ACS service namespace for your OData service -->
<add key="ServiceNamespace" value="odatafavorites"/>
<!-- this is the intended url of your service (you don’t need to use a local address during development
     because it isn’t verified) -->
<add key="RelyingPartyRealm" value="http://favorites.odata.org/"/>

 With these values in place the next step is to enable the OAuthProtectionModule too.

<system.webServer>
   <validation validateIntegratedModeConfiguration="false" />
   <modules runAllManagedModulesForAllRequests="true">
      <add name="OAuthProtectionModule" preCondition="managedHandler"
type="OnlineFavoritesSite.OAuthProtectionModule"/>
   </modules>
</system.webServer>

 With this in place any requests that include a correctly signed SWT in the Authorization header will have the HttpContext.Current.User set by the time you get into Data Services code.

Now we just need a function to pull back a User (from the database) based on the EmailAddress claim contained in HttpContext.Current.User, by calling GetOrCreateUserFromPrincipal(..).

Per our business requirements this function automatically creates a new non-administrator user whenever a new EmailAddress is encountered. It talks to the database using the current ObjectContext which it accesses via DataService.CurrentDataSource.

public User GetOrCreateUserFromPrincipal(IPrincipal principal)
{
   var emailAddress = GetEmailAddressFromPrincipal(principal);
   return GetOrCreateUserForEmail(emailAddress);
}

private string GetEmailAddressFromPrincipal(IPrincipal principal)
{
   if (principal == null) return null;
   else if ((principal is GenericPrincipal))
      return principal.Identity.Name;
   else if ((principal is IClaimsPrincipal))
      return GetEmailAddressFromClaim(principal as IClaimsPrincipal);
   else
      throw new InvalidOperationException("Unexpected Principal type");
}

private string GetEmailAddressFromClaim(IClaimsPrincipal principal)
{
   if (principal == null)
      throw new InvalidOperationException("Need a claims principal to extract EmailAddress claim"); 

   var emailAddress = principal.Identities[0].Claims
      .Where(c => c.ClaimType == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress")
      .Select(c => c.Value)
      .SingleOrDefault();

   return emailAddress;
}

private User GetOrCreateUserForEmail(string emailAddress)
{
   if (emailAddress == null)
      throw new InvalidOperationException("Need an emailaddress");

   var ctx = CurrentDataSource as FavoritesModelContainer;
   var user = ctx.Users.WhereDbAndMemory(u => u.EmailAddress == emailAddress).SingleOrDefault();
   if (user == null)
   {
      user = new User
      {
         Id = Guid.NewGuid(),
         EmailAddress = emailAddress,
         CreatedDate = DateTime.Now,
         Administrator = false
      };
      ctx.Users.AddObject(user);
   }
   return user;
}

Real World Note:

One thing that is interesting about this code is the call to WhereDbAndMemory(..) in GetOrCreateUserForEmail(..). Initially it was just a normal Where(..) call.

But that introduced a pretty sinister bug.

It turned out that my query interceptors / change interceptors were often being called multiple times in a single request, and because this method creates a new user (without saving it to the database) every time it is called, it was creating more than one user for the same emailAddress – which later failed the SingleOrDefault() test.

The solution is to look for any unsaved Users in the ObjectContext, before creating another User. To do this I wrote a little extension method that allows you to query both the Database and unsaved changes in one go:

 public static IEnumerable<T> WhereDbAndMemory<T>(
   this ObjectQuery<T> sequence,
   Expression<Func<T, bool>> filter) where T: class
{
   var sequence1 = sequence.Where(filter).ToArray();
   var state = EntityState.Added | EntityState.Modified | EntityState.Unchanged;
   var entries = sequence.Context.ObjectStateManager.GetObjectStateEntries(state);
   var merged = sequence1.Concat(
      entries.Select(e => e.Entity).OfType<T>().Where(filter.Compile())
   ).Distinct();
   return merged;
}

By using this function we can be sure to only ever create one User for a particular emailAddress. 

Authorization

To implement our required business rules we need to create a series of Query and Change Interceptors that allow different users to do different things.

Our first interceptor controls who can query users:

[QueryInterceptor("Users")]
public Expression<Func<User, bool>> FilterUsers()
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      throw new DataServiceException(401, "Permission Denied");
   User user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);
   if (user.Administrator)
      return (u) => true;
   else
      throw new DataServiceException(401, "Permission Denied");
}

Per our requirement this only allows authenticated Administrators to query Users.

Next we need an interceptor that only allows administrators to modify a user:

[ChangeInterceptor("Users")]
public void ChangeUser(User updated, UpdateOperations operations)
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      throw new DataServiceException(401, "Permission Denied");
   var user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);
   if (!user.Administrator)
      throw new DataServiceException(401, "Permission Denied");
}

 And now we restrict access to Favorites:

[QueryInterceptor("Favorites")]
public Expression<Func<Favorite, bool>> FilterFavorites()
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      return (f) => f.Public == true;
   var user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);
   var emailAddress = user.EmailAddress;
   if (user.Administrator)
      return (f) => true;
   else
      return (f) => f.Public == true || f.User.EmailAddress == emailAddress;
}

 As you can see administrators see everything, users see their favorites and everything public, and non-authenticated requests get to see just public favorites.

Finally we control who can create, edit and delete favorites:

[ChangeInterceptor("Favorites")]
public void ChangeFavorite(Favorite updated, UpdateOperations operations)
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      throw new DataServiceException(401, "Permission Denied");
   // Get the current USER or create the current user...
   var user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);
   // Handle Inserts...
   if ((operations & UpdateOperations.Add) == UpdateOperations.Add)
   {
      // fill in the OwnerId, CreatedDate and Public properties
      updated.OwnerId = user.Id;
      updated.CreatedDate = DateTime.Now;
      updated.Public = false;
   }
   else if ((operations & UpdateOperations.Change) == UpdateOperations.Change)
   {
      // Administrators can do whatever they want.
      if (user.Administrator)
         return;
      // We don't trust the OwnerId on the wire (updated.OwnerId) because 
      // we should never do security checks based on something that the client
      // can modify!!!
      var original = GetOriginal(updated);
      if (original.OwnerId == user.Id)
      {
         // non-administrators can't modify these values.
         updated.OwnerId = user.Id;
         updated.CreatedDate = original.CreatedDate;
         updated.Public = original.Public;
         return;
     }

      // if we got here... they aren't allowed to do anything!
      throw new DataServiceException(401, "Permission Denied");
   }
   else if ((operations & UpdateOperations.Delete) == UpdateOperations.Delete)
   {
      // in a delete operation you can’t update the OwnerId – it is impossible
      // in the protocol, so it is safe to just check that.
      if (updated.OwnerId != user.Id && !user.Administrator)
         throw new DataServiceException(401, "Permission Denied");
   }
}

Unauthenticated change requests are not allowed.

For additions we always set the ‘OwnerId’, ‘CreatedDate’ and ‘Public’ properties, overriding whatever was sent on the wire.

For updates we allow administrators to make any changes, whereas owners can edit only their own favorites, and they can’t change the ‘OwnerId’, ‘CreatedDate’ or ‘Public’ properties.

It is also very important to understand that we have to get the original values before we check to see if someone is the owner of a particular favorite. We do this using this function that leverages some low level Entity Framework code:

private Favorite GetOriginal(Favorite updated)
{
   // For MERGE based updates (which is the default) 'updated' will be in the
   // ObjectContext.ObjectStateManager.
   // For PUT based updates 'updated' will NOT be in the
   // ObjectContext.ObjectStateManager, but it will contain a copy
   // of the same entity.

   // So to normalize we should find the ObjectStateEntry in the ObjectStateManager
   // by EntityKey not by Entity.
   var entityKey = new EntityKey("FavoritesModelContainer.Favorites","Id", updated.Id);
   var entry = CurrentDataSource.ObjectStateManager.GetObjectStateEntry(entityKey);
   // Now we have the entity lets construct a copy with the original values.
   var original = new Favorite
   {
      Id = entry.OriginalValues.GetGuid(entry.OriginalValues.GetOrdinal("Id")),
      CreatedDate = entry.OriginalValues.GetDateTime(entry.OriginalValues.GetOrdinal("CreatedDate")),
      Description = entry.OriginalValues.GetString(entry.OriginalValues.GetOrdinal("Description")),
      Name = entry.OriginalValues.GetString(entry.OriginalValues.GetOrdinal("Name")),
      OwnerId = entry.OriginalValues.GetGuid(entry.OriginalValues.GetOrdinal("OwnerId")),
      Public = entry.OriginalValues.GetBoolean(entry.OriginalValues.GetOrdinal("Public")),
      Uri = entry.OriginalValues.GetString(entry.OriginalValues.GetOrdinal("Uri")),
   };

   return original;
} 

This constructs a copy of the unmodified entity setting all the properties from the original values in the ObjectStateEntry. While we don’t actually need all the original values, I personally hate creating a function that only does half a job; it is a bug waiting to happen.

Finally administrators can delete any favorites but users can only delete their own.

Summary

We’ve gone from zero to hero in this example, all our business rules are implemented, our OData Service is protected using OAuth 2.0 and everything is working great. The only problem is we don’t have a working client.

So in the next post we’ll create a Windows Phone 7 application for our OData service that knows how to authenticate.

Alex James
Program Manager
Microsoft

Connecting to an OAuth 2.0 protected OData Service


This post creates a Windows Phone 7 client application for the OAuth 2.0 protected OData service we created in the last post.

Prerequisites:

To run this code you will need:

Our application:

Our application is a very basic Windows Phone 7 (WP7) application that allows you to browse favorites and if logged in create new favorites and see your personal favorites too. The key to enabling all this is authenticating our application against our OAuth 2.0 protected OData service, which means somehow acquiring a signed Simple Web Token (SWT) with the current user’s emailaddress in it.

Our application’s authentication experience:

When first started our application will show public favorites like this:

image

To see more or create new favorites you have to logon.
Clicking the logon button (the ‘gear’ icon) makes the application navigate to a page with an embedded browser window that shows a home realm discovery logon page – powered by ACS. This home realm discovery page allows the user to choose one of the 3 identity providers we previously configured our relying party (i.e. our OData Service) to trust in ACS: 

image

 When the user clicks on the identity provider they want to use, they will browse to that identity provider and be asked to Logon:

image

 Once logged on they will be asked to grant our application permission to use their emailaddress:

image

 If you grant access the browser redirects back to ACS and includes the information you agreed to share – in this case your email address.

Note: because we are using OAuth 2.0 and related protocols your Identity provider probably has a way to revoke access to your application too. This is what that looks like in Google:

 

image

 

The final step is to acquire a SWT signed using the trusted signing key (only ACS and the OData service know this key) so that we can prove our identity with the OData service. This is a little involved so we’ll dig into it in more detail when we look at the code – but the key takeaway is that all subsequent requests to the OData service will use this SWT to authenticate, and gain access to more information:

image
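Concretely, once the application holds a SWT, every request it sends to the service carries the token in the Authorization header, along these lines (token abbreviated):

GET /OnlineFavoritesSite/Favorites.svc/Favorites HTTP/1.1
Host: localhost
Accept: application/atom+xml
Authorization: OAuth <signed SWT goes here>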

 

How this all works:

There is a fair bit of generic application code in this example which I’m not going to walk-through; instead we’ll just focus on the bits that are OData and authentication specific.

Generating Proxy Classes for your OData Service

Unlike Silverlight or Windows applications, we don’t have an ‘Add Service Reference’ feature for WP7 projects yet. Instead we need to generate proxy classes by hand using DataSvcUtil.exe, something like this:

DataSvcUtil.exe /out:"data.cs" /uri:"http://localhost/OnlineFavoritesSite/Favorites.svc"

Once you’ve got your proxy classes simply add them to your project.

I chose to create an ApplicationContext class to provide access to resources shared across different pages in the application. So this property, which lets people get to the DataServiceContext, hangs off that ApplicationContext:

public FavoritesModelContainer DataContext {
   get{
      if (_ctx == null)
      {
         _ctx = new FavoritesModelContainer(new Uri("http://localhost/OnlineFavoritesSite/Favorites.svc/"));
         _ctx.SendingRequest += new EventHandler<SendingRequestEventArgs>(SendingRequest);
      }
      return _ctx;
   }
}

Notice that we’ve hooked up to the SendingRequest event on our DataServiceContext, so if we know we are logged on (we have a SWT token in our TokenStore) we include it in the Authorization header.

 void SendingRequest(object sender, SendingRequestEventArgs e)
{
   if (IsLoggedOn)
   {
      e.RequestHeaders["Authorization"] = "OAuth " + TokenStore.SecurityToken;
   }
}

Then whenever the home page is displayed or refreshed the Refresh() method is called:

 private void Refresh()
{
   AddFavoriteButton.IsEnabled = App.Context.IsLoggedOn;
   var favs = new DataServiceCollection<Favorite>(App.Context.DataContext);
   lstFavorites.ItemsSource = favs;
   favs.LoadAsync(new Uri("http://localhost/OnlineFavoritesSite/Favorites.svc/Favorites?$orderby=CreatedDate desc"));
   App.Context.Favorites = favs;
}

Notice that this code binds the lstFavorites control, used to display favorites, to a new DataServiceCollection that we load asynchronously via a hardcoded OData query URI. This means whenever Refresh() is executed we issue the same query; the only difference is that, thanks to our earlier SendingRequest event handler, we also send an Authorization header when we are logged on.

NOTE: If you are wondering why I used a hand coded URL rather than LINQ to produce the query, it’s because the current version of the WP7 Data Services Client library doesn’t support LINQ. We are working to add LINQ support in the future.

Logging on

The logon button is handled by the OnSignIn event that navigates the app to the SignOn page:

private void OnSignIn(object sender, EventArgs e)
{
   if (!App.Context.IsLoggedOn)
   {
      NavigationService.Navigate(new Uri("/SignIn.xaml", UriKind.Relative));
   }
   else
   {
      App.Context.Logout();
      Refresh();
   }
}

The SignIn.xaml file is a modified version of the one in the Access Control Services Phone sample. As mentioned previously, it has an embedded browser (in fact the browser is embedded in a generic sign-on control called AccessControlServiceSignIn). The code behind the SignIn page looks like this:

public const string JSON_HRD_url = "https://odatafavorites.accesscontrol.appfabriclabs.com:443/v2/metadata/IdentityProviders.js?protocol=wsfederation&realm=http%3a%2f%2ffavourites.odata.org%2f&reply_to=http%3a%2f%2flocalhost%2fOnlineFavoritesSite%2fSecurity%2fAcsPostBack&context=&version=1.0&callback=";

public SignInPage()
{
   InitializeComponent();
}

private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e)
{
   SignInControl.RequestSecurityTokenResponseCompleted += new EventHandler<RequestSecurityTokenResponseCompletedEventArgs>(SignInControl_GetSecurityTokenCompleted);
   SignInControl.GetSecurityToken(new Uri(JSON_HRD_url));
}

This tells the control to GetSecurityToken from ACS. The JSON_HRD_url points to a URL exposed by ACS that returns the list of possible identity providers in JSON format.
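The JSON that comes back is an array describing each configured identity provider. Trimmed and reformatted, an entry looks roughly like this (field values illustrative):

[
  {
    "Name": "Google",
    "LoginUrl": "https://www.google.com/accounts/...",
    "LogoutUrl": "",
    "ImageUrl": "",
    "EmailAddressSuffixes": []
  }
]

The AccessControlServiceSignIn control uses the Name values to build the home realm discovery list, and navigates its embedded browser to the chosen LoginUrl.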

The reply_to portion of the string corresponds to an MVC action we are going to add to our OData service website to get the SWT token into our WP7 application.

You can configure the url of your MVC action via the Relying Party screen for your application in ACS:

image

Once you’ve set the Return URL correctly, you can get the JSON home realm discovery URL from ACS: click on ‘Application Integration’, then ‘Logon Pages’, then click on your Relying Party, and you should see something like this:

image

 The second URL is the one we need.

That’s essentially all we need on the client for security purposes. Remember though we need a page in our website that acts as the Return URL.

We choose this URL ‘http://localhost/OnlineFavoritesSite/Security/AcsPostBack’ to receive the response so we need to create a SecurityController with an AcsPostBack action something like this:

public class SecurityController : Controller
{
   const string Page = @"<html xmlns=""http://www.w3.org/1999/xhtml"">
<head runat=""server"">
<title></title>
<script type=""text/javascript"">
window.external.Notify('{0}');
</script>
</head>
<body>
</body>
</html>";
   //
   // POST: /Security/AcsPostBack
   [HttpPost]
   [ValidateInput(false)]
   public string AcsPostBack()
   {
      RequestSecurityTokenResponseDeserializer tokenResponse = new RequestSecurityTokenResponseDeserializer(Request);
      string page = string.Format(Page, tokenResponse.ToJSON());
      return page;
   }
}

This accepts a POST from ACS. Because ACS is on a different domain, we need to include [ValidateInput(false)] which allows Cross Site posts.

The post from ACS will basically be a SAML token that includes a set of claims about the current user. But because our Phone client is going to use REST, we need to convert the SAML token, which is not header friendly, into a SWT token that is.

The RequestSecurityTokenResponseDeserializer class (again from the ACS phone sample) does this for us. It repackages the claims as a SWT token and wraps the whole thing up in a bit of JSON so it can be embedded in a little JavaScript code.

The HTML we return calls window.external.Notify('{0}') to get the token to the AccessControlServiceSignIn control in the phone application, which then puts it in TokenStore.SecurityToken so that it is available for future requests.

Once we have finally got the token this event fires:

void SignInControl_GetSecurityTokenCompleted(
   object sender,
   RequestSecurityTokenResponseCompletedEventArgs e)
{  
   if (e.Error == null)
   {
      if (NavigationService.CanGoBack)
      {
         NavigationService.GoBack();
      }
   }
}

This code takes us back to the Main page, which forces a refresh, which in turn re-queries the OData service, this time with the SWT token, which gives the user access to all their personal favorites and allows them to create new favorites.

Mission accomplished!

Summary

In this post you learned how to use ACS and the Data Services Client for WP7 to authenticate with and query an OAuth 2.0 protected OData service. If you have any questions let me know.

Alex James
Program Manager
Microsoft

WCF Data Services 5.4.0 Prerelease


Recently we uploaded an RC for our upcoming 5.4.0 release. This release will be NuGet packages only.

What is in the release:

Client deserialization/serialization hooks

We have a number of investments planned in the “request pipeline” area. In 5.4.0 we have a very big set of hooks for reaching into and modifying data as it is being read from or written to the wire format. These hooks provide extensibility points that enable a number of different scenarios such as modifying wire types, property names, and more.

Instance annotations on atom payloads

As promised in the 5.3.0 release notes, we now support instance annotations on Atom payloads. Instance annotations are an extensibility feature in OData feeds that allow OData requests and responses to be marked up with annotations that target feeds, single entities (entries), properties, etc. We do still have some more work to do in this area, such as the ability to annotate properties.

Client consumption of instance annotations

Also in this release, we have added APIs to the client to enable the reading of instance annotations on the wire. These APIs make use of the new deserialization/serialization pipelines on the client (see above). This API surface includes the ability to indicate which instance annotations the client cares about via the Prefer header. This will streamline the responses from OData services that honor the odata.include-annotations preference.
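As a concrete example, a client can ask the service to include only matching annotations in the response with a header along these lines (the annotation filter value shown is illustrative):

Prefer: odata.include-annotations="display.*"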

Simplified transition between Atom and JSON formats

In this release we have bundled a few less-noticeable features that should simplify the transition between the Atom and (the new) JSON format. (See also the bug fixes below on type resolver fixes.)

Bug fixes

In addition to the features above, we have included fixes for the following notable bugs:

  • Fixes an issue where reading a collection of complex values would fail if the new JSON format was used and a type resolver was not provided
  • Fixes an issue where ODataLib was not escaping literal values in IDs and edit links
  • Fixes an issue where requesting the service document with application/json;odata=nometadata would fail
  • Fixes an issue where using the new JSON format without a type resolver would create issues with derived types
  • (Usability bug) Makes it easier to track the current item in ODataLib in many situations
  • Fixes an issue where the LINQ provider on the client would produce $filter instead of a key expression for derived types with composite keys
  • (Usability bug) Fixes an issue where the inability to set EntityState and ETag values forced people to detach and attach entities for some operations
  • Fixes an issue where some headers required a case-sensitive match on the WCF DS client
  • Fixes an issue where 304 responses were sending back more headers than appropriate per the HTTP spec
  • Fixes an issue where a request for the new JSON format could result in an error that used the Atom format
  • Fixes an issue where it was possible to write an annotation value that was invalid according to the term
  • Fixes an issue where PATCH requests for OData v1/v2 payloads would return a 500 error rather than 405

 

What to expect over the next six months:

We will blog about this in more detail soon, but we have multiple releases planned that have some level of overlap. We should be publishing a 5.5.0 alpha soon (with additional URI parser functionality for Web API’s OData stack) and in a couple of months you should see a very early alpha of 6.0.0. We’re not ready to say much about 6.0.0 yet other than the fact that it will support OData v4 and will probably have some breaking changes, so we want to get it out there as soon as possible because…

We want your feedback

We always appreciate your comments on the blog posts, forums, Twitterverse and e-mail (mastaffo@microsoft.com). We do take your feedback seriously and prioritize accordingly. We would encourage you strongly to start pulling down these early bits, testing with your existing services, and telling us where things break, where we’ve gone too far, and where we haven’t gone far enough.

WCF Data Services 5.4.0 Release


Today we are releasing version 5.4.0 of WCF Data Services. As mentioned in the prerelease post, this release will be NuGet packages only. That means that we are not releasing an updated executable to the download center. If you create a new WCF Data Service or add a reference to an OData service, you should follow the standard procedure for making sure your NuGet packages are up-to-date. (Note that this is standard usage of NuGet, but it may be new to some WCF Data Services developers.)
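For example, from the NuGet Package Manager Console, updating the client library looks like this (run the equivalent for whichever of the WCF Data Services packages your project references):

PM> Update-Package Microsoft.Data.Services.Client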

Samples

If you haven’t noticed, we’ve been releasing a lot more frequently than we used to. As we adopted this rapid cadence, our documentation has fallen somewhat behind and we recognize that makes it hard for you to try out the new features. We do intend to release some samples demonstrating how to use the features below but we need a few more days to pull those samples together and did not want to delay the release. Once we get some samples together we will update this blog post (or perhaps add another blog post if we need more commentary than a gist can convey).

What is in the release:

Client deserialization/serialization hooks

We have a number of investments planned in the “request pipeline” area. In 5.4.0 we have a very big set of hooks for reaching into and modifying data as it is being read from or written to the wire format. These hooks provide extensibility points that enable a number of different scenarios such as modifying wire types, property names, and more.

Instance annotations on atom payloads

As promised in the 5.3.0 release notes, we now support instance annotations on Atom payloads. Instance annotations are an extensibility feature in OData feeds that allow OData requests and responses to be marked up with annotations that target feeds, single entities (entries), properties, etc. We do still have some more work to do in this area, such as the ability to annotate properties.

Client consumption of instance annotations

Also in this release, we have added APIs to the client to enable the reading of instance annotations on the wire. These APIs make use of the new deserialization/serialization pipelines on the client (see above). This API surface includes the ability to indicate which instance annotations the client cares about via the Prefer header. This will streamline the responses from OData services that honor the odata.include-annotations preference.

Simplified transition between Atom and JSON formats

In this release we have bundled a few less-noticeable features that should simplify the transition between the Atom and (the new) JSON format. (See also the bug fixes below on type resolver fixes.)

Bug fixes

In addition to the features above, we have included fixes for the following notable bugs:

  • Fixes an issue where reading a collection of complex values would fail if the new JSON format was used and a type resolver was not provided
  • Fixes an issue where ODataLib was not escaping literal values in IDs and edit links
  • Fixes an issue where requesting the service document with application/json;odata=nometadata would fail
  • Fixes an issue where using the new JSON format without a type resolver would create issues with derived types
  • (Usability bug) Makes it easier to track the current item in ODataLib in many situations
  • Fixes an issue where the LINQ provider on the client would produce $filter instead of a key expression for derived types with composite keys
  • (Usability bug) Fixes an issue where the inability to set EntityState and ETag values forced people to detach and attach entities for some operations
  • Fixes an issue where some headers required a case-sensitive match on the WCF DS client
  • Fixes an issue where 304 responses were sending back more headers than appropriate per the HTTP spec
  • Fixes an issue where a request for the new JSON format could result in an error that used the Atom format
  • Fixes an issue where it was possible to write an annotation value that was invalid according to the term
  • Fixes an issue where PATCH requests for OData v1/v2 payloads would return a 500 error rather than 405

We want your feedback

We always appreciate your comments on the blog posts, forums, Twitterverse and e-mail (mastaffo@microsoft.com). We do take your feedback seriously and prioritize accordingly. We are still early in the planning stages for 5.5.0 and 6.0.0, so feedback now will help us shape those releases.

WCF Data Services 5.5.0 Prerelease


It’s that time again: yesterday we uploaded an RC for the upcoming 5.5.0 release. The 5.5.0 release will be another NuGet-only release.

What is in the release:

This release has two primary features: 1) significant enhancements to the URI parser and 2) public data source providers.

URI Parser

In the 5.2.0 release ODataLib provided a way to parse $filter and $orderby expressions into a metadata-bound abstract syntax tree (AST). In the 5.5.0 release we have updated the URI parser with support for most OData URIs. The newly introduced support for parsing $select and $expand is particularly notable. With the 5.5.0 release the URI Parser is mostly done. Future releases will focus on higher-order functions to further improve the developer experience.

Note: We are still trying to determine what the right API is for $select and $expand. While the API may change before RTM, the feature is functionally complete.
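As a rough sketch of what using the parser looks like in code (types come from the Microsoft.Data.OData.Query and Microsoft.Data.Edm namespaces; the model and entity names are illustrative, and exact signatures were still settling during the prerelease):

```csharp
// Assumes an IEdmModel 'model' describing an entity set "People" of
// type MyNamespace.Person (names are illustrative).
var parser = new ODataUriParser(model, new Uri("http://example.org/svc/"));

var personType = (IEdmEntityType)model.FindDeclaredType("MyNamespace.Person");
var peopleSet = model.EntityContainers().Single().FindEntitySet("People");

// $filter and $orderby parse into a metadata-bound AST (since 5.2.0):
FilterClause filter = parser.ParseFilter("Name eq 'Bob'", personType, peopleSet);

// New in 5.5.0: parsing of full OData URI paths.
ODataPath path = parser.ParsePath(new Uri("http://example.org/svc/People(1)"));
```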

Public Data Source Providers

In this release we have made the Entity Framework and Reflection data source providers public. This gives more control to service writers. There is more work planned for the future, but the work we’ve completed already enables some advanced scenarios that were not possible earlier. For example, a service writer can now make use of the Entity Framework query-caching feature by intercepting the request and parameterizing the LINQ query before handing it off to Entity Framework. (Note that parameterizing a LINQ query is not the same as parameterizing a SQL query; EF always does the latter, so there are no security implications to failing to parameterize a LINQ to Entities query; the only impact is on performance.)
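To illustrate the distinction, here is a sketch of a parameterized LINQ to Entities query (the context and entity types are hypothetical):

```csharp
// A captured local variable becomes a parameter in the translated SQL,
// so EF's compiled-query cache is reused for every value of 'maxPrice'.
public IQueryable<Product> CheaperThan(MyEntities db, decimal maxPrice)
{
    return db.Products.Where(p => p.Price < maxPrice);
}

// By contrast, a literal is baked into the expression tree as a constant:
//     db.Products.Where(p => p.Price < 10m)
// Each distinct constant produces a distinct cache entry, which hurts
// performance but is never a SQL injection risk: EF still parameterizes
// values in the SQL it generates.
```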

While the potential unlocked with this release is limited, this is the first move in a direction which will unlock many previously unachievable scenarios with the built in providers.

WCF Data Services 5.5.0 Release


WCF Data Services 5.5.0 has officially been released! The 5.5.0 release will be another NuGet-only release as we did not make any updates to the Visual Studio tooling.

The last tooling update was version 5.3.0. Services created using this version of tooling should update the runtime binaries to 5.5.0 with NuGet.

What is in the release:

This release has two primary features: 1) significant enhancements to the URI parser and 2) public data source providers. In addition to the primary features, there are two secondary enhancements and over 40 bug fixes included with this release.

URI Parser

In the 5.2.0 release ODataLib provided a way to parse $filter and $orderby expressions into a metadata-bound abstract syntax tree (AST). In the 5.5.0 release we have updated the URI parser with support for most OData URIs. The newly introduced support for parsing $select and $expand is particularly notable. With the 5.5.0 release the URI Parser is mostly done. Future releases will focus on higher-order functions to further improve the developer experience. 

Public Data Source Providers

In this release we have made the Entity Framework and Reflection data source providers public. This gives more control to service writers. There is more work planned for the future, but the work we’ve completed already enables some advanced scenarios that were not possible earlier. For example, a service writer can now make use of the Entity Framework query-caching feature by intercepting the request and parameterizing the LINQ query before handing it off to Entity Framework. (Note that parameterizing a LINQ query is not the same as parameterizing a SQL query; EF always does the latter, so there are no security implications to failing to parameterize a LINQ to Entities query; the only impact is on performance.)

While the potential unlocked with this release is limited, this is the first move in a direction which will unlock many previously unachievable scenarios with the built in providers.

Performance Improvements

We constantly strive to improve performance and reliability with every release. In this release, we have increased performance by double digit percentages for service authors that want to respond with JSON but are not able to (or don’t want to) provide a data model to ODataLib.

Improved Developer Experience

In this release we have caught up with some missing IntelliSense guidance and we are uploading symbols and source to SymbolSource.org. We will upload symbols for many of our past releases as well.

Bug Fixes

In addition to the features above, we have included fixes for the following notable bugs:

  • Fixes an issue where the reflection provider would not work properly if the generic parameter provided to DataService<T> was an interface
  • Fixes an issue where some system headers were not being set when a client called BuildingRequest
  • Fixes an issue where setting InstanceContextMode to Single on DataService would result in cache result being returned for subsequent requests
  • Fixes an issue where ODataLib would sometimes allow null to be written whether or not the expected type was nullable
  • Fixes a regression in 5.4 where ODataLib started writing unnecessary type information in certain instances
  • Fixes an issue where the WCF DS client would sometimes not dispose the response if the response had no content
  • Improves a number of errors and error messages

WCF Data Services 5.6.0 Alpha


Today we are releasing updated NuGet packages and tooling for WCF Data Services 5.6.0. This is an alpha release and as such we have both features to finish as well as quality to fine-tune before we release the final version.

You will need the updated tooling to use the portable libraries feature mentioned below. The tooling installer is available from the download center.

What is in the release:

Visual Studio 2013 Support

The WCF DS 5.6.0 tooling installer has support for Visual Studio 2013. If you are using the Visual Studio 2013 Preview and would like to consume OData services, you can use this tooling installer to get Add Service Reference support for OData. Should you need to use one of our prior runtimes, you can still do so using the normal NuGet package management commands (you will need to uninstall the installed WCF DS NuGet packages and install the older WCF DS NuGet packages).

Portable Libraries

All of our client-side libraries now have portable library support. This means that you can now use the new JSON format in Windows Phone and Windows Store apps. The core libraries have portable library support for .NET 4.0, Silverlight 5, Windows Phone 8 and Windows Store apps. The WCF DS client has portable library support for .NET 4.5, Silverlight 5, Windows Phone 8 and Windows Store apps. Please note that this version of the client does not have tombstoning, so if you need that feature for Windows Phone apps you will need to continue using the Windows Phone-specific tooling.

URI Parser Integration

The URI parser is now integrated into the WCF Data Services server bits, which means that the URI parser is capable of parsing any URL supported in WCF DS. We are currently still working on parsing functions, with those areas of the code base expected to be finalized by RTW.

Public Provider Improvements

In the 5.5.0 release we started working on making our providers public. In this release we have made it possible to override the behavior of included providers with respect to properties that don’t have native support in OData v3. Specifically, you can now create a public provider that inherits from the Entity Framework provider and override a method to make enum and spatial properties work better with WCF Data Services. We have also done some internal refactoring such that we can ship our internal providers in separate NuGet packages. We hope to be able to ship an EF6 provider soon.

Known Issues

With any alpha, there will be known issues. Here are a few things you might run into:

  • We ran into an issue with a build of Visual Studio that didn’t have the NuGet Package Manager installed. If you’re having problems with Add Service Reference, please verify that you have a version of the NuGet Package Manager and that it is up-to-date.
  • We ran into an issue with build errors referencing resource assemblies on Windows Store apps. A second build will make these errors go away.

We want feedback!

This is a very early alpha (we think the final release will happen around the start of August), but we really need your feedback now, especially in regards to the portable library support. Does it work as expected? Can you target what you want to target? Please leave your comments below or e-mail me at mastaffo@microsoft.com. Thank you!


Using the new client hooks in WCF Data Services Client


What are the Request and Response Pipeline configurations in WCF Data Services Client?

In WCF Data Services 5.4 we added a new pattern to allow developers to hook into the client request and response pipelines. On the server, we have long had the concept of a processing pipeline; developers can use the processing pipeline events to tweak how the server processes requests and responses. This concept has now been added to the client (though not as an event). The feature is exposed through the Configurations property on the DataServiceContext. Configurations has two properties, ResponsePipeline and RequestPipeline. The ResponsePipeline contains configuration callbacks that influence the reading of OData payloads and the materialization of the results into CLR objects. The RequestPipeline contains configuration callbacks that influence the writing of CLR objects to the wire. Developers can build on top of this new public API and compose higher-level functionality.

That explanation might be a bit abstract, so let’s look at a real-world example. The code below demonstrates how to remove a property that is unnecessary on the client or that causes materialization issues. Previously this was difficult to do, and impossible if the payload was in the newer JSON format, but this scenario is now trivially possible with the new hooks. Below is a code snippet to remove a specific property:
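A minimal sketch follows (the entity type and property names are hypothetical; the callback shape follows the 5.4 ResponsePipeline API):

```csharp
context.Configurations.ResponsePipeline.OnEntryEnded(args =>
{
    // Map the wire type name back to the client CLR type.
    Type clrType = context.ResolveType(args.Entry.TypeName);
    if (clrType == typeof(Customer)) // Customer is a hypothetical entity type
    {
        // Properties is an IEnumerable<ODataProperty>, so we replace the
        // whole sequence rather than removing an item in place.
        args.Entry.Properties = args.Entry.Properties
            .Where(p => p.Name != "LegacyData") // hypothetical property
            .ToList();
    }
});
```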

This code uses the OnEntryEnded response configuration method to remove the property. Behind the scenes, the Microsoft.Data.OData ODataReader calls reader.Read(); as it reads through the items, depending on the ODataItem type, a call is made to all of the registered configuration callbacks for that type. A couple of notes about this code:

  1. Since ODataEntry.Properties is an IEnumerable<ODataProperty> and not an ICollection<ODataProperty>, we need to replace the entire IEnumerable instead of just calling ODataEntry.Properties.Remove().
  2. ResolveType is used here to map the wire TypeName to the client entity type. For a code-generated DataServiceContext this delegate is hooked up automatically, but if you are using DataServiceContext directly you will need to supply the delegate yourself.

What if this scenario has to occur for other properties on the same type or properties on a different type? Let’s make some changes to make this code a bit more reusable.

Extension method for removing a property from an ODataEntry:
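A sketch of such a helper (ODataEntry comes from Microsoft.Data.OData; the method name is illustrative):

```csharp
public static class ODataEntryExtensions
{
    // Removes the named properties from the entry by replacing the
    // Properties sequence (it is an IEnumerable, not an ICollection).
    public static void RemoveProperties(this ODataEntry entry,
        params string[] propertyNames)
    {
        entry.Properties = entry.Properties
            .Where(p => !propertyNames.Contains(p.Name))
            .ToList();
    }
}
```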

Extension method for removing a property from the ODataEntry on the selected type:
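A self-contained sketch of the type-aware helper (names are illustrative; the pipeline configuration methods return the configuration object, which is what makes chaining possible):

```csharp
public static class ResponsePipelineExtensions
{
    // Strips the named properties from every entry whose wire type name
    // resolves to TEntity. 'resolveType' is typically context.ResolveType.
    public static DataServiceClientResponsePipelineConfiguration
        RemoveProperties<TEntity>(
            this DataServiceClientResponsePipelineConfiguration pipeline,
            Func<string, Type> resolveType,
            params string[] propertyNames)
    {
        return pipeline.OnEntryEnded(args =>
        {
            if (resolveType(args.Entry.TypeName) == typeof(TEntity))
            {
                args.Entry.Properties = args.Entry.Properties
                    .Where(p => !propertyNames.Contains(p.Name))
                    .ToList();
            }
        });
    }
}
```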

And now finally the code that the developer would write to invoke the method above and set the configuration up:
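Assuming an extension method with the shape described above, wiring it up might look like this (the context and entity type names are hypothetical):

```csharp
var context = new MyServiceContext(new Uri("http://example.org/MyService.svc/"));

// ResolveType is hooked up automatically for code-generated contexts.
context.Configurations.ResponsePipeline
    .RemoveProperties<Customer>(context.ResolveType, "LegacyData")
    .RemoveProperties<Order>(context.ResolveType, "InternalNotes");
```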

The original code is now broken down and is more reusable. Developers can use the RemoveProperties extension above to remove any property from a type that is in the ODataEntry payload. These extension methods can also be chained together.

The example above shows how to use OnEntryEnded, but there are a number of other callbacks that can be used. The configuration callbacks on the response pipeline are:

  • OnEntryStarted
  • OnEntryEnded
  • OnFeedStarted
  • OnFeedEnded
  • OnNavigationLinkStarted
  • OnNavigationLinkEnded
  • OnEntityMaterialized
  • OnMessageReaderSettingsCreated

All of the configuration callbacks above, with the exception of OnEntityMaterialized and OnMessageReaderSettingsCreated, are called as the ODataReader reads through the feed or entry. The OnMessageReaderSettingsCreated callback is called just before the ODataMessageReader is created and before any of the other callbacks are called. OnEntityMaterialized is called after a new entity has been converted from the given ODataEntry; this callback allows developers to apply any fix-ups to an entity after it has been converted.

Now let’s move on to a sample where we use a configuration on the RequestPipeline to skip writing a property to the wire. Below is an example of an extension method that removes the specified properties before the entry is written out:
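A sketch of the request-side helper (WritingEntryArgs exposes both the ODataEntry being written and the CLR Entity it came from; names are illustrative):

```csharp
public static class RequestPipelineExtensions
{
    // Drops the named properties from outgoing entries for TEntity; no
    // type resolver is needed because the CLR entity is on the args.
    public static DataServiceClientRequestPipelineConfiguration
        RemoveProperties<TEntity>(
            this DataServiceClientRequestPipelineConfiguration pipeline,
            params string[] propertyNames)
    {
        return pipeline.OnEntryEnding(args =>
        {
            if (args.Entity is TEntity)
            {
                args.Entry.Properties = args.Entry.Properties
                    .Where(p => !propertyNames.Contains(p.Name))
                    .ToList();
            }
        });
    }
}
```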

As you can see, we are following the same pattern as the RemoveProperties extension method we wrote for the ResponsePipeline. Compared to that method, this one doesn’t require the type-resolving func, so it’s a bit simpler: the type information is available via the Entity property on the OnEntryEnding args. Again, this example only touches on OnEntryEnding. The configuration callbacks on the request pipeline are:

  • OnEntryStarting
  • OnEntryEnding
  • OnNavigationLinkStarting
  • OnNavigationLinkEnding
  • OnMessageWriterSettingsCreated

With the exception of OnMessageWriterSettingsCreated, the other configuration callbacks are called when the ODataWriter is writing information to the wire.

In conclusion, the request and response pipelines offer ways to configure how payloads are read from and written to the wire. Let us know what other questions you have about leveraging this feature.

Chris Robinson – OData Team

WCF Data Services 5.6.0 Release


Recently we released updated NuGet packages for WCF Data Services 5.6.0. You will need the updated tooling (released today) to use the portable libraries feature mentioned below with code gen.

What is in the release:

Visual Studio 2013 Support

The WCF DS 5.6.0 tooling installer has support for Visual Studio 2013. If you are using Visual Studio 2013 and would like to consume OData services, you can use this tooling installer to get Add Service Reference support for OData. Should you need to use one of our prior runtimes, you can still do so using the normal NuGet package management commands (you will need to uninstall the installed WCF DS NuGet packages and install the older WCF DS NuGet packages).

Portable Libraries

All of our client-side libraries now have portable library support. This means that you can now use the new JSON format in Windows Phone and Windows Store apps. The core libraries have portable library support for .NET 4.0, Silverlight 5, Windows Phone 8 and Windows Store apps. The WCF DS client has portable library support for .NET 4.5, Silverlight 5, Windows Phone 8 and Windows Store apps. Please note that this version of the client does not have tombstoning, so if you need that feature for Windows Phone apps you will need to continue using the Windows Phone-specific tooling.

URI Parser Integration

The URI parser is now integrated into the WCF Data Services server bits, which means that the URI parser is capable of parsing any URL supported in WCF DS. We have also added support for parsing functions in the URI Parser.

Public Provider Improvements – Reverted

In the 5.5.0 release we started working on making our providers public. In this release we hoped to make it possible to override the behavior of included providers with respect to properties that don’t have native support in OData v3, for instance enum and spatial properties. Unfortunately we ran into some non-trivial bugs with $select and $orderby and needed to cut the feature for this release.

Public Transport Layer

In the 5.4.0 release we added the concept of a request and response pipeline to the WCF Data Services client. In this release we have made it possible for developers to directly handle the request and response streams themselves. This was built on top of ODataLib’s IODataRequestMessage and IODataResponseMessage framework, which specifies how requests and responses are sent and received. With this addition, developers can tweak the request and response streams or even completely replace the HTTP layer if they so desire. We are working on a blog post and sample documenting how to use this functionality.
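Until that post is ready, here is a rough sketch of the shape of this feature (the OnMessageCreating hook is the 5.6.0 entry point; the context name, header name, and HttpWebRequestMessage constructor usage shown here are illustrative):

```csharp
var context = new MyServiceContext(new Uri("http://example.org/MyService.svc/"));

// OnMessageCreating lets you supply the transport-layer message yourself.
context.Configurations.RequestPipeline.OnMessageCreating = args =>
{
    // HttpWebRequestMessage is the default HTTP implementation shipped
    // with the client; wrap it, tweak it, or return your own
    // DataServiceClientRequestMessage to replace the HTTP layer entirely.
    var message = new HttpWebRequestMessage(args);
    message.SetHeader("X-Diagnostics", "trace-me");
    return message;
};
```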

Breaking Changes

In this release we made a few breaking changes. As these bugs are tremendously unlikely to affect anyone, we opted not to increment the major version number, but we wanted everyone to be aware of what they were:

  • Developers using the reading/writing pipeline must write to Entry rather than Entity on the WritingEntryArgs
  • Developers should no longer expect to be able to modify the navigation property source in OnNavigationLinkStarting and OnNavigationLinkEnding
  • Developers making use of the DisablePrimitiveTypeConversion knob may see a minor change in their JSON payloads; the knob previously only worked for the ATOM format

Bug Fixes

  • Fixes a performance issue with models that have lots of navigation properties
  • Fixes a performance issue with the new JSON format when creating or deleting items
  • Fixes a bug where DisablePrimitiveTypeConversion would cause property type annotations to be ignored in the new JSON format
  • Fixes a bug where LoadProperty does not remove elements from a collection after deleting a link
  • Fixes an issue where the URI Parser would not properly bind an action to a collection of entities
  • Improves some error messages

Known Issues

The NuGet runtime in Visual Studio needs to be 2.0+ for Add Service Reference to work properly. If you are having issues with Add Service Reference in Visual Studio 2012, please ensure that NuGet is up-to-date.

Using WCF Data Services 5.6.0 with Entity Framework 6+


And now for some exciting news: you can finally use WCF Data Services with Entity Framework 6+! Today we are uploading a new NuGet package called WCF Data Services Entity Framework Provider. This NuGet package bridges the gap between WCF Data Services 5.6.0 and Entity Framework 6+. We were able to build this provider as an out-of-band provider (that is, a provider that ships apart from the core WCF DS stack) because of the public provider work we did recently.

Upgrading an existing OData service to EF 6

If you are upgrading an existing OData service to Entity Framework 6 or greater, you will need to do a couple of things:

  1. Install the WCF Data Services Entity Framework Provider NuGet package. Since this package has a dependency on WCF Data Services 5.6.0 and Entity Framework 6 or greater, some of the other NuGet packages in your project may be upgraded as well.
  2. Replace the base type of your DataService. For EF 5 or below, your data service should inherit from DataService<T> where T is a DbContext or ObjectContext. For EF 6 or greater, your data service should inherit from EntityFrameworkDataService<T> where T is a DbContext. See What’s the difference between DataService and EntityFrameworkDataService below for more details.
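After the package is installed, the change amounts to one line on the service class. A minimal sketch (the service and context names are hypothetical; EntityFrameworkDataService<T> ships in the WCF Data Services Entity Framework Provider package):

```csharp
using System;
using System.Data.Services;
using System.Data.Services.Common;

// EntityFrameworkDataService<T> comes from the WCF Data Services
// Entity Framework Provider NuGet package; NorthwindContext is a
// hypothetical EF 6 DbContext.
public class NorthwindService : EntityFrameworkDataService<NorthwindContext>
{
    // The same InitializeService you would write for a DataService<T>.
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V3;
    }
}
```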

Creating a new OData service with EF 6

If you are creating a new OData service and would like to use Entity Framework 6 or greater, you will need to follow similar steps:

  1. Create your new project. I typically use an ASP.NET Empty Web Application for this, but you can use whatever you want. Note that if you do use the empty template, you may need to create an App_Data folder for Entity Framework to work properly with LocalDB.
  2. Install the WCF Data Services Entity Framework Provider NuGet package. Since this package has a dependency on WCF Data Services 5.6.0 and Entity Framework 6 or greater, some of the other NuGet packages in your project may be upgraded as well.
  3. Add a new WCF Data Service. It’s best if you ensure that your tooling is up-to-date as we occasionally fix bugs in the item template. Our latest tooling installer was released with WCF DS 5.6.0. It can be downloaded here.
  4. Replace the base type of the DataService that was generated by the item template. For EF 6 or greater, your data service should inherit from EntityFrameworkDataService<T> where T is a DbContext. See What’s the difference between DataService and EntityFrameworkDataService below for more details.

What’s the difference between DataService<T> and EntityFrameworkDataService<T>?

Historically the WCF DS stack required all WCF DS-based OData services to inherit from DataService<T>. Internally, the data service would determine whether the service should use the in-box EF provider, the in-box Reflection provider, or a custom provider. When we added support for EF 6, we utilized the new public provider functionality to allow the provider to ship separately. This will allow us, for instance, to use WCF DS 5.6.0 with either EF 5, 6, or some future version. However, the new public provider functionality comes with a little bit of code you need to write. Since that code should be the same for every default EF 6 WCF DS provider, we went ahead and included a class that does this for you. EntityFrameworkDataService<T> inherits from DataService<T> and implements all the code you would need to implement otherwise. By shipping this additional class, we literally made the upgrade process as simple as changing the base type of your service.

Feedback please

We are heads down on getting our stacks updated to support OData v4, so we’ve had very limited resources to focus on testing this provider. We have a few automated tests and have tried a number of ad-hoc tests. That said, our coverage could be better, so… we’re going to rely on you, our dear customers, to provide feedback on whether or not this provider works in your situation. If we don’t hear anything back, we’ll go ahead and release the provider in a week or so.

Thanks,
The OData Team

New version of OData Validator


The OData team has been working on updating the OData Validator tool to support the new JSON format validation. We are pleased to announce that the tool now supports validating your V3 service for all three formats – ATOM, old JSON format (aka JSON Verbose) and the new JSON format. We are also working on adding OData V4 service validation support. We will continue adding more validation rules over the next few months.

Please check out the tool here: http://services.odata.org/validation/ and send us your feedback.

About the tool

OData Validator is an OData protocol validation tool. We have gone through the OData spec and created a set of rules to validate against a given OData payload. You can point the tool to your service and choose what you want to validate. The tool will run the right set of rules against the returned payload and tell you which ones passed and which ones failed. The tool will also link you to the relevant spec section so you can open the spec to see what it says. The tool supports validating various OData payloads like service document, metadata document, feed, entity and error payloads.

New and improved EULA!


TL;DR: You can now (legally) use our .NET OData client and ODataLib on Android and iOS.

Backstory

For a while now we have been working with our legal team to improve the terms you agree to when you use one of our libraries (WCF Data Services, our OData client, or ODataLib). A year and a half ago, we announced that our EULA would include a redistribution clause. With the release of WCF Data Services 5.6.0, we introduced portable libraries for two primary reasons:

  1. Portable libraries reduce the amount of duplicate code and #ifdefs in our code base.
  2. Portable libraries increase our reach through third-party tooling like Xamarin (more on that later).

It took some work to get there, and we had to make some sacrifices along the way, but we are now focused exclusively on portable libraries for client-side code. Unfortunately, our EULA still contained a clause that prevented the redistributable code from being legally used on a platform other than Windows.

OData and Xamarin: Extending developer reach to many platforms

We are really excited about Microsoft’s new collaboration with Xamarin. As Soma says, this collaboration will allow .NET developers to broaden the reach of their applications and skills. This has long been the mantra of OData – a standardized ecosystem of services and consumers that enables consumers on any platform to easily consume services developed on any platform. This collaboration will make it much easier to write a shared code base that allows consumption of OData on Windows, Android or iOS.

EULA change

To fully enable this scenario, we needed to update our EULA. We, along with several other teams at Microsoft, are rolling out a new EULA today that has relaxed the distribution requirements. Most importantly, we removed the clause that prevented redistributable code from being used on Android and iOS.

The new EULA is effective immediately for all of our NuGet packages. This means that (even though we already released 5.6.0) you can create a Xamarin project today, take a new dependency on our OData client, and legally run that application on any platform you wish.

Thanks

As always, we really appreciate your feedback. It frequently takes us some time to react, but the credit for this change is due entirely to customer feedback. We hear you. Keep it coming.

Thanks,
The OData Team

