I’m working on a scenario where I need to process mesh data at intervals. My concern is how I can do delegated authorization in a secure way. How and where do I store the user credentials (or their auth token) in my Windows Azure service? Is there a way of doing interval-based processing in the cloud other than Windows Azure (excluding offerings from other third parties)?
You have three approaches to process data in a user’s Live Mesh at regular intervals:
- An application that runs on the client machine
- A service that runs on a traditional server
- A service that runs in the cloud
A client application running on the local machine can access the currently logged-in user’s mesh data. Your app could be written as a .NET Windows desktop application, using the Live Framework SDK to connect to the user’s mesh. Your desktop app will need to present valid authentication credentials to the Live Framework APIs; to avoid having to store and protect the user’s credentials yourself, use the LiveID SDK. If the user checks “remember me” and “remember my password”, your app can log in on the user’s behalf using the LiveID API’s cached credentials.
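The interval-processing part of the client approach is just a scheduling loop. Here is a minimal sketch in Python (illustrative only; a real Live Framework client would be .NET) where `check` stands in for whatever function actually inspects the user’s mesh data:

```python
import sched
import time

def poll_mesh(check, interval_seconds, iterations):
    """Invoke check() every interval_seconds, for a fixed number of iterations.

    check is a placeholder for the real mesh-inspection routine;
    a real client would loop until shutdown rather than count iterations.
    """
    scheduler = sched.scheduler(time.time, time.sleep)
    for i in range(iterations):
        scheduler.enter(i * interval_seconds, 1, check)
    scheduler.run()

# Example run: record each tick instead of touching real mesh data.
ticks = []
poll_mesh(lambda: ticks.append(time.time()), interval_seconds=0.01, iterations=3)
print(len(ticks))  # 3 checks performed
```

The obvious weakness, as noted above, is that this loop dies with the user’s machine or network connection.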
These client-side approaches may be sufficient for simple scenarios. Client-side code is convenient to write and doesn’t require allocating or managing server resources, but it is at the mercy of the local machine and its network connection. If your users tend to leave their machines on most of the time, this might be good enough, but if you need true round-the-clock processing you need a server-side solution.
Traditional Server Application
You could implement mesh data processing in a traditional web server architecture, such as IIS with ASP.NET or Apache with PHP, Perl, or Python scripting. This will require delegated authorization, and delegated auth requires that your server have a stable domain name.
Delegated authorization requires that the end user approve, or opt into, allowing your application to access (parts of) their mesh data. This approval must take place on a special Microsoft-branded web page, so that the user knows they are telling Microsoft it’s OK for you to access their data.
The flow typically looks something like this:
- Your web page explains what data you want access to and what you will do with it, then forwards the user to a Windows Live authorization page
- The LiveID sign-in page will appear first if the user is not already signed in, then forward the user to the authorization page
- The user reads the information on the Windows Live web page and chooses to allow or deny access to your site (your domain).
- The user’s response forwards the user back to a landing page on your web site. If the user allowed access, this response will contain an authorization token.
- You store the authorization token in your system, usually paired with the user’s name within your system so you will know which auth token to use to access this user’s data in the future.
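The redirect/callback shape of the steps above can be sketched as follows. Note that the consent URL, parameter names (`appid`, `ru`, `offers`, `delt`), and offer string below are illustrative placeholders, not the actual Windows Live endpoints or field names:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical consent endpoint -- the real Windows Live delegated-auth
# URL and parameters differ; this only shows the shape of the flow.
CONSENT_URL = "https://consent.live.example/Delegation.aspx"

def build_consent_redirect(app_id, offers, callback_url):
    """Steps 1-2: send the user to the Microsoft-branded approval page."""
    query = urlencode({
        "appid": app_id,        # identifies your application/domain
        "ru": callback_url,     # landing page the user returns to
        "offers": offers,       # which parts of the mesh you want
    })
    return f"{CONSENT_URL}?{query}"

def extract_auth_token(callback_request_url):
    """Steps 4-5: pull the delegated auth token out of the callback URL."""
    params = parse_qs(urlparse(callback_request_url).query)
    tokens = params.get("delt")  # illustrative parameter name
    return tokens[0] if tokens else None

url = build_consent_redirect("my-app-id", "MeshFiles.Read", "https://example.com/landing")
token = extract_auth_token("https://example.com/landing?delt=abc123")
print(token)  # abc123
```

If the user denies access, the callback arrives without a token and `extract_auth_token` returns `None`.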
How you store this auth token is up to you, but it should be protected both physically and at the network level. At a minimum, you should encrypt whatever table or file you store the auth token in, and restrict access to that file to only the administrators and web services that need it. You should also avoid transmitting the auth token across the network except when needed, and then only over a secure SSL connection.
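As a minimal sketch of the access-restriction part of that advice (in Python with SQLite for illustration), the token store can be locked down to its owning account; encryption at rest, e.g. with your platform’s data-protection API or an encrypted volume, would layer on top of this:

```python
import os
import sqlite3
import stat
import tempfile

def store_auth_token(db_path, username, auth_token):
    """Persist a (username, auth token) pair in a file only its owner can read.

    A real deployment would also encrypt the token at rest; this only
    demonstrates restricting file access to the owning service account.
    """
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tokens (username TEXT PRIMARY KEY, auth_token TEXT)"
    )
    conn.execute("INSERT OR REPLACE INTO tokens VALUES (?, ?)", (username, auth_token))
    conn.commit()
    conn.close()
    # rw for owner only (0600): admins and the web service, nobody else.
    os.chmod(db_path, stat.S_IRUSR | stat.S_IWUSR)

def load_auth_token(db_path, username):
    """Look up the stored token for a user, or None if we have no grant."""
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT auth_token FROM tokens WHERE username = ?", (username,)
    ).fetchone()
    conn.close()
    return row[0] if row else None

db_path = os.path.join(tempfile.gettempdir(), "tokens.db")
store_auth_token(db_path, "alice@example.com", "delt-abc123")
print(load_auth_token(db_path, "alice@example.com"))  # delt-abc123
```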
The auth token cryptographically combines your domain name with the user’s LiveID username. The auth token can only be used on requests made from your domain name, can only access that particular user’s data, and can only access the kinds of data the user gave you access to. The user can revoke your access (invalidate the auth token) at any time using the Windows Live web site.
Once you have the user’s authorization to access their data with the auth token, accessing their mesh data is fairly straightforward. You can use the Live Framework SDK to make requests against the user’s mesh data from within an ASP.NET server app, or you can make plain old REST-style HTTPS requests to the Live Services cloud using your favorite server scripting tools. You add the delegated authorization token for the particular user as a header on outgoing requests.
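The request shape is simple: build an HTTPS request and attach the user’s token as a header. In this Python sketch the resource URL and the exact header name/format are illustrative assumptions; the real values come from the Live Framework documentation:

```python
import urllib.request

def build_mesh_request(resource_url, auth_token):
    """Attach a per-user delegated auth token to an outgoing HTTPS request.

    The header name and token format here are placeholders -- check the
    Live Framework docs for the actual scheme. The request is built but
    not sent, since the endpoint is hypothetical.
    """
    request = urllib.request.Request(resource_url)
    request.add_header("Authorization", f'DelegatedToken delt="{auth_token}"')
    request.add_header("Accept", "application/atom+xml")  # feed-style resources
    return request

req = build_mesh_request("https://user-ctp.example.net/V0.1/Mesh", "delt-abc123")
print(req.get_header("Authorization"))
```

Sending it with `urllib.request.urlopen(req)` (or the equivalent in your scripting stack) is all that remains once the URL and header match the real service.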
And finally, you can schedule your service to check the user’s mesh data at regular intervals, using cron, the Windows AT command, or whatever scheduling tool is appropriate.
Cloud-Hosted Service
You can implement your mesh-checking logic in a hosted cloud environment, such as Windows Azure, Amazon EC2, Google App Engine, or a variety of other hosting providers. You will still need to use delegated authorization as in the traditional server scenario, walking through the same steps to obtain the user’s permission and be issued an auth token. The code you write to access the user’s mesh data using the auth token is pretty much the same as what you would write in the traditional server scenario.
Data security is a core requirement of every app, and especially of cloud-hosted applications; it is also a core component of Windows Azure’s cloud data services. You can store the username + auth token pair in a table in Azure’s cloud data service.
Scalability and cost management are where running as a cloud service has a distinct advantage over the traditional server scenario. Scalability is easy: being able to fire up additional instances of your cloud service by the tens or thousands as needed to meet demand is a key aspect of large-scale cloud hosting that traditional servers in the closet or co-lo can’t begin to touch.
If your application needs to check the user’s mesh data every 5 minutes all day, every day, the operational differences between cloud and traditional server will be relatively small, and perhaps offer no cost savings at all.
If your application needs to run only during business hours or only on certain days of the week, you can do something with a cloud service that you can’t do with a traditional server, hosted VM or co-lo: You can turn the cloud service off when you don’t need it, and stop paying for it until you turn it on again.
Anticipating a holiday rush? You can scale up your customer capacity to massive volume during the holiday shopping crunch, then get rid of it in January and coast by on a shoestring ops budget until the next holiday surge.
If your mesh-checking app needs to check for changes in the user’s mesh data every 5 minutes, your service will be running pretty much non-stop. If you only need to check 4 times a day, though, you might just turn your cloud service off until the next update interval.
When designing your mesh-checking architecture, be sure to take a close look at the notification features offered by the Live Framework. Notifications and data sync metadata could significantly reduce the frequency and amount of work your app needs to do to detect changes.