Nov 24, 2008
 

SondreB asks

I’m working on a scenario where I need to process mesh data at intervals. My concern is how I can do delegated authorization in a secure way. How and where do I store the user credentials (or their auth token) in my Windows Azure service? Is there a way of doing interval based processing in the cloud other than Windows Azure (excluding offerings from other third parties)?

You have three approaches to process data in a user’s Live Mesh at regular intervals: 

  • An application that runs on the client machine
  • A service that runs on a traditional server
  • A service that runs in the cloud.

Client Application 

A client application running on the local machine can access the currently logged-in user’s mesh data. Your app could be written as a desktop .NET Windows application, using the Live Framework SDK to connect to the user’s mesh. Your desktop app will need to present valid authentication credentials to the Live Framework APIs; to avoid having to store and protect the user’s credentials, use the LiveID SDK. If the user checks “remember me” and “remember my password”, your app can log in on the user’s behalf using the LiveID API’s cached credentials.
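
Here’s a rough sketch of what such a desktop app might look like using the Live Framework .NET library. The namespace, type, and member names below (LiveOperatingEnvironment, Connect, Mesh.MeshObjects) are recalled from CTP-era samples and should be treated as placeholders rather than the definitive API; check the SDK documentation for the exact signatures.

    // Rough sketch only: namespaces, types, and overloads are assumptions based
    // on CTP-era Live Framework samples and may differ from your SDK build.
    using System;
    using System.Net;
    using Microsoft.LiveFX.Client;   // assumed namespace

    class MeshChecker
    {
        static void Main()
        {
            // In a real app these come from the LiveID SDK's cached sign-in
            // ("remember me" / "remember my password"), not hard-coded text.
            var credentials = new NetworkCredential("user@example.com", "password");

            var environment = new LiveOperatingEnvironment();
            environment.Connect(credentials);   // assumed overload

            // Enumerate the user's mesh objects and do whatever processing you need.
            foreach (var meshObject in environment.Mesh.MeshObjects.Entries)
            {
                Console.WriteLine(meshObject.Resource.Title);   // assumed property names
            }
        }
    }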

You could implement your client app as a mesh-enabled web application, written in HTML+JavaScript or Silverlight. User authentication is automatic in this situation, since the user must be logged in before they can run your app. If the user has installed the Live Framework Client on their local machine, they can run your app on their local desktop just as they would a normal desktop app. When your app is running on the local desktop, it defaults to talking to the local cache of mesh data, which is synced frequently with the cloud while the machine is connected to the network.

These client-side approaches may be sufficient for simple scenarios. Client-side code is convenient to write and doesn’t require allocating or managing server resources, but it is at the mercy of the local machine and its network connection. If your users tend to leave their machines on most of the time, this might be good enough, but if you need true round-the-clock processing you need a server-side solution.

Traditional Server Application

You could implement mesh data processing in a traditional web server architecture, such as IIS with ASP.NET or Apache with PHP, Perl, or Python scripting.  This will require delegated authorization, and delegated auth requires that your server have a stable domain name.

Delegated authorization requires that the end user approve, or opt into, allowing your application to access (parts of) their mesh data. This approval must take place on a special Microsoft-branded web page, so the user knows they are telling Microsoft that it’s OK for you to access their data.

The flow typically looks something like this:

  1. Your web page explains what data you want access to, and what you will do with it, and then forwards the user to a Windows Live authorization page
  2. The LiveID login page appears if the user is not already logged in, then forwards the user on to the authorization page
  3. The user reads the information on the Windows Live web page and chooses to allow or deny access to your site (your domain).
  4. Windows Live then forwards the user back to a landing page on your web site.  If the user allowed access, this response will contain an authorization token.
  5. You store the authorization token in your system, usually paired with the user’s name in your system, so you will know which auth token to use to access this user’s data in the future (see the sketch after this list).
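
Here’s a minimal sketch of the landing page from steps 4 and 5, written as an ASP.NET code-behind. The “ConsentToken” form field name, the page names, and the SaveDelegationToken helper are all assumptions for illustration; check the delegated authentication documentation for the exact response format.

    // Landing page sketch (ASP.NET code-behind). The "ConsentToken" field name,
    // the redirect targets, and SaveDelegationToken are assumptions.
    using System;
    using System.Web.UI;

    public partial class AuthCallback : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Step 4: Windows Live sends the user back to this page.
            string consentToken = Request.Form["ConsentToken"];
            if (string.IsNullOrEmpty(consentToken))
            {
                // The user denied access (or something went wrong).
                Response.Redirect("~/AccessDenied.aspx");
                return;
            }

            // Step 5: pair the token with your site's notion of who the user is
            // and persist it for later background processing.
            SaveDelegationToken(User.Identity.Name, consentToken);
            Response.Redirect("~/Default.aspx");
        }

        // Hypothetical helper; see the storage notes below for how to protect it.
        private void SaveDelegationToken(string userName, string token) { /* ... */ }
    }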

How you store this auth token is up to you, but it should be under physical and network access protection.  At a minimum, you should encrypt whatever table or file you store the auth token in and restrict access to the file to only the administrators and web services that need to access it. You should also avoid transmitting the auth token across the network except when needed, and only via a secure SSL connection.
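
For example, on Windows you could encrypt the token at rest with the Data Protection API (DPAPI) before writing it to that table or file. A minimal sketch (the actual read and write of your storage file or table is left out):

    // Encrypt the auth token at rest using DPAPI (reference System.Security.dll).
    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class TokenProtection
    {
        public static byte[] Protect(string authToken)
        {
            byte[] plaintext = Encoding.UTF8.GetBytes(authToken);
            // Scope the key to the service account that runs the web app.
            return ProtectedData.Protect(plaintext, null, DataProtectionScope.CurrentUser);
        }

        public static string Unprotect(byte[] encrypted)
        {
            byte[] plaintext = ProtectedData.Unprotect(encrypted, null, DataProtectionScope.CurrentUser);
            return Encoding.UTF8.GetString(plaintext);
        }
    }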

The auth token cryptographically combines your domain name with the user’s LiveID username.  The auth token can only be used on requests made from your domain name, can only access that particular user’s data, and can only access the kinds of data the user gave you access to.  The user can revoke your access (invalidate the auth token) at any time using the Windows Live web site.

Once you have the user’s authorization to access their data with the auth token, accessing their mesh data is fairly straightforward. You can use the Live Framework SDK to make requests against the user’s mesh data from within an ASP.NET server app, or you can make plain old REST-style HTTPS requests to the Live Services cloud using your favorite server scripting tools. You add the delegated authorization token for the particular user as a header on outgoing requests.
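
Here’s a sketch of the plain REST approach. The endpoint URL and the Authorization header format shown are my best recollection of the CTP conventions and should be treated as assumptions; check the Live Framework resource model documentation for the authoritative values.

    // Plain REST-style request sketch. The URL and the Authorization header
    // format are assumptions; use the values documented for the cloud LOE.
    using System;
    using System.IO;
    using System.Net;

    static class MeshRest
    {
        public static string GetMeshObjects(string delegationToken)
        {
            // Placeholder endpoint for the user's cloud Live Operating Environment.
            var request = (HttpWebRequest)WebRequest.Create(
                "https://user-ctp.windows.net/V0.1/Mesh/MeshObjects");
            request.Method = "GET";

            // Attach the delegated auth token; the header name/format is an assumption.
            request.Headers.Add("Authorization", "DelegatedToken dt=\"" + delegationToken + "\"");

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();   // feed of the user's mesh objects
            }
        }
    }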

And finally, you can schedule your service to check the user’s mesh data at regular intervals, using cron, AT, or whatever scheduling tool is appropriate.
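
On a Unix-style server, that could be as simple as a crontab entry like the one below (the script path is hypothetical); on Windows you could register an equivalent job with AT or the Task Scheduler.

    # Run the (hypothetical) mesh-checking job every 15 minutes.
    */15 * * * * /usr/local/bin/check-mesh.sh >> /var/log/check-mesh.log 2>&1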

Cloud Service

You can implement your mesh-checking logic in a hosted cloud environment, such as Windows Azure, Amazon EC2, Google App Engine, or a variety of other hosting providers. You will still need to use delegated authorization as in the traditional server scenario, walking through the same steps to obtain the user’s permission and be issued an auth token. The code you write to access the user’s mesh data with the auth token is pretty much the same as what you would write in the traditional server scenario.

Data security is a core requirement of every app, but especially of cloud-hosted applications. I think you’ll find data security is a core component of Windows Azure’s cloud data services. You can store the username + auth token pairing in a table in Azure’s cloud data service.
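
A sketch of what such an entity might look like is below. Azure tables key every entity by a partition key and a row key; the actual insert call depends on which storage client library (or raw REST interface) you use, so the persistence step is omitted here.

    // Entity sketch for storing the username + delegated auth token pairing in
    // an Azure table. PartitionKey/RowKey are required by Azure table storage;
    // how you persist the entity depends on your storage client library.
    public class UserTokenEntity
    {
        public string PartitionKey { get; set; }               // e.g. a constant like "tokens"
        public string RowKey { get; set; }                     // e.g. the user's name in your system
        public string EncryptedDelegationToken { get; set; }   // protect before storing
    }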

Scalability and cost management are where running as a cloud service has a distinct advantage over the traditional server scenario.  Scalability is easy: having the option to fire up additional instances of your cloud service by the tens or thousands as needed to meet demand is a key aspect of large scale cloud hosting that traditional servers in the closet or co-lo’s can’t begin to touch.

If your application needs to be checking the user’s mesh data every 5 minutes all day every day, the operational differences between cloud and traditional server will be relatively small, and perhaps offer no cost savings at all. 

If your application needs to run only during business hours or only on certain days of the week, you can do something with a cloud service that you can’t do with a traditional server, hosted VM or co-lo:  You can turn the cloud service off when you don’t need it, and stop paying for it until you turn it on again.

Anticipating a holiday rush?  You can scale up your customer capacity to massive volume during the holiday shopping crunch, then get rid of it in January and coast by on a shoestring ops budget until the next holiday surge.

If your mesh-checking app needs to check for changes in the user’s mesh data every 5 minutes, your service will be running pretty much non-stop.  If you only need to check 4 times a day, though, you might just turn your cloud service off until the next update interval.

When designing your mesh-checking architecture, be sure to take a close look at the notification features offered by the Live Framework.  Notifications and data-sync metadata could significantly reduce the frequency and amount of work your app needs to do to detect changes.

  6 Responses to “Batch Processing Live Mesh Data with Windows Azure or Live Framework”

  1. Great info here Danny! Where can I find more info on running my Mesh-enabled app from my local desktop?

    I have the local client MOE running (can browse the resources in it using the LiveFxResourceBrowser), but don’t see anywhere I can run my app locally. I have an icon on my Windows desktop for the app, but that runs the (online) live desktop version (prompts me to log in to Live Mesh to get it running).

    Appreciate in advance any additional info here.

    Thanks!

  2. Hi Dan,

    You do need to log into the local client before you can run a mesh app from the local desktop. In the current build, this login step may still require an internet connection to complete.

    To run mesh apps on the local desktop while completely offline, you need to configure your local client to cache your login credentials. While online, log into your local client and check the “remember me” and “remember my password” checkboxes. After that, you can go offline. When you launch the mesh app from the local desktop shortcut and you have not logged into the local client, it will prompt you to log in, but your login credentials will already be filled in and no round trip to a LiveID auth server is required.

  3. Thanks Danny – got the offline client working. I could have sworn when I was first running it from the Windows desktop icon, it was running the online version (in a browser) when I was online, and running the local version (in the MeshAppHost.exe frame) when I was offline.

    Now today the Windows desktop icon is always running the local version in the app host.

    Was I dreaming that before? Is that the standard behavior, that the Windows desktop client should always run the local version? (In the Silverlight code I see that LiveOperatingEnvironment.IsLocalConnection is always true.)

    Thanks a lot!

  4. Hi Dan,

    Yes, the shortcuts on your local desktop always run the mesh app in the local client.

  5. Thanks for the great response Danny, I like the in depth analysis and details you give in the examples.
