Today I attended a web seminar from Microsoft entitled “Pragmatic Patterns for Architects – Patterns for moving to the Cloud”, which was actually an introduction to approaches to deploying applications into a cloud environment, along with related patterns. Here, for your enjoyment or argument, is a quick summary of that seminar.
There weren’t any particularly new cloud approaches/patterns mentioned in this seminar, but I really liked some of the visual models they used to describe the patterns and identify key architectural issues. Azure was mentioned and referenced, but the seminar was largely agnostic about which cloud platform is being used.
The seminar covered 5 different "patterns" (really approaches) to bringing applications to the cloud. An underlying assumption (which I disagree with) was that the cloud is just about moving existing applications, not about building special “cloud” applications. However, for the scenarios they described, the 5 patterns provided a good introduction to thinking about what it takes to move applications into the cloud.
The 5 patterns they described are:
- Transference – Moving applications or services from on-premise operation to cloud operation for economic, consolidation, or prototyping purposes. This is really the simplest of the patterns, but it only works when applications are generally stand-alone, with no major integration to on-premise services.
- Scale & Multi-Tenancy – For this they described applications that are subject to possible rapid growth (e.g. a viral web site) that cannot be predicted. They indicated that the primary driver for this approach is the economics of avoiding over-capacity.
- Burst Computing – This approach is also driven by the economics of avoiding over-capacity.
- Elastic Storage – The example they used for this approach was storing/sharing MRI images in healthcare, where data growth could occur exponentially. They described the economics for this being driven by storage management costs, not storage technology costs.
- Inter-organization Communication – The example they used to describe this approach was a clinical trial application, requiring a large and changing number of organizations to use the application. The primary driver they mentioned for this was the economics of infrastructure management.
The presenters did point out that these were only a “starting set of patterns”, not an exhaustive list.
What I liked in the presentation was the simple visual model they used to describe how the patterns are realized. It’s an almost typical stack model, except that the model is used to indicate which components (and when) operate on-premise, in a hosted environment, or in the cloud.
One of their early examples described how to use a Security Token Service to provide brokered authentication between a hosted app and an on-premise Active Directory server. (Yes, the first example was not a cloud-based application, just a service.) Not surprisingly, the example revolved around hosted Exchange:
What was interesting here was their use of lines, color, callouts, and color animation to provide a clearer example of how this application operates and integrates across layers and across infrastructure/services providers. In the image above, the red lines/outlines were part of a visual sequence of how authentication (or auth rejection) is accomplished in this model.
Their ‘takeaway’ for the Transference pattern was to watch for dependencies across different providers. In some cases (with legacy apps, I suspect many cases), the level of refitting or customization of an existing app may be too large to justify a move to the cloud. However, I suspect this could be an opportunity for CSC to build/provide services that make it easier to move legacy applications to the cloud while maintaining the illusion of direct integration with any remaining on-premise or hosted systems/components.
They made a point to describe this as the “opposite of the pattern of predicting the growth of a web site”. They walked through an example (see diagram below) where the number of servers being allocated was easily configured (up or down) in the Azure app configuration file. (They did not provide any example of how this scaling could be done dynamically based on load.)
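For readers who haven’t seen one: the Azure app configuration file they were referring to is (as I understand it) the service configuration `.cscfg`, where the instance count is just an attribute you edit. A rough sketch of what that looks like (the service and role names here are my own placeholders, not from the seminar):

```xml
<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyCloudApp"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <!-- Scaling up or down is a matter of editing this count and redeploying the config -->
    <Instances count="4" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```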
They did point out a key aspect of cloud that doesn’t get mentioned enough (except by Mark Masterson): You can turn off what you don’t need. Deprovisioning in the cloud is much easier and cost effective than trying to deal with unused hardware.
For their Burst Computing pattern example, they got more Azure-specific: Azure supports the notion of "Worker Roles" for processing (i.e. batch or background processing). Users would never interact directly with these worker roles, but rather use the presentation (‘web role’) components to send/receive data. The data or commands would then be sent to/from the worker roles via tables or queues. Here’s the diagram they used to describe this setup, using a sample application “PrimeSolvr”:
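Stripped of the Azure specifics, the web-role/worker-role split is really just a producer/consumer arrangement over a shared queue and table. A minimal sketch in plain Python (the role names and the prime-checking task are my own stand-ins for something like PrimeSolvr, not the actual Azure APIs):

```python
import queue
import threading

work_queue = queue.Queue()   # stands in for an Azure queue
results_table = {}           # stands in for an Azure table

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def web_role(n):
    """Front end: accepts a request and enqueues it; never does the work itself."""
    work_queue.put(n)

def worker_role():
    """Background processor: pulls tasks off the queue, writes results to the table."""
    while True:
        n = work_queue.get()
        if n is None:             # sentinel: shut the worker down
            work_queue.task_done()
            break
        results_table[n] = is_prime(n)
        work_queue.task_done()

# Spin up one worker; in the burst pattern you would add more under load.
worker = threading.Thread(target=worker_role)
worker.start()

for request in (17, 18, 19):
    web_role(request)

work_queue.put(None)   # stop the worker after the pending work
worker.join()
```

The point of the shape is that the web role stays responsive no matter how heavy the background work gets, and the number of worker roles can grow or shrink independently of the front end.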
I found it strange that they didn’t use the popular term “cloudbursting”. Someone have a trademark on that already?
With Elastic Storage, they described the common problems of file/DB storage and the limits of on-premise storage growth, including the issue of server affinity of data. The examples they dived into were more about relatively independent data objects, such as blobs and simple tables. This reminded me of Greenblatt’s idea of Moby Address Space that came out many years ago (in my old Lisp Machine/Symbolics days).
In their "MRI Image" storage example, they described that the application (primarily the front-ends) would need to take the access model into consideration ("code near" vs. "code far"), e.g. how to chunk the data appropriately for faster response and to avoid timeouts.
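The chunking point is worth making concrete: in a "code far" setup, pulling a large image in one request risks timeouts, so the client fetches it in fixed-size pieces instead. A minimal sketch of the pattern (the names and chunk size are my own; this illustrates the idea, not any Azure API):

```python
from io import BytesIO

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB per request; tune to latency/timeout limits

def read_in_chunks(stream, chunk_size=CHUNK_SIZE):
    """Yield successive fixed-size chunks so no single read can hit a transfer timeout."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Simulate a large remote blob (e.g. an MRI image) with an in-memory stream.
blob = BytesIO(b"x" * (10 * 1024 * 1024))  # 10 MB payload
chunks = list(read_in_chunks(blob))
reassembled = b"".join(chunks)
```

A "code near" front end could afford bigger chunks (or a single read), since it sits next to the data; "code far" pushes you toward smaller chunks and more round trips.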
They did allude to the availability of relational data structures in Azure through TDS (Tabular Data Stream); something I’ll have to look at more closely later.
The last pattern focused on applications that have a large or unpredictable number of organizations using and collaborating with a complex application. Here they described an example (more of a pub/sub model) that uses the .NET Service Bus to route messages coming in from different orgs. They mentioned that this still required a polling model for notifying receivers. There doesn’t seem to be a formal ‘push’ mechanism in Azure yet (but I’m just guessing, as I haven’t dived into Azure yet).
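The polling model they described amounts to each subscriber repeatedly asking the bus “anything for me?” rather than being notified. A toy sketch of that model in plain Python (the broker class and topic names are my own inventions, not the .NET Service Bus API):

```python
from collections import defaultdict, deque

class PollingBus:
    """Toy message bus: publishers append, subscribers must poll for deliveries."""

    def __init__(self):
        self.queues = defaultdict(deque)   # one queue per (topic, subscriber)

    def subscribe(self, topic, subscriber):
        self.queues[(topic, subscriber)]   # create the subscriber's empty queue

    def publish(self, topic, message):
        # Fan the message out to every subscriber registered on this topic.
        for (t, _), q in self.queues.items():
            if t == topic:
                q.append(message)

    def poll(self, topic, subscriber):
        """Return the next pending message, or None; no push, callers keep asking."""
        q = self.queues[(topic, subscriber)]
        return q.popleft() if q else None

bus = PollingBus()
bus.subscribe("trial-results", "org-a")
bus.subscribe("trial-results", "org-b")
bus.publish("trial-results", "patient batch 42")
```

Each org polls on its own schedule, which is exactly the latency/overhead trade-off that makes a real push mechanism attractive.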
In describing the workflow support in Azure, the presenters described the possibilities of hybrid models, where components of an application can be hosted across all three major infrastructure provider types.
The web seminar was hosted/presented by Microsoft’s Strategy & Architecture Council, part of Microsoft’s MSDN Architecture Center.
As this was an introductory seminar, they didn’t go into the many aspects/issues still being worked out with cloud computing (e.g. security, compliance, etc.). Still, it was a useful seminar (at least for me), particularly since it gave me some great ideas on how to visualize ‘cloud app’ architectures. Unless you’re already a guru with cloud systems, I recommend this seminar; the recording should be online soon (go to the SAC blog to find out when).