Sunday, October 24, 2010

Windows Azure and Cloud Computing Posts for 10/22/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

•• Updated 10/24/2010 with new articles marked ••.

•   Updated 10/23/2010 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the post as a single page, and then click the link for the section you want.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are also available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

See David Pallman’s Azure Storage Explorer 4 Beta Now Available (with source code) post of 10/23/2010 in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Alex James (@adjames) tweeted on 10/24/2010:

Looks like @logan_barnett is going to create a python provider of #OData. That would make it .NET, Java, Ruby and next Python...

Good news!


Zoiner Tejada subtitled his Hosting WCF Services on Azure 101 tutorial of 10/22/2010 “Leveraging Web and Worker Roles for your WCF Services”:

Historically in this column we’ve focused on designing, building, and running WCF and Workflow Services on premises. This time, let's examine how you can leverage your existing experience in WCF by building services that run on Windows Azure. We will approach this by walking through the development lifecycle of a simple WCF service that runs within Windows Azure. The purpose is twofold: encouraging you to get your feet wet, but also pointing out some of the hidden gotchas that can waste hours of your time even in the simplest scenarios.

Windows Azure from Space

If you’ve had any exposure to Windows Azure, you’re probably already familiar with its two major components: Windows Azure Storage (which provides table, queue, and blob storage) and Windows Azure Compute (which is where your code runs within Azure). In this article we will focus exclusively on Windows Azure Compute, particularly as it applies to web services.

Windows Azure Compute consists of two components that are cloud equivalents of what you are familiar with in the on-premises world. Windows Azure Web Roles are akin to ASP.NET websites hosted in IIS. Windows Azure Worker Roles are the equivalent of a Windows Service.

Which Role for Services?

If these are your two options for hosting, where should you put web services? Are Web Roles, as the name implies, simply for websites and Worker Roles the place for services? You can think of Worker Roles as a process that will self-host your WCF ServiceHost, in much the same way that a Windows Service or console application might. Also, with IIS 7 you’re able to host ASP.NET web pages alongside WCF Services (with SVC endpoints) or Workflow Services (with XAMLX endpoints).

In practice, the answer to which role to use really boils down to the protocols you require for communication between Internet clients and your WCF service. For services which will rely on HTTP or HTTPS, you will want to host your service within a Web Role. TCP services, on the other hand, should be hosted in a Worker Role. This gotcha can be quite hard to debug as there is no obvious exception that occurs, for example, when you attempt to host an HTTP-based service within a Worker Role. Moreover, the lack of documentation on this and a plethora of conflicting blogs increase the confusion. Also, note that there is no support for XAMLX services (or WF 4.0 in general) in either role as of this writing.

Both roles support the notion of endpoints that are available for communicating with Internet clients (known as input endpoints) as well as endpoints for communication between services running in different roles (known as internal endpoints). Odds are your first services will need to communicate with Internet clients, so we focus on leveraging input endpoints in this article.

Let’s start with what our service looks like, as we might have defined it for use on-premises. The ListService, as Figure 1 shows, consists of a single operation, GetItems, that takes a quantity of items desired, generates a list of strings representing those items, and returns the List. We built this by creating a WCF Service Library project, defining both the interface (IListService) and the service itself (ListService). In the on-premises case we could reference this type from an IIS-hosted SVC file’s ServiceHost tag, or within the code that instantiates a ServiceHost instance within a Windows Service. Moving the service to a Windows Azure Web Role follows the former approach; moving it to a Worker Role follows the latter. …
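For reference, the contract and service described above might look like this (a minimal sketch: the IListService, ListService, and GetItems names come from the article; the method body is illustrative):

using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;

[ServiceContract]
public interface IListService
{
    [OperationContract]
    List<string> GetItems(int quantity);
}

public class ListService : IListService
{
    // Generate a list of strings representing the requested number of items.
    public List<string> GetItems(int quantity)
    {
        return Enumerable.Range(1, quantity)
                         .Select(i => "Item " + i)
                         .ToList();
    }
}

In a Web Role this type would be referenced from an SVC file’s ServiceHost tag; in a Worker Role it would be passed to a self-hosted ServiceHost, much as in a Windows Service.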

Read more: 2, Next, Last


• Channel 9 presented a 00:51:20 Sells, Laverty and Flasko: Entity Framework 4, oData and PDC10 Webcast on 10/22/2010:

Chris Sells takes us on a small tour (two offices and a few hallways) in one of the Data Framework and Modeling Group's buildings (interesting - the same building the C9 team launched C9 from 6+ years ago: Building 18). The goal here is to get a sense of what's new in Entity Framework 4 and oData. We do this the old fashioned way: impromptu conversation with some members of the teams who make this stuff. Chris is a great MC and I suspect you will be happy with what you learn and who you meet here.

Code-First in EF4 is very cool -> you build your DB and data model in code...first. Sit back, relax and enjoy the ride. We begin the journey in a small office that Chris does not work in. Then we drop by Tim Laverty's office to talk turkey about EF4 and Code First. Then, we amble over to Mike Flasko's office to interrogate him on oData (what's the big deal, anyway - what is it, exactly, is it open or closed? Etc.., etc...)

Enjoy.
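To make the code-first idea concrete, development under the EF Feature CTP's DbContext API looks roughly like this (a sketch; the class names are illustrative, not from the webcast):

using System.Data.Entity;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    // The database and its schema are generated from this model on first use.
    public DbSet<Post> Posts { get; set; }
}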

Links mentioned in the conversations:

For EF:

For oData:


Steve Yi announced Service Update 5 Released to Production in a 10/22/2010 post to the SQL Azure Team blog:

Service Update 5 for SQL Azure is now live in all datacenters worldwide. This release primarily focused on delivering internal operational improvements to enable future feature additions. As part of this update, SQL Azure now provides support for the sp_tableoption system stored procedure. For more information, see sp_tableoption (SQL Azure Database).
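As a quick illustration, you could call the newly supported stored procedure from ADO.NET (a hedged sketch; the server, database, and table names are placeholders):

using System.Data.SqlClient;

using (var conn = new SqlConnection(
    "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
    "User ID=user@yourserver;Password=...;Encrypt=True;"))
{
    conn.Open();
    // Enable in-row text storage for a table (one of sp_tableoption's options).
    using (var cmd = new SqlCommand(
        "EXEC sp_tableoption @TableNamePattern = 'dbo.MyTable', " +
        "@OptionName = 'text in row', @OptionValue = 'ON';", conn))
    {
        cmd.ExecuteNonQuery();
    }
}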

Several improvements and updates were also made to the MSDN documentation. These include:


Microsoft TechEd Europe 2010 made its OData API for Session Data feed available on 10/22/2010:

Not happy browsing our list of sessions on the web? Feel like doing some data mining of your own, or building an app to show how schedule planning should be done? Well, if any of those statements apply to you, then we have the data you need.

The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperable ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010.

The URL for the Tech·Ed Europe OData service is http://odata.msteched.com/teeu10/sessions.svc/, and you can find more information on how to access this data on http://www.odata.org or on one of the many blog posts around the web about exposing and consuming data as OData such as this one (How to navigate an OData compliant service).
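If you want to poke at the feed from .NET without generating a service reference, something like the following works (a sketch; it assumes the entity set is exposed as Sessions):

using System;
using System.Linq;
using System.Net;
using System.Xml.Linq;

class FeedPeek
{
    static void Main()
    {
        XNamespace atom = "http://www.w3.org/2005/Atom";
        string xml = new WebClient().DownloadString(
            "http://odata.msteched.com/teeu10/sessions.svc/Sessions");
        // Print the titles of the first ten session entries in the Atom feed.
        foreach (var title in XDocument.Parse(xml)
                                       .Descendants(atom + "entry")
                                       .Select(e => (string)e.Element(atom + "title"))
                                       .Take(10))
        {
            Console.WriteLine(title);
        }
    }
}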

Here’s the root document:

[screenshot: the service’s root document]

and the first member of the Sessions collection:

[screenshot: the first Session entry]

The Cloud Computing & Online Services track lists 82 items.


Azret Botash will present an Implementing Custom OData Providers webinar on 10/26/2010 from 10:00 AM to 11:00 AM PDT:

In this session we will analyze WCF Data Services in detail and learn how to create custom providers.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

CardSpaceBlog posted AD FS 2.0 Step-by-Step Guide: Federation with Shibboleth 2 and the InCommon Federation to the Claims-Based Identity blog on 10/22/2010:

We have published a step-by-step guide on how to configure AD FS 2.0 and Shibboleth to federate using the SAML 2.0 protocol.  There is also an appendix on federating with the InCommon Federation.  You can view the guide in docx format and soon also as a web page.  This is the third in a series of these guides; the guides are also available on the AD FS 2.0 Step-by-Step and How-To Guides page.


Vittorio Bertocci (@vibronet) reported that he’s Speaking @… PDC!!!!!!!!!!!!!! on 10/21/2010:

Yesterday we released the list of sessions for PDC2010. I am dazzled to finally be able to say it: I have the honor of being the speaker for the identity breakout session!


I had the pleasure to present at PDC09 as well, and speaking at PDC is always a Big Deal; but if you know how PDC10 is going to work this year, you know this is a Huge Deal. Some key points:

  • All breakout sessions are going to be broadcasted *LIVE* (as in real time) and in HD
  • There are fewer breakouts: if you go in the Schedule tab of the player, you’ll see that there is a very focused selection of sessions
  • The sessions will be closed-captioned, with audio tracks instantly translated into multiple languages
  • There are many viewing parties all around the planet, where people will gather to follow the event online
  • The player application will allow the audience to interact, for example by submitting questions

All of the above means that the potential audience is just GARGANTUAN. For an evangelist, this is the ultimate reach tool. I had a real-time streaming speaking experience when I gave the keynote of the Italian Visual Studio launch back in the spring: I imagine this will be similar, but on a much bigger scale, with a talk that is dramatically more technical… and probably without makeup :-) ah, and for once I may even not have jetlag, since it all takes place about an 8-minute drive from my house.

I can’t tell you much about the talk in itself, of course, or people here will come after my mane; hence, I will just paste the customary title+abstract here. In fact, I’ll also add the time: for once, that will be actionable also if you are not on the conference floor! The session will start at 10:15am PST, which would be 1:15pm in New York, 18:15 in London, 19:15 in Paris and a casa in Italia, 21:15 in Moscow, 10:45pm in India, 1:15am in 北京 and Singapore, 4:15am in Sydney and 6:15am in Auckland. Wow, I guess Earth is round after all ;-)

Identity & Access Control in the Cloud

10:15 AM-11:15 AM GMT-7

image7223Signing users in and granting them access is a core function of almost every cloud-based application. In this session we will show you how to simplify your user experience by enabling users to sign in with an existing account such as a Windows Live ID, Google, Yahoo, Facebook, or on-premises Active Directory account, implement access control, and make secure connections between applications. You will learn how the AppFabric Access Control Service, Windows Identity Foundation, and Active Directory Federation Services use a claims-based identity architecture to help you to take advantage of the shift toward the cloud while still fully leveraging your on-premises investments.

I can’t tell you how honored I am to represent in this session the good work of so many great engineers. The product teams for WIF, ACS and ADFS did (and are doing!) a phenomenal job in bringing to the market technologies that didn’t even exist as of just a couple of years ago: one hour is not enough for doing justice to that, but I’ll do my best :-)

See you next week… in Redmond, or in your browser!

See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post updated 10/22/2010 for a complete list of PDC10 sessions on a single page.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Liam Cavanagh reported Improved Sync Framework 2.1 Documentation Released on 10/22/2010:


We have improved the Sync Framework 2.1 documentation. This update includes new and better information in the following database synchronization topics:

  • Provisioning
  • Scopes
  • Intercepting and changing data during synchronization
  • Improvements to walkthroughs
  • Deploying to Windows Azure

We've also reorganized and rewritten many topics to help you more easily find and understand what you need to know to use Sync Framework.

Go here to read the improved documentation: http://j.mp/9eWpJq

Don't have Sync Framework 2.1? Download it today: http://j.mp/a3t6rJ


•• Peter Bright wrote a detailed 18-page Windows Phone 7: The Ars Review article on 10/22/2010:

The smartphone market ain't what it used to be. Four years ago, Symbian ruled the world—it was totally dominant in every market but three: Japan and China both had strong showings from Linux, and the North American market was split roughly evenly between RIM, Microsoft, and PalmSource. Worldwide, smartphone sales amounted to some 60 to 65 million.

Then Apple came along with the iPhone in 2007 and changed the world.

The iPhone did four things. It showed us what could be done with finger-based user interfaces—that they could be easy to use, easy to type on, flexible, and good-looking. It made smartphones mass-market, consumer-oriented gadgets, breaking them free of their corporate shackles. It showed that smartphones were viable web browsing platforms, just as long as they were equipped with a good browser. And, eventually, it showed that there was a lot of value to be had in integrating an online application store.

Windows Mobile was a solid performer in the old smartphone world, but it never moved into the new, post-iPhone smartphone world. Windows Mobile 6.5, released in May 2009, was a half-hearted attempt to bring the system up-to-date with a finger-friendly home screen and Start menu-type-thing, but the interface was crudely grafted on and plainly unsatisfactory. This wasn't finger-friendly, consumer-friendly, modern smartphone software, and everyone knew it. It didn't halt Windows Mobile's marketshare slide, much less turn it around.

If Microsoft wanted to remain a player in the smartphone market, something would have to change. Windows Phone 7 is that change.

Windows Phone 7 is a smartphone platform that's aimed first and foremost at consumers. It's designed from the ground up for a finger-driven interface. It's built to be clean, attractive, and consistent. The ambition is that it will finally give Microsoft a platform that will enable it to take on the iPhone and Android phones. Virtually everything that Windows Mobile did is now ancient history. Windows Phone 7 ushers in a new era of Microsoft-powered smartphones.

Hardware

In many ways the hardware is the biggest similarity between Windows Mobile and Windows Phone, because with the new operating system, just as with the old one, Microsoft is leaving the hardware to third parties. Unlike with Windows Mobile, however, the company is being extremely strict about what's allowed and what's not. Every Windows Phone 7 device must meet the minimum specification.

 

Processor: 1 GHz Qualcomm Snapdragon
GPU: Qualcomm Adreno 200
RAM: 256 MB
Flash storage: 8 GB
Screen resolution: 800 × 480 (exactly)
Touch: Capacitive multitouch with at least four contact points
Cellular connectivity: GSM/GPRS/EDGE/UMTS/HSPA (HSPA+ optional)
Wireless: 802.11b/g (802.11n optional), Bluetooth 2.1 + EDR, FM radio
Hardware buttons: Start, search, back, volume, power, and camera (with half-press focus)
Camera: 5 MP, dedicated flash
Sensors: A-GPS, accelerometer, compass, proximity, light
Ports: Micro-USB, 3.5mm TRS headphone jack
Miscellaneous options: Hardware keyboard, user-accessible microSD slots

This is a high specification; these are premium handsets, so they'll be priced towards the upper end of the spectrum. When Windows Phone 7 was first announced, Microsoft said that at some point after launch, lower resolution 480×320 devices would also be supported, and even further into the future, an as-yet unspecified third resolution/form-factor would also be added.

At the moment, however, all the focus is on the 800×480 models, and personally, I think Microsoft should stick with this for as long as possible before venturing into new designs. The iPhone has demonstrated that you don't need a billion different models to be successful, and by sticking with one resolution, the job for application developers is made a great deal easier.

Even within these constraints, the initial handset partners—Dell, HTC, Samsung, and LG—have a reasonably broad range of options, with screens ranging from 3.5" to 4.3", 8 or 16GB of storage, and one with an 8MP camera. Dell's phone, the Venue Pro, includes a vertical (portrait) slider keyboard, and next year Sprint will release an HTC device, the 7 Pro, which will include a more conventional horizontal (landscape) slider keyboard. We took a quick look at the launch models last week, and you can see our initial thoughts on the UK and US offerings.

Some models also appear to have forward-facing cameras. Windows Phone 7 doesn't presently support video calling, which is unfortunate for those of us in parts of the world where such things have been a feature of the telephony landscape for many years. If forward-facing cameras are indeed shipping, it may be an indication that video calling is coming sooner rather than later.

The most unique, Windows Phone 7-specific feature of the hardware is the hardware buttons. Although the user interface is predominantly touch-driven, the specification mandates a set of hardware buttons. The power, volume, and camera buttons are self-explanatory; it's the Start, search, and back buttons that will be the hallmark of Windows Phone 7 devices. These mandatory buttons are perhaps the biggest reason why, to the chagrin of many, devices that otherwise ought to live up to the Windows Phone 7 specification such as the HTC HD2, won't be upgradable, and will be lumbered with Windows Mobile 6.5 for the rest of their lives.

The button placement is also defined by Microsoft. The back, Start, and search buttons must be on the front of the phone and in that order (though they can be mechanical or capacitive, or some combination of the two). The volume rocker switch must be on the top of the left-hand side, the power button on the top of the right-hand side, and the camera button on the bottom of the right-hand side.

The 3.5mm headphone jack must also support three buttons: volume up, volume down, and a third to answer calls/initiate voice dialing.

Notably missing from the feature list is support for CDMA and EVDO. CDMA support will arrive next year; at the moment, Windows Phone 7 handsets are GSM-only.

The specification allows for minor variations, but it doesn't allow for any radical deviations. The result is that the handsets are far more similar than they are different, and unless Microsoft substantially liberalizes the rules, it looks like it will be difficult for OEMs to produce any truly exceptional or unusual devices. This is good for application developers, as they have fewer targets to aim for, and it's arguably good for consumers, as it means that they can buy a Windows Phone 7 phone with confidence—if you know your way around one Windows Phone 7 phone, you know your way around them all.

It may, however, be bad for the OEMs, who may find themselves with little ability to differentiate and distinguish themselves from each other. OEMs are allowed to include custom applications, but their ability to stamp their own branding onto phones will be far weaker than it is with Android and was with Windows Mobile. If Windows Phone 7 is anything short of an enormous success, it's easy to see them giving up on the platform.

The model I have is a Samsung Omnia 7, and I'm using Orange, in the UK. The most striking feature of the Omnia 7 is its screen; it's a frankly beautiful 4" Super AMOLED display, whose vibrancy and viewing angles are quite delightful. The version I have has 8 GB of internal storage, though a 16 GB version should also be available. …

Read More: Page 2, Next >

This must be the ultimate third-party Windows Phone 7 review.


•• David Pallman reported Azure Storage Explorer 4 Beta Now Available (with source code) on 10/23/2010:

I'm pleased to announce the public beta of Azure Storage Explorer version 4 is now available. Beta 1 can be downloaded from CodePlex.

Azure Storage Explorer allows you to view and edit all 3 types of cloud storage: blobs, queues, and tables. If you're not already familiar with it, Azure Storage Explorer was one of the first (if not the first) GUI tools for viewing and working with Windows Azure storage. This utility was written in the very early days of Windows Azure, and all 3 of its major versions pre-dated the commercial release of Windows Azure in early 2010. Altogether there have been over 13,000 downloads.

It's been a year since version 3 was published, and in that time the Windows Azure platform has moved forward at a rapid pace. Many users have been hungry for an update that supports newer features such as blob root containers and better handles nuances such as blob prefix paths and property editing.

Highlights of version 4
Better Code. Versions 1-3 of Azure Storage Explorer didn't have the .NET Storage Client library to use and were based on an SDK sample that had a voluminous number of classes, leading to code that was vast and complex. In version 4 we are using the .NET StorageClient library and the code is compact and well-organized. The source code is open and is part of the CodePlex project.
Newer storage feature support. Support has been added for newer features such as blob root containers, blob path prefixes, and page blobs.
Copy and rename containers, queues, and tables.
Direct data entry and editing of blobs, messages, and entities.
Improved UI. The new WPF-based UI is cleaner, and supports opening multiple storage accounts at the same time in tab views. The Model-View-ViewModel pattern is used.
Containers & Blobs
• Create, View, Copy, Rename, Delete Containers
• Create, View, Copy, Rename, Delete, Upload, Download Blobs


Blobs can be viewed as images, video, or text. Blob properties can be viewed/edited.

Queues & Messages
• Create, View, Copy, Rename, Delete Queues
• Create, View, Pop, Clear, Upload, Download Messages

Message content and properties can be viewed.

Tables & Entities
• Create, View, Copy, Rename, Delete Tables
• Create, Edit, Copy, Rename, Delete, Upload, Download Entities

Entities can be viewed and edited.

Remember, it's a beta
With any beta software, you should exercise caution. Keep in mind that both people and programs can make mistakes, and it's always a good idea to keep safe backups of your data.
Azure Storage Explorer is a community donation of Neudesic. As with previous versions, Azure Storage Explorer remains free. Full source code is on CodePlex and we invite the community to help us keep it up to date and make improvements.


•• Kevin Kell explained Using C++ in an Azure Worker Role for a Compute Intensive Task in this 10/22/2010 post to the Learning Tree blog:

Recently a situation came up with an application I was working on. My client had implemented a proprietary algorithm in a legacy desktop application. This algorithm is very compute intensive. It happened that it was an actuarial calculation, but the specifics don’t really matter. The computation was written in highly optimized C code.

In thinking about how to port this application to Azure it was clear to me that I did not want to re-write all that code! In fact that was not even an option. The client wanted to be sure that that specific code (which they knew and trusted) was what was executing. The solution I chose was to host the application natively in a worker role. That way I felt I could pretty much drop in the existing code, run it through the C++ compiler and linker and party on from within Visual Studio.

This post highlights some of the considerations that exist when implementing a worker role that hosts an application in this way. For my example here I have replaced the actuarial calculation with another intensive task: calculate pi to an arbitrarily large number of decimal places. For that algorithm I adapted some nifty C code posted on Planet Source Code. I also borrowed liberally from a project I found in the MSDN Code Gallery. Both of these are excellent resources.

The first thing I did was to create a new C++ project in Visual Studio. Then I dropped the algorithm code into a new source file. After verifying that the code compiled properly and functioned as expected I was ready to add a new cloud service project to the solution.

Since the C++ application (in this case it is called “MyPI”) is to be hosted in the worker role I added a link to the executable in the worker role project items. Setting the “Copy to Output Directory” property to “Copy if Newer” ensures that the latest bits for that program always get added to the deployment package.

Figure 1 – MyPI Solution

The architecture of the solution is pretty straightforward. The web role asks for the precision of the calculation. It then places that into a queue which is read by the worker role. The worker role launches the native process with the desired precision passed as a command line argument. Standard output is redirected into a StreamReader object which is read into a string variable in the worker role. This string (which may be very long!) is then uploaded to an Azure blob. Back in the web role, the blob is read back into a string and used to populate a text box.
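The launch-and-capture step described above boils down to something like this (a hedged sketch rather than Kevin's production code; the MyPI executable name and precision value come from the article, the rest is assumed):

using System;
using System.Diagnostics;
using System.IO;

// Locate the native executable deployed with the worker role.
string roleRoot = Environment.GetEnvironmentVariable("RoleRoot");
var startInfo = new ProcessStartInfo
{
    FileName = Path.Combine(roleRoot + @"\", @"approot\MyPI.exe"),
    Arguments = precision.ToString(),  // precision arrives via the queue message
    UseShellExecute = false,
    RedirectStandardOutput = true      // capture the digits written to standard output
};

using (var process = Process.Start(startInfo))
{
    string digits = process.StandardOutput.ReadToEnd();  // may be very long!
    process.WaitForExit();
    // ... upload 'digits' to blob storage for the web role to read back
}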

Click here for the screencast.

As usual the actual production version of the code (and ultimate deployment to Azure!) was a little more complicated than presented here. Still, this simple demo gives an overview of how it is possible to leverage existing code written in C/C++ to port computationally intensive legacy applications to the Azure cloud.

Happy Coding!


O’Reilly.com announced the availability of Developing Applications for the Cloud on the Windows Azure™ Platform on 10/22/2010:

Description

This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud by using the latest versions of the Windows Azure tools and the latest features of the Windows Azure platform.

Full Description

This book is the second volume in a planned series about the Windows Azure technology platform. Volume 1, Moving Applications to the Cloud on the Windows Azure Platform, provides an introduction to Windows Azure, discusses the cost model and application life cycle management for cloud-based applications, and describes how to migrate an existing ASP.NET application to the cloud. This book demonstrates how you can create from scratch a multi-tenant, Software as a Service (SaaS) application to run in the cloud by using the latest versions of the Windows Azure tools and the latest features of the Windows Azure platform. The book is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that run on or interact with the cloud. Although applications do not need to be based on the Microsoft Windows® operating system to work in Windows Azure, this book is written for people who work with Windows-based systems. You should be familiar with the Microsoft .NET Framework, Microsoft Visual Studio® development system, ASP.NET MVC, and Microsoft Visual C#® development tool.

Product Details

Title: Developing Applications for the Cloud on the Windows Azure™ Platform
By: Eugenio Pace, Dominic Betts, Scott Densmore, Ryan Dunn, Masashi Narumoto, Matias Woloski
Publisher: Microsoft Press
Formats:
  • Print
  • Safari Books Online
Print Release: November 2010 (est.)
Pages: 96 (est.)
Print ISBN: 978-0-7356-5606-2
ISBN 10: 0-7356-5606-1

Maarten Balliauw explained how to Scale-out to the cloud, scale back to your rack in this 10/22/2010 post:

That is a bad blog post title, really! If Steve and Ryan have this post in the Cloud Cover show news I bet they will make fun of the title. Anyway…

Imagine you have an application running in your own datacenter. Everything works smoothly, except for some capacity spikes now and then. Someone has asked you to do something about it on a low budget. Not enough budget for new hardware, and frankly, new hardware would be ridiculous just to ensure capacity for a few hours each month.

A possible solution would be: migrating the application to the cloud during capacity spikes. Not all the time though: the hardware is in house and you may be a server-hugger that wants to see blinking LAN and HDD lights most of the time. I have to admit: blinking lights are cool! But I digress.

Wouldn’t it be cool to have a Powershell script that you can execute whenever a spike occurs? This script would move everything to Windows Azure. Another script should exist as well, migrating everything back once the spike cools down. Yes, you hear me coming: that’s what this blog post is about.

For those who can not wait, here’s the download: ScaleOutToTheCloud.zip (2.81 kb)

Schematical overview

Since every cool idea goes with fancy pictures, here’s a schematical overview of what could happen when you read this post to the end. First of all: you have a bunch of users making use of your application. As a good administrator, you have deployed IIS Application Request Routing as a load balancer / reverse proxy in front of your application server. Everyone is happy!

IIS Application Request Routing

Unfortunately: sometimes there are just too many users. They keep using the application and the application server catches fire.

Server catches fire!

It is time to do something. Really. Users are getting timeouts and all nasty error messages. Why not run a Powershell script that packages the entire local application for Windows Azure and deploys the application?

Powershell to the rescue

After deployment and once the application is running in Windows Azure, there’s one thing left for that same script to do: modify ARR and re-route all traffic to Windows Azure instead of that dying server.

Request routing Azure

There you go! All users are happy again, since the application is now running in the cloud on 2, 3, or whatever number of virtual machines.

Let’s try and do this using Powershell…

The Powershell script

The Powershell script will roughly perform 5 tasks:

  • Load settings
  • Load dependencies
  • Build a list of files to deploy
  • Package these files and deploy them
  • Update IIS Application Request Routing servers
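Maarten's script performs all of these steps in PowerShell. Purely as an illustration of the last step, here is a hedged C# sketch of re-pointing an ARR server farm with Microsoft.Web.Administration (the farm name and cloud address are assumptions):

using System;
using Microsoft.Web.Administration;

class RerouteToCloud
{
    static void Main()
    {
        using (var manager = new ServerManager())
        {
            var config = manager.GetApplicationHostConfiguration();
            // ARR server farms live in the <webFarms> section of applicationHost.config.
            var farms = config.GetSection("webFarms").GetCollection();
            foreach (var farm in farms)
            {
                if ((string)farm["name"] != "MyAppFarm") continue;  // assumed farm name

                var servers = farm.GetCollection();
                servers.Clear();  // drop the on-premises application server
                var cloud = servers.CreateElement("server");
                cloud["address"] = "myapp.cloudapp.net";  // assumed hosted service address
                servers.Add(cloud);
            }
            manager.CommitChanges();
        }
    }
}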

Want the download? There you go: ScaleOutToTheCloud.zip (2.81 kb)

Maarten continues with a detailed explanation of the load settings, dependencies and other elements of the project.


Rockford Lhotka (@RockyLhotka) announced availability of the Beta 3 version of his Basic XAML Framework (BxF) from CodePlex on 10/22/2010:

Project Description:

Basic Xaml Framework (Bxf) is a simple, streamlined set of UI components designed to demonstrate the minimum framework functionality required to make MVVM work well while leveraging the Visual Studio 2010 XAML designer ("Cider"):


Tim Anderson (@timanderson) adds his analysis to the IronPython/IronRuby transition from Microsoft’s to community support in his Microsoft lets go of IronPython and IronRuby post of 10/22/2010:

Visual Studio corporate VP Jason Zander has announced that IronPython and IronRuby, .NET implementations of popular dynamic languages, are to be handed over to the open source community. This includes add-ons that enable development in Visual Studio, IronPython Tools and IronRuby Tools. Of the two, IronPython is a more mature and usable project.

Why? Here’s a few reflections. For what it must cost Microsoft to maintain these projects, versus the goodwill it earns in the open source world, I would have thought they represent good value and I am surprised to see them abandoned.

On the other hand, it is easy to see that they are not commercial propositions. I’d guess that they were more valuable a few years back, before C# acquired dynamic features, and when dynamic languages were strongly in vogue and Microsoft was keen not to allow .NET to fall behind. To some extent dynamic languages are still in vogue, but we are now well past what is “the peak of inflated expectations” in the Gartner Hype Cycle, and few are likely to abandon .NET because it does not have an official Python or Ruby.

The other reason they are not commercial propositions is that Microsoft has under-invested in them. I recall Martin Fowler at ThoughtWorks telling me that JRuby, an implementation of Ruby for the Java Virtual Machine, is important to their work; it lets them work in a highly productive language, while giving clients the reassurance of running on a trusted and mature platform. JRuby performs very well, but IronRuby is a long way behind. Perhaps if Microsoft had really got behind them, one or both of these languages could be equally significant in the .NET world.

The fact that F# rather than IronRuby or IronPython turned up as a fully supported language in Visual Studio 2010 is also significant. After talking to F# leader Don Syme – see the interview here – I understood how F# was important to some of Microsoft’s key customers in the financial world; and I’m guessing that neither Python nor Ruby had that kind of case made for them within the company.

Although it is a shame that Microsoft is withdrawing official support, the clarity of Zander’s statement is better than leaving the projects in limbo. The folk appointed as project leaders are also very capable – Mono guru Miguel de Icaza is on both teams and a great motivator, though it seems unlikely he will have much time to devote to them given his other commitments – and this may actually be good rather than bad news for the projects themselves.

Jim Hugunin, creator of both Jython (Python for Java) and IronPython, is leaving Microsoft for Google, and his farewell is worth a read. He says C# has evolved into a nicer language than Java, but notes:

I like to have a healthy relationship with Open Source code and communities, and I believe that the future lies in the cloud and the web. These things are all possible to do at Microsoft and IronPython is a testament to that. However, making that happen at Microsoft always felt like trying to fit a square peg into a round hole – which can be done but only at major cost to both the peg and the hole.

Related posts:

  1. Why F# rather than IronPython in Visual Studio 2010?
  2. Dynamic language slowdown at Microsoft?
  3. Book Review: IronPython in Action


Jason Zander posted New Components and Contributors for IronPython and IronRuby on 10/21/2010:

The CLR has always been a great environment for dynamic languages and over the last several years we have built out additional dynamic language support for the .NET Framework through efforts like the Dynamic Language Runtime (DLR) and language implementations on top of the DLR. The DLR shipped earlier this year as a built-in component of .NET Framework 4, and we now have several great language implementations built on top of it.

IronPython and IronRuby are two dynamic language implementations that we have incubated internally the last few years. We have released several versions of both language environments (IronPython releases and IronRuby releases), and all of the source code has been released under open source licenses (recently moved to Apache License V2.0).

Today we are announcing new leadership for the Iron projects and a development model that will enable the broader community to contribute to their development:

  • The community can now make source contributions to any component of IronPython and IronRuby.
  • For both IronPython and IronRuby, we’ve made changes to the CodePlex projects to allow community members to make contributions without Microsoft's involvement or sponsorship by a Microsoft employee.
  • We’ve already released the IronPython Tools for Visual Studio that we developed under Apache 2.0. We’ve received great early feedback on the IronPython language service for Visual Studio. Today we are releasing the prototype code for IronRuby Tools for Visual Studio, and we expect similar feedback for IronRuby tools as well. Releasing these components under the Apache 2.0 license allows for community members to use the functionality and also contribute to the IronPython and IronRuby language services.
  • We have done a lot of ground work for the next version of IronPython v2.7 and IronRuby v1.9.
  • We have fixed a lot of infrastructure so that the community should be able to regression test all language updates using our tests.
  • We have enabled a full release work flow to produce builds and releases straight from the CodePlex projects. Previously, these could only easily be done from our own source depots.

As part of these changes I’m happy to announce new project leaders external to Microsoft who will take over the projects and provide leadership going forward. The IronPython project will have Miguel de Icaza, Michael Foord, Jeff Hardy, and Jimmy Schementi as Coordinators. Miguel de Icaza and Jimmy Schementi will be the Coordinators of IronRuby. All of these guys have worked with or on the Iron projects since their inception and I have nothing but trust and respect for the new stewards of these community projects.

Overall, I hope the effect of the changes is to dramatically increase the opportunity for community members to contribute their own code to IronPython and IronRuby, and to actively participate in these projects.

The IronPython and IronRuby projects began as an effort to improve support for dynamic languages in the .NET Framework and to diversify our portfolio of programming languages. These language projects have helped thousands of people since they began, and they have added value to the .NET Framework. They helped create the Dynamic Language Runtime in the .NET Framework 4, on which we have also built C#'s new 'dynamic' keyword and improved Visual Basic's late-binding support. We’ll continue to invest in making the .NET Framework a great runtime environment for dynamic languages going forward.
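To make the DLR connection concrete, here is a small sketch of hosting IronPython from C# and calling into it with the 'dynamic' keyword (it assumes the IronPython assemblies are referenced; the Python snippet is illustrative):

using System;
using IronPython.Hosting;

class DlrDemo
{
    static void Main()
    {
        var engine = Python.CreateEngine();   // DLR hosting API
        var scope = engine.CreateScope();
        engine.Execute("def double(x): return x * 2", scope);

        dynamic doubleFunc = scope.GetVariable("double");
        Console.WriteLine(doubleFunc(21));    // C# 'dynamic' dispatches into Python: 42
    }
}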

Working with the community has always been an essential part of developing IronPython and IronRuby, and the feedback and the community review of the source code and specifications has been invaluable. We are looking forward to this new level of involvement from the IronPython and IronRuby communities, and think it will help advance the languages even further.


Josh Holmes explained how to use Zend SimpleCloud and Azure in this detailed, illustrated 10/21/2010 post:

image I’ve been playing with Zend’s SimpleCloud API for the webcast that I’m doing with Zend today. I started with the Zend Framework Quickstart tutorial but changed out the backend to hit the Azure Tables and such (well kinda – I used Zend Studio 8 Beta 2 and didn’t use the ZF tool but I still created a little guestbook).

I’m going to expand this example to include blob storage and queues as well in the near future but at the moment, I’m just going to hit the Azure Tables.

To get started, I downloaded and installed the Zend Framework CE 1.10 and Zend Studio 8 Beta 2. Then I downloaded and installed the Windows Azure SDK.


The last bit that I needed was the Windows Azure Tools for Eclipse (windowsazure4e), which installs inside Zend Studio since it’s built on Eclipse. To install it, open up Zend Studio/Eclipse and select Help | Install New Software to open up the dialog. Then click Add… and fill in the location as http://www.windowsazure4e.org/update. Click OK, select the Windows Azure for Eclipse Toolkit and follow the rest of the wizard to install it. At this point, I’ve got all of the software installed that I need and am ready to start coding.

Creating the Project

Before I create the project, a quick tip: it’s a lot easier to work with IIS if you move your Eclipse workspace to c:\users\public\ZendWorkspace (I’m on Windows 7 so that’s where my public documents are). One more quick step is that I give IUSR Read and Execute permissions on the workspace.

Once I’ve moved my workspace, in Zend Studio, select File, New Zend Framework Project.

Name the project SimpleCloudDemo.

Select “Create new project in Workspace”. I tried creating the project on a local server to skip a few steps but that didn’t work so well as you have to be an administrator to write to the c:\inetpub\wwwroot location. Instead, we’ll just map a virtual directory in IIS in a few moments.

Make sure that Zend Framework default project structure is selected (should be the default).

Click Finish. This will create basic project structure that you’ll need to get started. The Zend Framework is a MVC style framework.

To finish setting up the project we need to include the framework bits and the API bits so that we have everything in a nice portable folder. Copy in the C:\Program Files\Zend\ZendServer\GUI\library\Zend directory to [project dir]\library.

Lastly, download the SimpleCloud Api from http://simplecloud.org/download and unzip it to the [project dir]\library directory.

Mapping the IIS Virtual Directory

Now we want to be able to test and make sure that everything is installed correctly and that the project works. To do this, we’re going to map a IIS virtual directory.

Open Internet Information Services (IIS) Manager and expand the tree on the left hand side until you find the default web site.

Right Click on the Default Web Site and select Add Virtual Directory…

Fill out the Alias with something simple to remember such as simpleclouddemo and fill in the Physical path with the directory to [your project directory]\public. Since I moved my workspace up above, the full Physical path that I entered is c:\users\Public\ZendWorkspace\SimpleCloudDemo\public

Now, browse to the virtual directory at  http://localhost/simpleclouddemo.

The one other thing that I’ll do that’s IIS specific is create a URL Rewrite rule that will make sure that the Zend Framework actually gets all of the calls rather than the calls just going into the IIS bit bucket. The easiest way to do that is to create a file called web.config in the public directory. …

Josh continues with detailed source code and other instructions to complete the project.


The Windows Phone Team suggested on 10/21/2010 that you download the Windows Phone Developer Tools October 2010 Update:

Brief Description:

October 2010 Update to the Windows Phone Developer Tools to provide two new utilities and address a performance issue in the Bing Maps Control.

Overview

The Windows Phone Developer Tools October 2010 Update includes:

  • Windows Phone Capability Detection Tool – Detects the phone capabilities used by your application. When you submit your application to Windows Phone Marketplace, Microsoft performs a code analysis to detect the phone capabilities required by your application and then replaces the list of capabilities in the application manifest with the result of this detection process. This tool performs the same detection process and allows you to test your application using the same list of phone capabilities generated during the certification process. For more information, see How to: Use the Capability Detection Tool.
  • Windows Phone Connect Tool – Allows you to connect your phone to a PC when Zune® software is not running and debug applications that use media APIs. For more information, see How to: Use the Connect Tool.
  • Updated Bing Maps Silverlight Control – Includes improvements to gesture performance when using Bing™ Maps Silverlight® Control.

System Requirements

  • Supported Operating Systems: Windows 7; Windows Vista
  • You must have Windows Phone Developer Tools RTM version installed.


James Ashley reported WP7 Deactivated != Tombstone in this 10/21/2010 post:

With the transition from the Beta WP7 Dev tools to the RTM, an important and subtle change was introduced to the way launchers and choosers work.  In the beta, it was a given that every time we launched a task from Silverlight, the current application would be tombstoned and the Deactivated event would be thrown on the current PhoneApplicationService object.

With the RTM tools, this is no longer always the case.  Five tasks break this rule: CameraCaptureTask, EmailAddressChooserTask, MediaPlayerLauncher, PhoneNumberChooserTask and PhotoChooserTask.  In each case, while the application may be tombstoned, it also may not be.  In fact, most of the time, it will simply be paused and no tombstoning will occur – the application will not be terminated.
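For reference, launching one of these choosers looks like this (a minimal sketch using PhoneNumberChooserTask; the handler body is illustrative):

using Microsoft.Phone.Tasks;

var chooser = new PhoneNumberChooserTask();
chooser.Completed += (s, e) =>
{
    if (e.TaskResult == TaskResult.OK)
    {
        // e.PhoneNumber holds the number of the contact the user selected
    }
};
chooser.Show();  // the app may be paused, or tombstoned, while the task runs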

We can assume that the typical workflow for a chooser task is the following (the screenshots in the original post demonstrate the PhoneNumberChooserTask in action):


A user performs an action that initiates a task.  The task window opens. The user either completes the task (in this case by selecting a contact) or presses the Back button to cancel the action.  The user is returned to the previous page.  In this case, it makes sense that no termination occurs.  Instead, the app is left in a Paused state much as an app is paused during incoming and outgoing calls – timers are suspended, no events are handled.

[Note: in the RTM, no event is fired when an application goes into a paused state.  At best, you can handle RootFrame.Obscured and RootFrame.Unobscured for incoming and outgoing calls.]

However, the user may also decide to press the Start button at the third step.  At that point it makes sense for termination to occur, as it is less likely that the user will press Back to return to the app.

So when should we handle the deactivated event for the second case where the user moves on and doesn’t complete a chooser task?  We actually can’t handle it when tombstoning occurs because our app is paused and will not respond to any events.

Instead, the PhoneApplicationService.Deactivated event is fired when a chooser task (or the MediaPlayerLauncher) is initiated.  This is despite the fact that we don’t know (and can’t know) at this point whether the app will actually be tombstoned.

So far so good.  We may unnecessarily be writing objects to storage if the app isn’t ever tombstoned, but it’s better to be safe than sorry.

What is peculiar is that when we return to the app – either through the first scenario above or through the second, deviant scenario – PhoneApplicationService.Activated is always thrown.  There’s a symmetry to this.  If the Deactivated event is called and we navigate back into the application, then the Activated event will be called.

The somewhat annoying thing is that the PhoneApplicationService should have enough information to avoid firing false Activated events.

No matter.  There is a simple trick for finding out whether an Activated event is real or fake – whether it truly follows a tombstoning of the application or is simply thrown because Deactivated was previously called.

Use a flag to find out if the App class was newed up.  It only gets newed up in two situations – when the application is first launched and when it is recovering from tombstoning.  Set the flag to false after the App class has finished loading.  If a chooser is launched and the application is paused but not tombstoned, the flag will still be false.  If tombstoning occurs, the flag will be reset to true.

private bool _isNewApp = true;

private void Application_Launching(object sender
    , LaunchingEventArgs e)
{
    _isNewApp = false;
}

private void Application_Activated(object sender
    , ActivatedEventArgs e)
{
    if (_isNewApp == true)
    {
        // a real tombstone event occurred
        // restore state
    }
    _isNewApp = false;
}

If you are handling tombstoning at the page level, you can similarly use a flag to determine whether the page was re-newed or not.

bool _isNewPage = true;

public MainPage()
{
    InitializeComponent();
}

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    if (_isNewPage)
    {
        //restore state
    }
    _isNewPage = false;
}

The important thing to remember, though, is that the Deactivated / Activated events are not synonymous with tombstoning.  They simply occur alongside tombstoning – and in the special cases described above occur when no tombstoning happens at all.

James is the author of Patterns of Windows Phone Architecture Part III and earlier.


<Return to section navigation list> 

Visual Studio LightSwitch

• John Rivard, Campbell Gunn and Sheel Shah of the LightSwitch Team explained How to create a RIA service wrapper for OData Source in a detailed tutorial published 10/22/2010:

Introduction

There has been a lot of discussion of whether there is OData support in Visual Studio LightSwitch. The answer is both yes and no. No, there is no native support for OData in version 1 of Visual Studio LightSwitch, but yes, there is a workaround.

LightSwitch v1 has native support for SQL Server and SharePoint data sources. But you can write some custom code to provide access to another data source. This post will show you how to access an OData source in LightSwitch by wrapping access to it in a WCF RIA DomainService.

These instructions assume you already have an OData service available. There are a number of guides available on how to create a new OData service (http://msdn.microsoft.com/en-us/library/cc668810.aspx)

The basic steps of creating the RIA service wrapper are as follows:

1. Create a class library project.

2. Add a WCF Service Reference to the project to provide access to the external OData source.

3. Add a WCF RIA DomainService to expose the OData DataServiceContext.

Once your DomainService has been defined, you can add it to your LightSwitch project via the “Add New Data Source” wizard.

[diagram: LightSwitch data service → RIA DomainService → DataServiceContext → OData service]

The main point in the illustration above is: The LightSwitch data service calls an in-memory instance of a RIA DomainService which calls a DataServiceContext (generated by add-service-reference), which calls remotely to an OData service.

You should be aware that there are a couple limitations on the OData Services that can be exposed using RIA Services and LightSwitch.

Complex Types

While both OData and RIA Services support complex types on entities, due to scheduling constraints LightSwitch will not.  If a complex type property is exposed on an entity, LightSwitch will import the entity, ignoring that property.   There are a couple of workarounds for this that we will detail in another blog post.


Navigation Properties without Foreign Keys

An OData service can contain navigation properties that are not associated with any foreign key. This is likely the case with many-to-many relationships, but can also occur for 0..1-Many or 1-Many relationships. For example, the Netflix OData catalog contains a many-to-many relationship between Titles and Genre. Unfortunately, RIA Service associations are foreign key based. If an OData association is not foreign key based, there isn't a good way to represent it over a RIA Service.

If an OData service does contain these types of associations, there isn't currently a way to represent these in LightSwitch. However, in our next beta we will be adding the ability to call parameterized queries on a RIA Service from LightSwitch. Using this functionality, queries that represent these unsupported associations could be exposed. For Netflix, for example, you could define GetGenresByTitle and GetTitlesByGenre queries on your RIA Service, which call into the appropriate OData navigation properties.

With these limitations in mind, I've defined a LightSwitch-compatible OData service.  To keep things simple, this service only contains two entity types, Product and Category.  There is a 1-Many relationship between Category and Product. I’ve listed the definitions for the entities below:

Category
ID (Int32)
CategoryName (String)
Description (String)
Products (Collection of Product)

Product
ID (Int32)
ProductName (String)
QuantityPerUnit (String)
UnitsOnHand (Int32)
CategoryID (Int32)
Category (Category)

We will define our LightSwitch-compatible RIA Service in a standard class library. Using Visual Studio 2010 Professional, create a new Class Library project. To access the OData service, we will need to add a service reference to the project.   Add a service reference to your project, entering the address of your particular OData service. I’ve added my service reference using the ProductCatalog namespace.


Adding the service reference will define classes representing the entities, and a DataServiceContext object to read and write data from the service. To create a RIA Service, we can simply expose these defined entity classes through a DomainService class. Add a DomainService class to the project (I’ve called it ProductService in my case).


A prompt will appear, asking whether we’d like to base this DomainService on an existing model. We can just select <empty domain service class> and hit Ok. We’ll need to add some functions to expose queries for each OData entity. To keep things simple, we’ll define a read-only DomainService. This requires that we add a query function for each entity type we’d like to expose. For LightSwitch, we require that each entity type has a parameterless query exposed, with the QueryAttribute applied. This allows us to identify which query represents the “Select *” operation for that entity type. We can then apply additional filters to this query from LightSwitch.

I’ve added three methods to my DomainService below, Initialize, GetProducts and GetCategories. The Initialize method will be called for each request to the DomainService. Within it, I’m instantiating my DataServiceContext to communicate with the OData service. This context is then used to return the query for my other two functions. …
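The team's full VB and C# listings follow in the original post. As a rough sketch of the shape they describe (the generated context type name and service address below are assumptions, not the team's code):

using System;
using System.Linq;
using System.ServiceModel.DomainServices.Hosting;
using System.ServiceModel.DomainServices.Server;

[EnableClientAccess]
public class ProductService : DomainService
{
    private ProductCatalog.ProductContext _context;  // assumed name of the generated DataServiceContext

    public override void Initialize(DomainServiceContext context)
    {
        base.Initialize(context);
        // Instantiate the DataServiceContext that talks to the OData service.
        _context = new ProductCatalog.ProductContext(
            new Uri("http://example.com/ProductCatalog.svc"));  // assumed address
    }

    // Parameterless queries marked with QueryAttribute are the "Select *"
    // operations LightSwitch requires for each entity type.
    [Query(IsDefault = true)]
    public IQueryable<ProductCatalog.Product> GetProducts()
    {
        return _context.Products;
    }

    [Query(IsDefault = true)]
    public IQueryable<ProductCatalog.Category> GetCategories()
    {
        return _context.Categories;
    }
}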

The team continues with VB and C# source code, as well as additional screen captures.


V.N.S Arun listed the Top 7 Features of Visual Studio LightSwitch in a 10/22/2010 post to the CodeGuru blog:

Microsoft introduced LightSwitch applications very recently. LightSwitch is integrated with Microsoft Visual Studio: it can be installed as a standalone edition of Visual Studio 2010 or as part of the normal Microsoft Visual Studio 2010 IDE. LightSwitch applications are intended for creating data-centric line-of-business applications.

Most data-centric business applications perform the same basic CRUD (Create, Read, Update and Delete) operations. In such applications the developer writes the same kind of code and builds a similar kind of UI again and again.

Microsoft Visual Studio LightSwitch revolves around selecting the type of screen and defining the data for the screen. Below is the list of ready-made screen types that LightSwitch provides.

  1. New Data Screen
  2. Search Data Screen
  3. Details Screen
  4. Editable Grid Screen
  5. List and Details Screen

LightSwitch eases the developer's job: define a data source with a well-defined schema, then simply add the appropriate screen for it. A little customization and business logic can be written if required.

Microsoft Visual Studio LightSwitch can be downloaded from here. Note that only the Beta version of Visual Studio LightSwitch has been released.

Fast Paced Development of Data-centric Applications

When using LightSwitch, development time is reduced because the developer doesn't have to spend much time on the UI or data access: the data screens are readily available, and LightSwitch handles data access on its own. Only if customization is required does the developer need to intervene. This ensures a couple of things:

  1. Data-centric business applications are developed quickly and in a stable manner.
  2. The final product gets into users' hands in no time.

For example here are the steps for creating a sample working screen.

  1. Create the table to store data and define the schema for it.
  2. Select Add New Screen and select the required screen as shown in Fig 1.0

    Fig 1.0 (Full Size Image)

  3. Run the application; you are done with an add-new or search module, depending on the screen you selected.

Given the above, you can see how quickly a data-centric application can be developed using LightSwitch.

Easy to Choose Whether the Application is to be Browser Based or Desktop Application

This is really striking: declaring whether your application runs on the desktop or in a web browser is just a click away. Believe me, this decision doesn't need to be made before starting the project; it can be made during development or even after development is complete. All you need to do is open the project's properties and choose the application type, and you are done. See Fig 2.0.

Fig 2.0 (Full Size Image)

Fig 2.1 shows the sample application selected to run as a desktop client

Fig 2.1 (Full Size Image)

Fig 2.2 shows the sample application opted to be run in a web browser

Fig 2.2 (Full Size Image)


About the Author
I work for an MNC in Bangalore, India. I am fond of writing articles, posting answers in forums and submitting tips in dotnet. To contact me please feel free to make use of the "Send Email" option next to the display name.


Kim Schmidt asked and answered Why Use the MVVM Pattern with Silverlight Applications? in a 10/21/2010 post to David J. Kelly’s Hacking Silverlight blog:

If you are either a Silverlight or WPF developer, the probability of hearing or reading the title question of this article is substantial. However, there are significant problems when searching for a proper answer to this question.

To begin with, there are numerous and varied ways of implementing methodologies like MVVM, most of which are used based on personal preference. To a novice trying to educate themselves, this leads to disparity in information. Complicating this factor is that many of the articles or videos that attempt to describe the MVVM pattern also include components of other architectural patterns. This adds unnecessary complexity to understanding the MVVM pattern because of the inconsistencies in the pattern being described. In this article, I will elucidate only the MVVM pattern - nothing more, nothing less.

First of all, let me answer the title question succinctly. Silverlight (XAML) and the Model-View-ViewModel (MVVM) architecture evolved together, thereby affecting each other. In effect, this means that inherent in Silverlight's framework elements and CLR objects are the mechanisms to implement MVVM's loose coupling and separation of concerns. I will go into greater detail on the specific classes and objects in .NET that are primarily involved in hooking everything up further on in this article.

The loose coupling and separation of concerns translates to the ability of large developer teams to work independently on different parts, bringing the pieces together at runtime utilizing classes or object interfaces (as opposed to user interfaces). Another enormous benefit of this is that not only can multiple developers work on different parts of the application simultaneously, designers - for the first time in the history of .NET development - and developers can work on the same code at the same time. Having designers "speaking the same language" as developers solves the longstanding dilemma of a developer taking what the designer gives them and having to rewrite everything to work with the applications. Furthermore, designers - for the first time also - can see what they are doing with data driven controls in the design view in Expression Blend. It's evident that the workflow between professional graphic designers and application developers has been monumentally improved.

Let me back up a bit and give an overview of what MVVM is and does for those of you who aren't familiar with it. MVVM, as architecture, is the mature, successful version of what n-tier attempted to accomplish: quarantine the user interface from the program logic and data. Where Model-View-Controller (MVC) may sufficiently accomplish this goal for ASP.NET applications, MVVM is a refined evolution custom-fit for Silverlight.

There have been intermediate patterns between n-tier and MVVM, all with the same goal, but none of them truly accomplishing the objective until the advent of MVVM and XAML. XAML, being an extension of XML, is inherently tool-able, resulting in the ease of building visual and other editors for those who use it.

If a new Silverlight developer were to dive into a new project without knowing better, they might attempt to put all logic into the codebehind of the MainPage.xaml.cs, as typically has been done in ASP.NET pages. Not only would this lead to difficult testing scenarios, but this methodology doesn't lend itself to long-term maintainability or extensibility. Testing code built like this needs a user interface (UI) to run and a human to debug, which adds to the complexity of finding errors. On the other hand, by using the MVVM architecture, only the "ViewModel" (which will be explained in the next couple of paragraphs) need be tested and verified before ever being bound to the UI.

Properly using MVVM, there is much less codebehind in MainPage.xaml. This is simply pure UI. Each entity in MVVM has its unique tasks, and they do them extremely well with complete separation. MVVM is an acronym for Model-View-ViewModel; let's elaborate on the functionalities of each entity.

At the uppermost level we have the "View". Ideally, the view consists only of the XAML UI and related UI logic. These are the Silverlight screens that are presented to the user. The View's responsibilities are to present data to end users and collect data from end users, period.

At the lowest level we have the "Model". This represents the entities that live on the server as well as the objects that are responsible for interacting with the data stores the application uses along with data persistence. Data interaction in Silverlight can be anything from RIA Services to web services or raw XML. Any CLR-object can be the binding source.

In between these two entities is the "ViewModel". This entity's responsibilities are numerous, but can be summarized as aggregating data that will be bound to the View, retrieved from the Model. This includes methods and states. Since Silverlight doesn't databind to methods, just properties and dependency properties, most of our data logic needs to be in property setters and getters in this ViewModel.

As previously promised, now I'll explain the specific objects in the .NET Framework that are involved in making MVVM and Silverlight work together. The binding mechanism in Silverlight links the View and ViewModel primarily through dependency properties and the data context. Each framework element in the View (controls, UI elements, etc.) exposes dependency properties. These properties can be bound to instances of exposed public properties in the ViewModel. The ViewModel can update the View via the INotifyPropertyChanged interface in the ViewModel base, which signals that a property value has changed by raising the PropertyChanged event (conventionally via an OnPropertyChanged helper method). This is a two-way conversation that extracts all of the data and logic from the View, but doesn't alter the UI's normal functionality.
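
As a minimal sketch of that plumbing (the class and property names are illustrative only, not from the article), a ViewModel base raises PropertyChanged, and a bindable property notifies on change:

```csharp
using System.ComponentModel;

// Minimal sketch of the INotifyPropertyChanged mechanism described above.
public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Conventional raiser method; the PropertyChanged event is what the
    // Silverlight binding engine actually listens to.
    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

public class CustomerViewModel : ViewModelBase
{
    private string _name;

    // The View binds to this property; the setter tells the binding engine
    // when the value changes so the bound UI element can refresh.
    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                OnPropertyChanged("Name");
            }
        }
    }
}
```

In the View, a framework element's dependency property is then bound to it, for example <TextBox Text="{Binding Name, Mode=TwoWay}" />, with a CustomerViewModel instance assigned as the page's DataContext.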

Lastly, I'd like to describe one last tremendous benefit of utilizing this pattern. Because all the logic is in the ViewModel, this entity can be copied from a Silverlight application and inserted into a WPF or Surface application, for instance. This cross-platform extensibility greatly increases return on investment (ROI) for companies that target multiple platforms.

- By Kim Schmidt, Guest Author from the Silverlight Group


See Laurent Duveau announced that he’s Speaking at TechDays Canada 2010 about Internet Explorer 9, Silverlight, and LightSwitch in a 10/21/2010 post in the Cloud Computing Events section below.


<Return to section navigation list> 

Windows Azure Infrastructure

•• Sajo Jacob (@sajo) answered the Windows Azure 101: Cloud Service Model- SaaS/PaaS/IaaS? question on 10/24/2010:

Platform as a Service/PaaS

Windows Azure currently supports the PaaS model; in other words, consumers can build and deploy cloud applications created by using programming languages and tools supported by the provider. Consumers don’t have to manage or control the underlying cloud infrastructure, i.e. network, servers, operating systems, or even local storage.

Infrastructure as a Service/IaaS


Windows Azure will support the “VM role” sometime in the near future, which should put Microsoft in the IaaS arena along with Amazon EC2.

With the IaaS model the consumer is provided the ability to provision compute, storage, networks and other fundamental computing resources. The consumer can choose to deploy and run arbitrary software without having to manage or control the underlying cloud infrastructure, while still maintaining control over the operating system, storage, deployed applications and certain networking components.

A picture is definitely worth a thousand words, so here is a high-level overview I put together to help visualize the cloud service models in terms of the scope of management:

[Figure: Cloud service models compared by scope of management]

Software as a Service/SaaS

The on-premise model is by far the most used model among ISVs for delivering applications to their clients. Each time a new client is brought on board, the solution is deployed to the client’s on-premise location. From then on, the client needs to manage and control the entire breadth of the infrastructure, i.e. network, servers, operating systems, storage, individual application capabilities and even datacenter logistics.

So something like this:

[Figure: The on-premise model]

SaaS is all about providing customers the ability to use the software vendor’s applications running on a cloud infrastructure, accessible from a variety of client devices. The customer doesn’t have to deal with the hassle of managing/controlling/securing the underlying cloud infrastructure, except for application configuration settings and customization.

[Figure: The SaaS model]

Using the SaaS model, the ISV can provide the goodness of cloud computing, offering clients high scalability while reducing costs through resource pooling and a multi-tenant model. From an application developer’s perspective, you should be able to deliver SaaS as long as the cloud service provider supports IaaS or PaaS. So the answer with Windows Azure is yes again: you can deliver SaaS on Windows Azure.

If you need an example of this, Vittorio Bertocci just released a great sample of running SaaS on Azure that you can look at.


•• David Linthicum posted When Considering Services... to ebizQ’s Where SOA Meets Cloud blog on 10/24/2010:

Services are the building blocks of SOA, and of cloud computing for that matter, and like the building blocks of a house or a building, their quality will define the value of the finished product: in this case, the SOA itself. Thus, spending time on what services do, how to define them, how to design them, and how to build them is a good investment of time, and something that's missing within many architectures.

Clarify these service issues at the outset of a SOA or cloud project to build better blocks:

  • First, services don't need to be Web services. However, this is a confusing statement for those of you who have been absorbing the hype. The fact is, you can build a SOA without Web services, opting for more traditional approaches such as transactions, distributed objects, or custom software systems. Indeed, when considering "special needs architectures," such as those requiring high performance, the use of Web services is clearly contraindicated.
  • Second, services produce behavior and data, not just data. Most who design, create, and/or expose services think of them as data providers, and indeed they are in most instances. You invoke the service, data is produced in the context of a structure, and it is consumed into another system. However, while many services are very data-oriented, services are able to provide behavior as well: the ability to do something around the data they contain, or perhaps to provide behavior without any data at all.
  • Third, services are not applications, and should not be designed like applications. As you'll see below, services have their own specific design orientation. The way you define and design a service is very different than what many consider traditional application design. You're building a much smaller system that exists within many systems, and thus special attention needs to be paid to interoperability, granularity, core purpose, and testing approaches.
  • Finally, each service has a specific purpose; services are not complex or naturally dependent upon other services. Thus, they are easily abstracted into composite applications, in essence leveraging these services as if they were functions local to the composite. This is where exposed services have a tendency to fall down. Since they were not designed, but abstracted, they typically have far too many dependencies to be as useful as services that were designed correctly from scratch. That's the tradeoff. Services should exist with a high degree of autonomy. They should execute without dependencies, if at all possible. This allows you to leverage the service by itself, and design the service with this in mind no matter how coarse- or fine-grained the service is.


David Linthicum asserted “With the release of Office 365, we could finally see big cloud computing traction from Microsoft” and asked Does Microsoft finally have cloud computing right? in this 10/22/2010 post to InfoWorld’s Cloud Computing blog:

It's been a strange week for Microsoft. With the departure of Ray Ozzie, who has been promoting Microsoft's cloud computing efforts for some time, you'd think Microsoft is on the cloud computing ropes. However, while Ozzie was packing up his boxes, Office 365 went live, and it could be the one push that places Microsoft much deeper in the cloud computing game.

Office 365 features Office, SharePoint Online, Exchange Online, and Lync Online as a bundled, hosted package. While it's not Microsoft's first office automation solution delivered as a service, the hope is that a cloud-based solution equipped with features and functions that are equivalent to the now aging client-based Office software will provide an easy path to the cloud for Microsoft customers. If it can pull off that transition, Microsoft may find itself leading the cloud-based office automation space quickly, perhaps passing Google as early as next year.

Why will Microsoft win that war? Answer: The millions of existing Office users, such as myself, who don't want to learn a new interface to use Google Apps or other cloud-based productivity programs. If it took five years to get your mom functional on Word, do you think you can convince her to switch to Google Apps? Not a chance.

Moreover, although many enterprises don't yet trust Microsoft to deliver enterprise development and deployment platforms in the cloud (meaning Windows Azure), they don't mind trusting Microsoft with their word processing, presentation, and email client needs.

While those in the IT community can argue over the technical differences between the competing cloud-based productivity applications, most rank-and-file users don't care. Thus, as IT departments ponder the movement to the cloud, Office 365 will prevail, as it offers the path of least resistance.


•• David Makogon (@dmakogon) reported New employer [Microsoft], new Azure role [Developer Evangelist] on 10/24/2010:

For over five years, I’ve been fortunate to work for RDA, a consultancy headquartered in Baltimore, MD. The company is a class act, with great people.  I’ve worked on nearly 20 engagements, with technology all over the .NET map. My last day with RDA was Wednesday. Let me elaborate a bit…


About two years ago, I started working with Azure, Microsoft’s cloud computing platform. My first project was with Coca Cola Enterprises. Then, in 2010, I spent almost 6 months “on loan” to Microsoft, as an Azure Virtual Technology Specialist. In my V-TS role, I worked with over a dozen customers, helping them with Azure migration solutions.

Over the past year, I’ve been speaking about Azure all over the Mid-Atlantic, at user groups, code camps, and even an Azure Bootcamp. If you couldn’t tell by now, let me spell things out for you: I really, really enjoy working with, and teaching, Azure.

On October 1, only a few short weeks ago, I was honored with an Azure MVP Award from Microsoft (I blogged about this earlier). I couldn’t be happier! Through the MVP program, I’ve met some seriously-talented Azure folks that share my enthusiasm and passion for the platform.

Ironically, at the same time the MVP announcement came out, I had been looking into a new role at another company. A perfect-fit role, one that I simply could not say no to. A role that would be dedicated to Azure.

The role? Azure Architect Evangelist, Mid-Atlantic.

The company? Microsoft.

I'll be a member of the Developer and Platform Evangelism (DPE) team. My primary responsibility will be working with ISVs, helping them migrate their applications to Azure. As this position specifically covers the mid-Atlantic area, I won't have to relocate.

And that brings me to today. I’m sitting on a plane, en route to Redmond. I officially become a Microsoftee tomorrow morning, only 3 days before the Azure-heavy Professional Developers Conference, being held on the Microsoft campus. The PDC will be a great way to kick off my Microsoft career.
With Microsoft as my new employer, I’ll have to step down as an active MVP, effective Monday morning. However, that little technicality has no bearing on my developer community participation. In fact, I have three talks scheduled in November: Two Azure talks Nov. 6 at CMAP Code Camp in Columbia, MD, and an Azure+MongoDB talk at the Mongo DC conference, Nov. 18.

I’ll close this post out now, as I have lots to do (including another Azure post). I’m totally stoked about this career move!!!


Steven Nagy posted Introducing your Microsoft MVPs in Windows Azure on 10/22/2010:

On 1st October Microsoft awarded 26 candidates in a new MVP category: Windows Azure. The Microsoft “Most Valuable Professional” award is an acknowledgement and thank-you to individuals for contributions to the technology community.

You will typically find MVPs answering questions on forums, speaking at your local user groups and conferences, blogging, tweeting, and generally trying to help with understanding and adoption of particular technologies. They give up countless hours of their personal time doing this so they really do deserve a pat on the back.


This is the first time the Windows Azure category has been awarded which makes this particular round even more special. So congratulations to all the awardees and thanks from the community for the hard work you’ve done there.

You can find an MVP in any technology by simply accessing the MVP website:

From here you can find an MVP via the links on the left. Here’s the list of Windows Azure MVPs:

I’ve gathered some information on a few of them so you can more easily find their blogs and twitter accounts. Feel free to say hello, they really are a very friendly bunch!

 

Name                  Twitter          Blog                                      Country
Niraj Bhatt           nirajrules       http://nirajrules.wordpress.com/
Andrew Wilson         awilsong         http://pinvoke.wordpress.com              USA
Brent Stineman        brentcodemonkey  http://brentdacodemonkey.wordpress.com/
Jim Zimmerman         jimzim
David Makogon         dmakogon         http://www.DavidMakogon.com [see above]
Nico Ploner                            http://nicoploner.blogspot.com            Germany
Viktor Shatokhin      way2cloud        http://way2cloud.ru                       Ukraine
Panagiotis Kefalidis  pkefal           http://www.kefalidis.me                   Greece
Rainer Stropek        rstropek         http://www.timecockpit.com                Austria
David Pallmann        davidpallmann    http://davidpallmann.blogspot.com         USA
Cory Fowler           syntaxc4         http://blog.syntaxc4.net                  USA
Sergejus Barinovas    sergejusb        http://sergejus.blogas.lt                 Lithuania
Gisela Torres         0GiS0            http://www.returngis.net                  Spain
Steven Nagy           snagy            http://snagy.name                         Australia
Jason Milgram                          http://linxter.com/blog                   USA
Michael Wood          mikewo           http://mvwood.com
Michael Collier       michaelcollier   http://www.michaelscollier.com


James Parrish posted Windows Phone 7 and Windows Azure: Foundations for the Next Client-Server Architecture? on 10/21/2010:

I was having another conversation about my thoughts on adding Windows Phone 7 to the MIS curriculum with some friends of mine who specialize in entrepreneurship.  When I mentioned that USA Today had reported that mobile business applications will be the #1 type of software being developed by 2010, they seemed skeptical.  “They don’t have the processing power required to run meaningful business apps,” was the comment that I received.  That caused me to really think.  Is the mobile device the next paradigm shift in computing?  I mean, these guys are very intelligent, they understand new innovations, and they are far from being Luddites.

I was feeling a little less secure in my convictions when, in the midst of a conversation with another faculty member on Imagine Cup projects we are involved with, the answer became evident.  It was all reminiscent of the shift from mainframe computing to client-server computing.  Clients had nowhere near the processing power of a mainframe, but they had some processing power and could connect to more powerful servers when they needed more resources for processing or storage.  In other words, mobile devices didn’t need to be powerful enough to run meaningful business apps, they just needed to be able to connect to something that could.  That something, in my mind, is Windows Azure.

In my opinion, Windows Phone 7 + Windows Azure applications may be indicative of not just a shift to mobility, but rather a shift to the next-generation client-server architecture. Windows Azure can provide a powerful, scalable platform for the parts of mobile business applications that need to do the heavy lifting, and the mobile apps can provide the processing needed to localize and personalize the application or to locally manipulate whatever data has been retrieved from the cloud. Windows Azure also, via SQL Azure, provides tremendous storage capacity and analysis services not currently available on mobile devices. A sketch of this split appears below.
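
As a hedged illustration of that split, the sketch below shows a Windows Phone 7 client delegating heavy lifting to a service hosted in a Windows Azure web role. The cloudapp.net endpoint, its query string, and the ResultsTextBlock control are assumptions for illustration, not part of the original post.

```csharp
using System;
using System.Net;

// Hypothetical WP7 client: the device only presents and personalizes the
// results; the computation and storage happen in the Azure-hosted service.
// The other half of this partial class (the XAML) would declare
// ResultsTextBlock.
public partial class MainPage
{
    private void LoadAnalysis()
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error == null)
                ResultsTextBlock.Text = e.Result; // localize on the device
        };
        // Assumed REST endpoint exposed by an Azure web role.
        client.DownloadStringAsync(
            new Uri("http://myapp.cloudapp.net/api/analysis?region=en-US"));
    }
}
```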

The really funny part of all this is that my students are already doing something to this effect in our Imagine Cup project.  I just didn’t see it from the client-server perspective. The end result of all of this was that my faith was restored that mobility is something that MIS (as well as CS and IT) programs can’t afford to ignore, but also that a shift to mobility is going to open a lot of other possibilities for businesses to leverage IT for competitive advantage.  I even felt better about my Imagine Cup team’s project to boot…and that, my friends, is a good thing.

James is an assistant professor of management information systems at the University of Arkansas at Little Rock in Little Rock, AR.  He is also the president and chief consultant of InfoVenture Systems Consulting.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

The Microsoft News Center reported OpenStack Is Now Open for Windows Server in a 10/22/2010 press release:

Microsoft Corp. today announced that it has partnered with Cloud.com to provide integration and support of Windows Server 2008 R2 Hyper-V to the OpenStack project, an open source cloud computing platform. The addition of Windows Server 2008 R2 Hyper-V provides organizations and service providers running a mix of Microsoft and non-Microsoft infrastructure with greater flexibility when using OpenStack.

As part of the collaboration, Microsoft will provide architectural and technical guidance to Cloud.com. Cloud.com in turn will develop the code to support OpenStack on Windows Server 2008 R2 Hyper-V. Once complete, the project code will be checked into the public code repository at http://openstack.org.

OpenStack uses open source software on standard hardware. Simply put, the software can run on an individual server in an existing datacenter or run on hardware configured as a modular datacenter. It uses virtualization technology to create and manage large groups of virtual machines.

The addition of Microsoft’s virtualization product puts customers in an excellent position to reach economies of scale to run Windows- and Linux-based infrastructure. Windows Server 2008 R2 Hyper-V can efficiently run multiple different operating systems in parallel.

“Support for Windows Server Hyper-V on OpenStack reinforces Microsoft’s commitment to delivering choice and flexibility to customers in the cloud,” said Ted MacLean, general manager for the Open Solutions Group at Microsoft. “Giving customers the option to use Microsoft’s enterprise-ready virtualization platform, Windows Server 2008 R2 Hyper-V, when they deploy OpenStack as their cloud solution is a win for all.”

“We’re extremely pleased to welcome Microsoft to the OpenStack community. Its contributions to the open cloud platform will expand the opportunities for customers, vendors and channel partners,” said Jim Curry, general manager of OpenStack and Chief Stacker.

“As the demand for cloud computing continues to grow throughout the industry, there is an increased demand from customers for support of their existing technologies, such as Windows Server 2008 R2 Hyper-V,” said Sheng Liang, CEO of Cloud.com. “Microsoft’s support for both the OpenStack project and Cloud.com’s CloudStack underscores its commitment to providing customers with technologies that promote interoperability and openness in the cloud ecosystem.”

OpenStack was launched with code contributions from Rackspace US Inc. and the NASA Nebula cloud platform. Today, OpenStack is supported by 35 software and hardware providers from across the IT industry.


Mary Jo Foley (@maryjofoley) reported OpenStack cloud platform to get Microsoft Hyper-V integration on 10/22/2010:

The OpenStack project — an open-source cloud-computing platform created by RackSpace, NASA and a growing list of partners — is getting some support from Microsoft.

Microsoft officials said on October 22 that they are partnering with Cloud.com to provide “integration and support” for Windows Server 2008 R2’s Hyper-V hypervisor with OpenStack. Cloud.com is one of the current OpenStack contributors and participants.

The arrangement is similar to other interoperability-focused partnerships Microsoft has struck in the past to add Java, PHP and Eclipse support to Microsoft’s own Windows Azure platform, in which Microsoft provides architectural and technical guidance (and in some cases, money) and the partner does the code development. Microsoft officials said that once Cloud.com develops the supporting code, that code will be made available at http://openstack.org.

The OpenStack platform already includes virtualization technology. The code that is under development by Cloud.com will enable the OpenStack platform to integrate with Microsoft’s own server/virtualization products.

The OpenStack project announced its new “Austin” release of its Compute and Storage platforms on October 21. That release added support for Xen, KVM, QEMU, and User Mode Linux in the hypervisor space.

Rackspace has made Windows and Visual Studio integration technologies available to customers of its own cloud platform.


Derrick Harris posted Microsoft Joins OpenStack to Add Hyper-V Support to the GigaOM Structure blog on 10/22/2010:

Yesterday, OpenStack became wholly available; today — in what could be considered a very big deal — Microsoft has joined the effort. Well, indirectly, at least.

According to the official announcement, Microsoft will provide technical guidance and assistance to startup Cloud.com to add Hyper-V support to its CloudStack offering. Once completed, Cloud.com will “develop the code to support OpenStack on Windows Server 2008 R2 Hyper-V” and add it to the OpenStack code repository.

Microsoft Hyper-V support is huge for both Cloud.com and OpenStack because Hyper-V adoption is rising fast. In the fourth quarter of 2009, for example, IDC estimated that Hyper-V licenses rose by 215 percent, compared with 19 percent for VMware ESX. Increasingly, it appears that cloud-computing software providers wanting to lure customers will need to support Hyper-V. Cloud.com is riding a momentum wave after its big private-cloud installation at Korean telco KT, and Hyper-V support will only help.

For OpenStack, Hyper-V support could make an even bigger impact. Web hosts and MSPs have been driving spending on cloud software as they seek to upgrade their offerings, and they’re starting to realize that their VMware-only hypervisor offerings won’t cut it for much longer. A free, open-source, MSP-proven platform that supports Hyper-V, as well as XenServer and KVM, should be appealing.

On a personal note, I wrote in July (subscription required) that OpenStack will face an uphill battle to gain real traction, and I stand by that proposition. If anything, the competition has gotten stronger since then, especially with the introductions of VMware vCloud Datacenter and vCloud Director. However, OpenStack has been evolving furiously, and it looks stronger with each iteration.



Ellen Rubin claimed “We’ve written extensively about the benefits of hybrid clouds, since it’s a core part of our founding vision at CloudSwitch” as a preface to her Hybrid Clouds: Private vs. Public, Revisited post of 10/22/2010:

image We’ve written extensively about the benefits of hybrid clouds, since it’s a core part of our founding vision at CloudSwitch.  For most of this past year, the cloud market has been focused on defining the differences between public and private clouds and weighing the costs and benefits. Slowly the conversation has shifted to what we believe is the central axiom of cloud: it’s not all or nothing on-premise or in an external cloud; it’s the ability to federate across multiple pools of resources, matching application workloads to their most appropriate infrastructure environments.

To reiterate some key thoughts we’ve written about in the past, the idea of hybrid clouds encompasses several use cases:

  • Using multiple clouds for different applications to match business needs. For example, Amazon or Rackspace could be used for applications that need large horizontal scale, and Savvis, Terremark or BlueLock for applications that need stronger SLAs and higher security. An internal cloud is another federation option for applications that need to live behind the corporate firewall.
  • Allocating different elements of an application to different environments, whether internal or external. For example, the compute tiers of an application could run in a cloud while accessing data stored internally as a security precaution (“application stretching”).
  • Moving an application to meet requirements at different stages in its lifecycle, whether between public clouds or back to the data center. For example, Amazon or Terremark's vCloud Express could be used for development, and when the application is ready for production it could move to Terremark's Enterprise Cloud or similar clouds. This is also important as applications move towards the end of their lifecycle, where they can be moved to lower-cost cloud infrastructure as their importance and duty-cycle patterns diminish.

CloudSwitch customers and prospects are clear that hybrid clouds are the way to go. Here are some examples of recent conversations:

“It’s going to take our internal IT group more than 18 months to build a private cloud; in the meantime we can use the public clouds now for on-demand capacity and scalability.” – VP of Business IT group at a large Wall Street firm

“We’re highly virtualized and we see external clouds as pools of virtualized resources that are available as extensions of our internal infrastructure.” – IT Director at a large healthcare company

“We have compliance data that will never leave our firewall but we like the idea of scaling out the computing resources in the cloud for peak periods.” – VP of Informatics at a large pharma

We’ve also been tracking some validation from more official sources on the growth of public clouds and the hybrid model. For example, a recent study by SandHill Group surveyed more than 500 IT executives and indicated that the biggest growth in cloud computing will be in hybrid clouds (from 13% now to 43% in three years). Another survey by Evans Data finds an even higher adoption rate among IT developers, suggesting that the hybrid cloud model is set to dominate the coming IT landscape.

It’s also interesting to see the importance of the hybrid model taking hold among industry insiders with many different perspectives. We saw this at VMworld 2010, where there was tremendous interest in hybrid clouds, from Paul Maritz’s keynote predicting a hybrid cloud future through many sessions and product announcements. Veteran cloud watcher James Urquhart points out that the hybrid approach lets you hedge your bets in cloud computing, using technology that allows you to decouple the application from the underlying infrastructure and move it to the right environment so you don’t get locked in. And even private cloud advocates acknowledge that hybrid has an essential role, where public cloud platforms serve as extensions of private cloud deployments.

It’s gratifying to see the CloudSwitch founding vision gain broad industry acceptance, with the hybrid model as key enabler for cloud computing. It’s even more satisfying to seeing the vision coming to life as more and more customers leverage our technology to run their applications effortlessly in the right environment, whether an internal data center, private cloud, or public cloud. Enterprise users and their companies are the real winners.


<Return to section navigation list> 

Cloud Security and Governance

W. Scott Blackmer posted A Privacy Checklist for Global Enterprises to the Info Law Group blog on 10/21/2010:

Nymity, the international privacy consultancy, recently interviewed me about managing risk and compliance in a global enterprise that handles protected personal information about customers, employees, website visitors, and other individuals in multiple jurisdictions.  Based on experience with many multinationals, large and small, I came up with a discovery checklist that a company might find useful in identifying and prioritizing these data flows.  We also discussed several issues of common concern to global organizations:

  • enforcement and litigation trends
  • the moving target of "sensitive" data
  • the role of privacy commissions and other data protection authorities
  • the increasing interest of trade unions and works councils in employee privacy issues
  • the value of referring to information security standards
  • the practicalities of using cross-border compliance vehicles such as model contracts, Safe Harbor, and binding corporate rules.

The full interview is available here.


<Return to section navigation list> 

Cloud Computing Events

Robert Mullins asserted “Sessions on Azure, new mobile platform featured at developer conference” as the deck for his Cloud, Phone 7 on tap at Microsoft PDC10 article of 10/23/2010 for Network World’s Microsoft Tech blog:

Microsoft will be opening the doors of its corporate campus in Redmond Oct. 27-29 for its annual Professional Developers Conference at which it will be building momentum for its cloud computing initiatives and the development of applications for its new Windows Phone 7 operating system.


PDC10 will begin with a keynote Thursday by CEO Steve Ballmer, who will share the stage with Bob Muglia, president of the Server and Tools Business at Microsoft. In that role, Muglia is responsible for Microsoft’s infrastructure software, developer tools, and cloud platform, including products such as Windows Server, SQL Server, Visual Studio, System Center and the Windows Azure Platform. This means Muglia, along with Ballmer, will be pumping up software developer attendees about Microsoft’s “We’re all in” strategy on cloud computing, which Ballmer launched back in March.

Sessions Thursday and Friday will cover how to build, deploy and manage applications running on Azure, which is the cloud version of Windows Server. Other sessions will be devoted to running Java applications on Azure, managing identity and access control in the cloud, building databases on SQL Azure and integrating SharePoint with Azure.

Blogger Tim FitzGerald reported today that we may hear more at PDC10 about Microsoft’s plans to deliver cloud computing as an Infrastructure-as-a-Service (IaaS) offering. FitzGerald, an executive at Avnet Technology Solutions, writes that Azure is currently available only as a Platform-as-a-Service (PaaS), but that another Microsoft executive, Zane Adams, general manager of Azure and Middleware Server and Tools Business, told attendees at a conference in the U.K. that Microsoft was expecting to be a player in IaaS, PaaS and software-as-a-service (SaaS). Positioning itself as a player in all three spaces would put it in competition with Amazon's EC2 (IaaS), Google (PaaS) and Salesforce.com (SaaS), which could be a lot to take on.

And Microsoft needs to woo the development community to tackle another highly competitive space in mobile with its coming Windows Phone 7 OS. Microsoft released the final version of its development tools for Phone 7 in September and the smartphones running it are to go on sale in the U.S. Nov. 8. Sessions on Phone 7 include one on how to use Azure to build Phone 7 apps that would be backed by scalable cloud components. Another one covers how to build apps that run on Silverlight. And a third session looks at building game applications using XNA Game Studio, which is the Microsoft integrated development environment for Xbox. And running Xbox games on a Phone 7 device is expected to distinguish Phone 7 devices from Android, Apple and BlackBerry devices.

See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post updated 10/22/2010 for a complete list of cloud-related sessions at PDC10.


•• Wes Yanaga asked Live in LA? Register for the Underground Powered by PDC10 on November 9 on 10/22/2010:

The Underground powered by PDC10 is one of the hottest events of the year. Sponsored by Microsoft, this year’s event is focused on the Southern California technology and startup community.

The event is on Tuesday, November 9 from 6:00PM-10:00PM at Club Nokia at LA Live.

The night will open with a highly anticipated Q & A session with tech luminaries Dan’l Lewin, CVP for Strategic and Emerging Business Development at Microsoft, and Blake Irving, Executive Vice President and Chief Product Officer at Yahoo! This session will be moderated by Jason Nazar, Co-Founder and CEO of Docstoc, a prominent startup in LA.

LA-based startups, including MobilePayUSA and CloudBasic, will be discussing how they leverage Microsoft’s technology to create cutting-edge applications and products. Emphasis will be on Windows Phone 7, though other Microsoft technologies will be discussed.

Enjoy lots of free giveaways, delicious appetizers and a hosted bar! This is an invitation-only event.

Follow us on Twitter: @UndergroundPDC

For more information: http://undergroundpdc.com

Preregistration is required; register here with this invitation RSVP code: mkejed


Reuven Cohen (@ruv) announced CloudCamp Bogotá, Nov 13, 2010 to be held 11:00 to 17:00 at Calle 98 # 18-71 Piso 2,  Bogotá, Colombia:

About CloudCamp:

CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas. With the rapid change occurring in the industry, we need a place where we can meet to share our experiences, challenges and solutions. At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing. End users, IT professionals and vendors are all encouraged to participate.

Register for CloudCamp Bogotá, Nov 13, 2010

Date: November 13, 2010, 11:00 to 17:00

Venue: HubBog http://www.hubbog.com

Address: Calle 98 # 18-71 Piso 2, Bogotá, Colombia

Price: Free admission.

Organizers:

Juan David Gutierrez - juandavid.gutierrezreyes@gmail.com - jgutierrez@vivareal.com - @jdtato

Thomas Floracks - thomas@vivareal.com - @thomasfloracks - www.vivareal.com

Gilbert Guevara - gilbert.guevara@gmail.com - @gilbertguevara

Yosu Cadilla - yosu.cadilla@gmail.com - @_YC

Sponsors:

CloudCamp Bogotá is an excellent opportunity to position your company as a leader in Colombia's cloud computing market.

If you would like your company to sponsor the event, please contact any of the organizers.

Agenda:
11:00 Registration & Networking, Food
11:30 Welcome, What is CloudCamp.

11:45 Lightning Talks (5 minutes each)

- TBD
- TBD
- TBD
- TBD

12:15 Unpanel
12:45 Organize the Unconference Sessions
13:00 Unconference Session 1
13:45 Food break and Networking
14:15 Unconference Session 2
15:00 Wrap-up Session
15:15 Go out for drinks! (TBD)


• Keynote announced the Mobile Cloud Computing Forum to be held on 12/1/2010 at RIBA, London, UK:

Show Highlights include:

    • 1-day conference and exhibition on Enterprise Mobile Cloud Computing and Enterprise Apps
    • Attend and network, or watch the event streamed LIVE online free of charge
    • Hear from leading case studies on how Cloud Mobility has been integrated into their working practices
    • Learn from the key players offering Mobile products and services
    • Benefit from our pre-show online meeting planner
    • Network in our combined exhibition and catering area
    • Evening networking party for all attendees

Cory Fowler (@SyntaxC4) announced on 10/21/2010 Cloud Camp Toronto – October 26 to be held at the Metro Toronto Convention Centre from 5:30 PM to 9:30 PM EDT:

image If you’re interested in the Cloud Computing and would like to get an a good idea of what Cloud Providers there are out there (I’d suggest Windows Azure) and how people are Architecting their Cloud Applications, Cloud Camp Toronto is the place for you.

Photo Credit: Office Space Toronto

CloudCamp is an unconference where early adopters of Cloud Computing technologies exchange ideas. With the rapid change occurring in the industry, we need a place where we can meet to share our experiences, challenges and solutions. At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing. End users, IT professionals and vendors are all encouraged to participate.


Be sure to Register for the Event and I look forward to seeing you there!


Jim Nakashima Hope[s] to see you at PDC – live or online according to this 10/21/2010 post:

Next week, I’ll be speaking at PDC.  The sessions were published recently and I’ll be speaking on Thursday at 11:30AM PST.

Building, Deploying, and Managing Windows Azure Applications

In order to take full advantage of Windows Azure and SQL Azure, you need to know more than just how to write the code. You need to know how to incorporate your application in a team environment, deploy, monitor, manage and retrieve diagnostic information back from the cloud. In this session, you will learn everything you need to know to be successful with a project that utilizes Windows Azure and SQL Azure including setting up your development environment, automating build, unit test and deployments to different deployment environments from staging to production, and managing credentials and user roles using the Windows Azure Portal.

I’ll frame the talk a little differently.  My talk has 3 sections, Setting up the cloud, Deploying to the cloud and Viewing into the cloud and I’ll be covering a mix of what’s there today and some cool new features we have coming sometime before the end of the year.

What’s unique about this PDC is that all of the sessions will be broadcast live online at: http://player.microsoftpdc.com/ so if you couldn’t make it live, I hope that you can watch online and twitter along @jnakashima.

I’m pretty jazzed, looking forward to seeing you there – live or online.

See my Windows Azure, SQL Azure, AppFabric and OData Sessions at PDC 2010 post updated 10/22/2010 for a complete list of PDC10 sessions on a single page.


Laurent Duveau announced that he’s Speaking at TechDays Canada 2010 about Internet Explorer 9, Silverlight, and LightSwitch in a 10/21/2010 post:

This year again I’ll be speaking at TechDays Canada!

I will be giving 3 talks in Montreal on November 23-24:

DEV302: A Lap around Windows Internet Explorer 9 for Developers
Tuesday, November 23, 12:30pm to 12:50pm

“Internet Explorer 9 contains many new features that give developers many new options for building rich Web applications. From enhanced features like the developer tools or support for more DOM interactivity – Internet Explorer 9 is the browser you’ve been asking for. In this TurboTalk you will learn about these features and how you can take advantage of brand new APIs like HTML5, SVG, and Direct2D Graphics support.”

OPT217: Speeding up Silverlight development using 3rd Party Controls
Wednesday, November 24, 12:30pm to 12:50pm

“Learn how to cut Silverlight development time significantly using your new Telerik RadControls. As a TechDays attendee, you will receive a complimentary license for Telerik’s RadControls for Silverlight. This TurboTalk will demonstrate how you can speed up application development while adding more functionality to your Silverlight applications with the Telerik tools. See how high-performance data controls like RadGridView and RadChart can take your applications to the next level. See how layout controls like RadDocking and RadTileView can add both richness and increased functionality, helping you maximize screen real estate. And see how RadRichTextBox is unlocking Silverlight’s power to enable editing of HTML, DOCX, and XAML content. Jumpstart your development with the RadControls for Silverlight and get the most out of your new tools by joining this developer-to-developer talk.”

Breakout: A Lap Around Visual Studio LightSwitch
Wednesday, November 24, 3:40pm to 4:45pm

“LightSwitch is a newcomer in the Visual Studio suite that allows you to create Silverlight business applications with little or no code. Discover this tool via a demo and be amazed like I am; you will also learn how to customize and extend its capabilities. Wow effect guaranteed!”

The early-bird price has expired, but if you plan to register, contact me: I can give you a 50%-off coupon.

See you there!


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Derrick Harris opined AWS Looks on Pace for That $500M in 2010 in this 10/24/2010 post to GigaOm’s Structure blog:

It was a good week for Amazon Web Services, and one for which the company can take complete and total responsibility. AWS announced a year of free service for new developers, added new features to make its Hadoop-based Elastic MapReduce product more useful, won a contract to provide cloud computing resources for the government’s Apps.gov service and, presumably, comprised an even greater portion of Amazon’s $7.56 billion third quarter.

As I write in my Weekly Update at GigaOM Pro, all this could matter a great deal when it comes to improving Amazon’s bottom line and AWS’s market share.

I think the Apps.gov win and the third-quarter results are intrinsically intertwined. Om wrote in August about the UBS report estimating AWS revenue at $500 million in 2010, out of roughly $900 million labeled “Other” on Amazon’s earnings sheet. In the third quarter, “Other” amounted to $240 million, bringing the to-date 2010 total to $632 million. Assuming UBS’s methodology is accurate, AWS will hit its $500 million mark.

The fourth quarter is here, and the holiday spike in web traffic should mean a lot more money for AWS. Last year, its fourth-quarter “Other” revenue accounted for $231 million of the $653 million annual total for “Other,” and represented a 42 percent increase over third-quarter “Other” revenues. A comparable spike this year would bring fourth-quarter “Other” revenue to $341 million, raising the 2010 total to $973 million. AWS’s portion of that could push toward $550 million.

Looking forward, there’s no indication that UBS took the government contract into consideration when projecting future revenues. The Obama Administration has been pushing cloud computing at every turn, and government contracts are notoriously lucrative. Apps.gov could prove to be a cash cow for AWS.

The other two announcements this week — a year of free AWS usage and adjustable job flows for Elastic MapReduce — underscore AWS’s top position among cloud providers. These announcements are about increasing the AWS user base by courting up-and-coming developer segments (Hadoop, for instance) and driving down prices. Neither are earth-shaking, but I don’t know of other cloud providers able to give away a free year (even if the total cost is minimal) or expend resources on a homegrown Hadoop offering.

Weeks like this remind us just how big AWS is in the world of cloud computing. Competitive providers like GoGrid, Joyent, Rackspace and VMware are advancing fast, as are projects like OpenStack, but they won’t steal IaaS revenue share or developers from AWS without a fight.

Read the full post here.


Todd Hoff posted Paper: Netflix’s Transition to High-Availability Storage Systems to the High Scalability Blog on 10/22/2010:

In an audacious move for such an established property, Netflix is moving its website out of the comfort of its own datacenter and into the wilds of the Amazon cloud. This paper by Netflix's Siddharth “Sid” Anand, Netflix’s Transition to High-Availability Storage Systems, gives a detailed look at this transition and does a deep dive on SimpleDB best practices, focusing especially on techniques useful to those who are making the move from an RDBMS.

Sid is going to give a talk at QCon based on this paper and he would appreciate your feedback. So if you have any comments or thoughts, please comment here, email Sid at r39132@hotmail.com, or reach him on Twitter at @r39132. Here's the introduction from the paper:

Circa late 2008, Netflix had a single data center. This single data center raised a few concerns. As a single-point-of-failure (a.k.a. SPOF), it represented a liability – data center outages meant interruptions to service and negative customer impact. Additionally, with growth in both streaming adoption and subscription levels, Netflix would soon outgrow this data center -- we foresaw an immediate need for more power, better cooling, more space, and more hardware.
One option was to build more data centers. Aside from high upfront costs, this endeavor would likely tie up key engineering resources in data center scale out activities, making them unavailable for new product initiatives. Additionally, we recognized the management of multiple data centers to be a complex task. Building out and managing multiple data centers seemed a risky distraction.

Rather than embarking on this path, we chose a more radical one. We decided to leverage one of the leading IAAS (a.k.a. Infrastructure-As-A-Service) offerings at the time, Amazon Web Services (a.k.a. AWS). With multiple, large data centers already in operation and multiple levels of redundancy in various web services (e.g. S3 and SimpleDB), AWS promised better availability and scalability in a relatively short amount of time.

By migrating various network and back-end operations to a 3rd-party cloud provider, Netflix chose to focus on its core competency: to deliver movies and TV shows.
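As an aside for readers coming from an RDBMS, the SimpleDB programming model that the paper's best practices revolve around is quite bare-bones. Here's a minimal, hypothetical sketch using the boto Python library; the domain, item, and attribute names are invented:

```python
# Hypothetical SimpleDB sketch with boto (not code from the paper).
# Every attribute value is a string, and reads are eventually
# consistent unless you explicitly request otherwise.
import boto

conn = boto.connect_sdb()              # AWS keys from the environment
domain = conn.create_domain('titles')  # 'titles' is a made-up domain

item = domain.new_item('title_001')
item['name'] = 'Example Movie'
item['year'] = '2010'                  # numbers are stored as strings
item.save()

# consistent_read trades a little latency for read-your-writes behavior
rows = domain.select("select * from titles where year = '2010'",
                     consistent_read=True)
for row in rows:
    print(row)
```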

Todd continues with some of the questions he had for Sid, accompanied by Sid's responses.


James Governor reported about VMware CEO [Paul Maritz]: Django, Rails, Open Frameworks, Packaged Apps as Commodity and The New KingMakers in this 10/22/2010 post to RedMonk’s Monkchips blog:

Paul Maritz, CEO of VMware

I was in Copenhagen last week for VMworld Europe 2010. Monday was an analyst briefing, so I wasn’t particularly surprised when VMware CEO Paul Maritz spoke at length about his strategy to attract developers to VMware as a platform. After all, most analysts are curious about what happens next.

But seeing Maritz give the same speech to his core VMware audience the next day was impressive. After all, the traditional VMware customer is very much on the ops side of the house – they are neither line-of-business nor application-development people. These folks are just not that interested in app dev – or, more accurately, they tend to have a pretty adversarial relationship with the development side of the house. Application development means change, and enterprise ops folks don’t like change, because change can break things. So to see Maritz on a tear about application development was impressive.

If he made the same speech at SpringOne this week it wouldn’t be out of place at all. That’s right, SpringOne – in case you missed it, VMware recently acquired SpringSource, the enterprise Java company. Sadly I couldn’t make the show, but I will be following up because I know there was significant news waiting to drop at the event. [My spar @cote captures the big Spring picture in quick-and-dirty fashion here – it’s a very good read.]

Maritz said that other major tech firms were still “consolidating the client/server stack” while VMware wanted to capture a new wave of application development.

“Developers are moving to Django and Rails. Developers like to focus on what’s important to them. Open frameworks are the foundation for new enterprise application development going forward. By and large developers no longer write Windows or Linux apps. Rails developers don’t care about the OS – they’re more interested in data models and how to construct the UI. Those are the things developers are focusing on now. The OS will fade into the background and become one of many pieces. We plan to do the best job of supporting these frameworks.”

Or as he said to the analysts:

“Our goal is to become the home of open source and open framework-based development”.

[contextual digression here: In case you didn’t know, Ruby on Rails, the framework for building web apps invented by David Heinemeier Hansson, continues to be about as popular with web developers as Apple MacBooks, which is to say, very popular indeed. If you want good-looking data-driven apps, Rails is a really good place to start. Frankly, though, hearing Maritz name-check Django was more surprising – the framework bills itself as “The Web framework for perfectionists with deadlines” but is not nearly as well known. There is a wave of content-driven application development building, and Maritz is evidently hip to that. Adobe acquired Day Software, which is playing in that space. This week Alfresco at its developer conference pushed the message that content applications are all about the web, rather than traditional Enterprise Content Management. I met with Eric Barocca of Nuxeo last week and he is extremely excited by the new developer-driven content management apps he is seeing emerge. Nuxeo had originally been positioned as an application – but now it’s very much a platform to sell to architects, rather than slideware purchasers. Eric said his goal is to become a platform to integrate innovation happening in open source content management. So VMware certainly isn’t alone. Maritz is evidently just the highest-profile executive to really grok what’s happening. Two key standards seem to be driving all this content management integration goodness – CMIS and OSGi. I should also strongly credit our client the Apache Software Foundation for providing governance for a lot of this open source code/innovation.] But back to VMware.

Given RedMonk is all about developers, you can imagine I loved the keynote. Though I doubt many people in the audience had a bloody clue what Django is. When @sogrady came online later that day he immediately asked on Twitter:

“Maritz really said that?”

It’s one thing to rubbish the operating system, but what about packaged apps? At the analyst briefing Maritz made it very clear indeed where he thinks competitive advantage lies.

“In the final analysis they [purchasers] are not the people making strategic decisions in the business. Developers have always been at the leading edge, because that’s where business value is generated. Things that don’t differentiate you at a higher level will be SaaS apps – which will also be purchased at a higher level. The differentiated stuff you have to do yourself, and that means software development.”

In other words Maritz has pretty much the same core thesis as RedMonk: developers are not an overhead – they are the new Kingmakers. I have to say I was pretty stunned. I still am. After years of being an outlier, RedMonk now has chief executives on side. And let’s look at the folks Maritz has hired. Tod Nielsen is on the management team – he is the guy that built up the Microsoft Developer Network (MSDN) – you might have heard of it. Maritz himself is ex-Microsoft, and clearly knows developers. Mark Lucovsky is another ex-Microsoft guy in the fold – he understands what makes a successful API as well as anyone on the planet. Apparently back-room enforcer Charles Fitzgerald is also in the mix.

That VMware can hire from Microsoft is not surprising. But poaching outstanding talent from Google shows the level of ambition, aggression and resources that VMware will throw into becoming a leader in application development, not just operations.

Meanwhile the SpringSource assets are already a great basis for a solid developer story. Rod Johnson, who runs that part of the business for Maritz, is as smart a strategist as he is a technologist – and he is a scary-good technologist. With Spring he made enterprise Java development far less painful. He is a purist about developer flow, which is why he outsourced development of the Spring IDE to Mik Kersten of Tasktop. Kersten’s company also contracted with Spring to develop the new “Cloud ALM” platform announced this week. Spring Roo is a Rails-influenced environment for building Java apps, while Spring is also home to Groovy, the dynamic language. Another intriguing developer play in the Spring arsenal comes in the shape of RabbitMQ – a lightweight, scalable message queue system popping up all over the place. Developers like it, and messaging is going to transform the web into a more event-driven, transactional model. I list these technologies because you may not know about VMware’s assets in the space. (Please check out further coverage of VMware SpringSource here, here and here.)
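To make that RabbitMQ mention concrete, here is a minimal, hypothetical publish example using the pika Python client; the queue name and message payload are invented:

```python
# Minimal RabbitMQ publish sketch with pika. The broker is assumed
# to be running locally; 'events' and the payload are made up.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='events')   # idempotent declaration

# Fire-and-forget: the producer never blocks waiting on a consumer,
# which is what makes event-driven designs like this attractive.
channel.basic_publish(exchange='', routing_key='events', body='user.signup')
connection.close()
```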

You can’t buy competitive advantage, but you can build it. That’s VMware’s line. I’d love to see Nicholas “IT Doesn’t Matter” Carr and Maritz in a debate.

SpringSource is a client, Microsoft is a client. Google is not a client. Alfresco is not a client (but I should really sort that out).

The photo of Maritz, capturing something of his brooding nature, is by Robert Scoble. Please click on the link to see more of his flickr portfolio.


Michael Coté (@cote) posted SpringOne 2010 – the shod[d]y trip-report – Quick Analysis to the Enterprise Irregulars blog on 10/21/2010:

Rod and Mik present

Too much travel turns your brain into pudding. It’s like being hung over, except you didn’t have that nice experience of being drunk. As I work towards a more serious write-up of SpringOne, here are some quick notes à la Voltaire.

Earlier this week, I was at VMware’s SpringOne conference, covering announcements and new work from their SpringSource division. They launched an integrated, cloud-based software development suite of tools, several technology partnerships with Google, and started to outline new integration needs from the social, mobile, and database world.

(For an excellent, detailed summary, see Darryl Taft’s write-up of day one and then day two.)

Code2Cloud

The release of Code2Cloud is the most interesting announcement. Working with TaskTop – mostly TaskTop, it seems – VMware (or “SpringSource,” as I’ll call them) has put together a good-looking approach to doing cloud-based ALM. They of course want to move away from the idea of “ALM” (good counsel, and one of those Three Letter Acronyms o’ Death that our own James Governor has told people in the past to avoid) but you’ll pardon me using a known quantity as a crutch to talk about “all that stuff other than the IDE and compiler that you use to get software out the door and manage the project.”

Since leaving the programming world, I’ve been envious of teams who could use hosted services like Rally and VersionOne, along with the host of other, well, hosted tools. Those tools tend to manage the artifacts of an Agile process: tracking the “stories” and features that should be built, what phase of the development cycle they fit in (the “iteration”), who’s working on the item, and how far along it is toward completion.

In addition to that issue tracker, there’s version control and your build system. Both of those have been ripe for moving into the cloud, and the synchronized, web-style use git allows (meaning: you can pile up changes locally and even use the tool offline) takes care of the unreliable-cloud problem most people would probably carp on.

(Image: “Send moar minis,” from Christopher Blizzard)

Cloud-based builds are an area that many people, and stealthy startups, have been interested in over the past few months. It makes sense: builds are processor intensive and, as such, a perfect match for the “bursty,” elastic functionality a cloud would provide. Also, as one of Sun’s hosted projects was shooting for, cloud-based builds open up all sorts of platform testing and compiling options: maybe it’s easier to compile to some weird HP-UX version if you just rent that node as part of your build cloud.

The tough nut here will be convincing developers that this system is as open as TaskTop and SpringSource would tell you it is. While SpringSource obviously wants you to use their stack top to bottom, Code2Cloud purports to be an open and interoperable pile of components. Thus, even if you weren’t using Spring, you could use it. One Spring user I talked with said he liked the setup, but didn’t use Spring’s tools so he wouldn’t be able to use it. And, besides, he said, he already had all that ALM stuff set up. That’s the kind of quick perception to get over by showing, not marketing, as it were. We’ll see once it comes out, sometime next year they say.

I have a short video interview with TaskTop on Code2Cloud that should be up soon. Also check out Israel Gat’s take: as someone who’s interested in optimizing the development process from the perspective of management, he’s especially interested in the ALM-we-shall-not-call-ALM innovation here.

Google Partnership

Also of interest were the three integrations that Google and SpringSource announced.

Sorting out Google’s angle on the developer front can be a freaky walk down memory lane. At the end of the day, the reason they do most things is “to make the web a better place.” The more people that use and enjoy the web, the more Google ads they see, the more clicks there will be, and the more money is made. Layer in the collection of data that allows Google to better do that targeting and convince advertisers that they’ve finally solved Wanamaker’s 50 percent waste rule of advertising, and you’re set.

When a strange lady dumps billions of dollars on your desk each quarter, you don’t worry too much about direct revenue-producing products on a quarterly basis. Hence the feeling, all too often, that Google isn’t operating under the same strategic pressures as other companies.

That’s all to say, when you look at partnerships Google does, you can’t always look at them through the same lens you would other companies. You almost have to take on that cute, starry-eyed attitude most Googlers have: we just thought it was a good idea and Larry and Sergey agreed!

To the Google/SpringSource announcement, then. The first two items – integrating Roo and GWT – can be taken at face value as just a “good idea.” GWT has been successful, and it’s certainly one of the (Java) UI toolkits I hear about people using frequently.

On Spring Insight

Integrating Spring Insight and Google Speed Tracer is half “just a good idea,” but it also ties up with Google’s enterprise cloud strategy, AppEngine…


John Treadway (@CloudBzz) concluded HP Cloud Strategy? Not So Much… in this 10/22/2010 post:

At Interop this week I met with Doug Oathout, VP of Converged Infrastructure at HP. It’s often been very frustrating trying to figure out if HP really has a cloud strategy and is poised to compete in this market. While nobody would claim that HP is delivering any clarity on cloud right now, it sounds like they might be moving down the path a bit and a more comprehensive strategy might someday emerge.

What Doug talked about first was the economic value of a converged infrastructure (naturally). In this regard they are positioning against Cisco and the broader VCE Coalition, with particular emphasis on openness vs. the more prescriptive VCE approach (any hypervisor vs. VMware only, automation tooling that crosses into legacy environments, etc.). Cisco might say that the downside of supporting that level of openness is complexity and increased cost. We’ll let them duke that out, but it’s clear that a market that used to be fragmented (storage, servers, networking, etc. sold by different vendors and integrated at the customer) has tilted towards more integrated and verticalized infrastructures that result in far fewer components and much less work to deploy. I had to wonder if there was an opportunity for someone to do the same thing with commodity gear targeting the mass-market service provider space.

As for cloud offerings, there seem to be only three at the moment (at least that I was able to learn about in this meeting).

The first is private clouds built from their Matrix converged infrastructure and Cloud Service Automation (CSA) tools bundle (an integrated set of Opsware and other tools). I guess I’d characterize this as IBM’s CloudBurst circa 2009 and Unisys’ Secure Private Cloud, but with a weaker story on cloudy capabilities such as support for multi-tenancy, scaling out and more. It’s the “cloud-in-a-box” approach.

Their second cloud offering is a quick-start service (“CloudStart”) to roll out a simple “cloud in a box” solution on customer premises in 30 days. Obviously that’s kind of a bunch of hype, because the process changes, integrations, etc. that you need in order to really drive value out of an enterprise cloud program take many months of deep effort.

Their third area is not really a defined offering.  They are doing services around some other cloud technologies, most notably Eucalyptus.  This is natural given the deficiencies in cloud functionality with their CSA-based approach.

Notably absent are any offerings out of their former EDS managed services unit. Doug mentioned a Matrix Online offering for standing up short-term infrastructure blocks for testing purposes, but it’s not a cloud, isn’t even multi-tenant, and requires HP labor to do the provisioning. Like I said, not a cloud (if it even exists – I can’t find it on the HP site).

Meanwhile, it seems like IBM is not putting as much emphasis on the CloudBurst approach anymore, instead focusing on their Smart Business Development & Test public cloud offering.  Sources tell me that this offering is doing quite well and several months ago there were tweets about them having run out of capacity.  HP currently has no such offering.

The takeaway for me was that HP is inching forward in a couple of areas of its business, but making no discernible progress on delivering a comprehensive, aligned and compelling enterprise cloud story to the market. Looks like we’ll be waiting a bit longer…

I’m underwhelmed, too. I wonder what happened to HP’s bigtime partnership with Microsoft for private cloud computing?


Mary Jo Foley (@maryjofoley) asked Amazon offers free entry-level Web services pricing. What will Microsoft do? in a 10/21/2010 post to ZDNet’s All About Microsoft blog:

On October 21, Amazon.com announced a new, free entry-level tier for new Amazon Web Services (AWS) customers.

Is Microsoft going to retaliate, I wondered. Perhaps.

Amazon announced that new users will be able to run a free Amazon EC2 instance for a year, “while also leveraging a new free usage tier for Amazon S3, Amazon Elastic Block Store, Amazon Elastic Load Balancing, and AWS data transfer.” The company is calling the new offering its free usage tier.
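For developers who want to kick the tires, launching the free-tier-eligible micro instance is only a few lines of Python with boto; a hedged sketch (the AMI id is a placeholder, and your AWS credentials are assumed to be set in the environment):

```python
# Sketch: launch the free-tier-eligible t1.micro with boto.
# 'ami-XXXXXXXX' is a placeholder, not a real image id.
import boto

conn = boto.connect_ec2()   # reads AWS credentials from the environment
reservation = conn.run_instances('ami-XXXXXXXX',
                                 instance_type='t1.micro',  # free-tier size
                                 min_count=1, max_count=1)
print(reservation.instances[0].id)
```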

In January 2010, Microsoft asked developers for feedback on what they wanted to see in Windows Azure. The current and potential Azure developers spoke: Lower pricing.

The No. 1 suggestion was: “Make it less expensive to run my very small service on Windows Azure.” The No. 2 suggestion, in terms of votes, was also pricing-related: continue to offer Azure free to developers.

Microsoft subsequently created introductory offers for its Azure cloud services to entice new users to kick the tires of its cloud offerings.

I asked Microsoft officials whether they had additional plans to counter Amazon’s new offer. A spokesperson sent the following response:

“It’s important to provide developers everywhere with broad access to try out the Windows Azure platform and begin to explore the kinds of innovative applications they can build. Stay tuned for more updates at PDC.”

Microsoft’s PDC 2010, or Professional Developers Conference, is happening next week — on October 28 and 29 — in Redmond, Wash. Microsoft is focusing primarily on Windows Azure, Windows Phone 7 and the future of programming languages, according to the session list for the conference, which Microsoft posted this week.


<Return to section navigation list> 
