Cloud500 List – Countdown Started!



New platforms for managing cloud benchmarks are being announced with ever more features. Performance is becoming a major force of innovation across enterprises and Infrastructure as a Service (IaaS) providers of all sizes.

The Cloud500 initiative is interested in developing metrics that enable comparability among these platforms.

The goal of this project is simple and straightforward: rank the 500 most powerful commercially available Infrastructure as a Service (IaaS) cloud providers in the world and report the findings in a unified, measurable, ‘apples to apples’ comparison.

The CloudPak Test Suite supports three main test categories: System, RAM, and CPU. The test suite includes the following benchmarks:

UnixBench: provides a basic indicator of the performance of a Unix-like system by running multiple tests that exercise various aspects of system performance.

Geekbench: a CPU integer test suite.

ApacheBench: measures how many requests per second a given Apache server can sustain when carrying out 700,000 requests, with 100 requests running concurrently.
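The exact Cloud500 scoring formula is not described here, so as a hedged sketch only: one plausible way to fold the three category scores (System, RAM, CPU) into a single ‘apples to apples’ index is a geometric mean, which is the same aggregation UnixBench uses for its own final index. The category names and sample scores below are hypothetical.

```python
import math

def composite_score(category_scores):
    """Combine per-category benchmark scores into one index using a
    geometric mean, so no single category dominates the ranking."""
    values = list(category_scores.values())
    return math.prod(values) ** (1.0 / len(values))

# Hypothetical scores for one provider across the three categories.
scores = {"system": 1200.0, "ram": 800.0, "cpu": 1500.0}
print(round(composite_score(scores), 1))  # → 1129.2
```

A geometric mean rewards balanced performance: doubling one category while halving another leaves the index unchanged, which is usually what you want in a cross-provider comparison.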

The full list of the tests will be available on the Cloud500 website shortly. The Cloud Advisory Council will select cloud providers to run the tests.

The inaugural list will be released at a cloud event in September 2014. Details about the announcement will follow in the coming months. Real-time data will include a breakdown based on the small, medium, and large packages offered by each of the profiled cloud providers.

To join the project and receive more information, please fill out and submit the form. Please note all fields are required.

Healthcare revolutionized by cloud computing


Typically when we talk about healthcare IT, the concerns are privacy and security – two of the major parameters of HIPAA compliance. However, data isn’t just important to protect. The way data is measured, stored, and accessed can be a game changer for various medical issues and millions of patients.

Cloud computing has special applications for the field of medicine. Let’s look at how cloud computing is revolutionizing various aspects of healthcare.

Broad implications for medical research

Cloud computing is making it simpler and more cost-effective to find new treatments for diseases. Because of the way the cloud is structured, with extensive resources available as needed, it’s optimized for crunching large pools of numbers to process and understand big data.

Dr. Michael Cunningham, a physician in Seattle, has experienced the significance of cloud computing in targeting various conditions. His particular point of focus is craniosynostosis, a health problem experienced by young children in which the skull bones fuse prematurely, while the brain is still growing rapidly. In patients with the disease, the brain is trapped inside a tight spot, which can lead to a host of additional medical difficulties.

Dr. Cunningham and others in his field have been convinced that with the disease, the cells of the bone are not transferring messages to each other properly. Better data, they realized, would mean a big step forward for those searching for a cure. With cloud computing, they found it. Using a massive pool of cloud-based data, the study authors were able to determine commonalities between certain individuals experiencing the disease. The cost and time-consuming nature of the task prior to cloud computing may have meant that the study would never have taken place.

Mobile apps for preventative health & monitoring

Dr. Eric Topol recently reviewed a number of types of health-related mobile apps that have become possible in the age of the cloud. Smartphone apps that provide a monitoring or preventative function cover a broad spectrum.

Simple tasks can be completed through mobile apps, such as a cardiogram or checks of blood pressure and blood sugar levels. Those capabilities are impressive, but they only scratch the surface of what is available. One app, for example, monitors to determine whether a person might have sleep apnea, while another checks posture and your ratio of standing versus sitting.

A few of the major healthcare monitoring apps are as follows:

• Cardiogram: ECG Check and AliveCor both detect potential incidents of arrhythmia.

• Glucose: Withings and iHealth are two choices for monitoring blood sugar via a fingerstick, so that a separate glucose reading device is unnecessary.

• Sleep apnea: An app made by Masimo uses an extension that connects to your finger to check your oxygen level and heart rate throughout the night, supporting both a preliminary sleep apnea diagnosis and continued monitoring.

Much more is to come, of course. One example is an app called iBG that will get rid of the need for fingersticks for blood sugar monitoring. The FDA is currently reviewing that app to determine if it meets regulatory guidelines. The government’s role in cloud-based mobile healthcare is noteworthy (as discussed below).

Another app that’s particularly interesting and not yet available on the consumer market is one that makes microfluidics possible with an extension piece. Essentially consumers would be able to conduct various types of tests that until now have been conducted in labs. It’s not yet possible to do a complete DNA analysis, but it is possible to check the genotype to reveal possible sensitivities to medications. Function tests of various organs, such as the kidneys and liver, are also possible using the system.

Various apps are being developed that target mental/emotional disorders, help to prevent asthma attacks, and more. The list goes on as this market continues to expand.

FDA now involved with mobile app review

In the case of healthcare, cloud-based applications aren’t just exciting. They could actually help save lives. However, the FDA oversees medical procedures and devices, a category that now includes mobile apps. Final guidance specific to apps was released in September.

The FDA will not be checking every medical application but only those that it sees as potentially dangerous to the public; the codes of the Federal Food, Drug, and Cosmetic Act will not be enforced across the board. Instead, the FDA is focused specifically on apps that do one of two things to a smartphone:

• allow it to operate as a medical tool, such as some of the cardiogram apps;

• allow it to operate as an extension to a medical tool, such as diagnostic imaging through a cloud-based picture archiving and communication system (PACS).

Certainly the oversight by the FDA will slow down the rate at which apps are released onto the market, but it should make it less likely that a poorly made system gets into the hands of consumers.


The cloud is certainly changing what’s possible, allowing for better health and greater freedom for those who need to routinely monitor their levels. Whether you have any of these health conditions or not, you may be interested in testing out some of the apps listed above if you work for a healthcare company. It’s another great way to integrate the business of healthcare with the digital revolution.


By Brett Haines of HIPAA-compliant hosting provider Atlantic.Net

OpenStack cloud creates Best of Breed 2.0


Once upon a time, Unix vendors called themselves “Open Systems” to establish free choice as a buying criterion. “Best of Breed” was what they often called it: why buy everything from one outfit whose innovation had been subverted by inbreeding, when you could pick what you liked from different specialty vendors?

Sadly, “Best of Breed” often turned out to be science fiction at deployment time. Some incompatibilities were purely the product of seemingly infinite untestable combinations. And some vendors jockeyed for advantage by locking out their competitors. That monopolization worked for Intel, and it seems to be working for Apple (for now). Everyone else? Not so much.

IT departments swelled with specialists in different technologies; fiefdoms emerged, some complete with moats and turrets.  To this day, IT departments are cluttered with incompatibilities and stovepipes. The headaches are often big enough to rekindle the appetite for monolithic solutions.

Linux restored some respect to “open”. But the dominant open source business model, “open core”, created a two-class system: one for developers, and one for people who needed the software to work.

OpenStack open source cloud changes that, for two reasons. First, collaboration is not just about “naked code.” Common repositories, IRC chats, public reviews, and the fixed semi-annual release model create structure and rigor. The toolsets for check-in and continuous integration/continuous deployment (CI/CD) streamline testing and accelerate the find/fix cycle by orders of magnitude.

The second reason is the nature of the cloud itself. Developer productivity gains, rapid release cycles, and operational efficiencies create tremendous growth potential — but only if the constituent components of networking, compute, security, storage, and applications can be made to work together. Increasingly, code that runs in production is close, if not identical, to what the developers work on.

IT vendors recognize that harnessing that potential means coming to terms with the lock-in based incompatibilities that have choked the life out of IT organizations everywhere. Cloud can change that, because it provides a compelling new outlet for innovation in enterprise technology.

The critical difference, this time around, is that OpenStack can provide the common, transparent platform through which diverse IT vendors can expose innovation to enterprise customers. Free of old lock-ins or monopolies, OpenStack forces the players to work together on behalf of customer value. And working together means customers really can choose from best of breed.

Cloud, HPC And Open Technologies Converge To Fuel Research, Innovation


Recognizing the potential role of cloud-based open computing technologies for the research community, a group of 30 key stakeholders and decision-makers from academia and industry got together last week to share their views on how open computing solutions can best support existing and emerging use cases in a range of research disciplines and high performance workloads. The event was hosted by Argonne National Labs and jointly sponsored by Notre Dame, Internet2, and Rackspace, with attendees representing 20 organizations, including participants from top research institutes, major research universities, and key industry partners that provide technology supporting the research community.

The group discussed how big data and high-performance computing can introduce new challenges and new frustrations. Say you’re an academic or researcher who needs time on a supercomputer; oftentimes you’ll have to wait months to get approved, and even then you likely only get a limited window. So if something with your software is not working at that time, you’re out of luck.

The cloud changes the computing equation and redefines the experience and service by adding on-demand, utility and self-service capabilities to computing infrastructure. The cloud is quickly evolving into a premium model for scientific computation and big data, and the face of high-performance computing is changing faster than ever. We’ve seen the change happen over the past couple of years, as open technologies like the Open Compute Project and OpenStack, in particular, democratize access to mass commodity hardware and software.  Now, top research institutes such as CERN, Argonne National Laboratory, Notre Dame, University of Texas at San Antonio and MIT have chosen to build their high-performance clouds on OpenStack. By embracing open standards and collaboration, university researchers are at the forefront of innovation and contribute to a shared purpose that benefits everyone.

The event organizers – Narayan Desai (Argonne National Lab), Paul Brenner (Notre Dame University), Khalil Yazdi (Internet2), and Paul Rad (Rackspace) – welcomed participants and laid out a vision and an agenda for the workshop.

In the afternoon sessions, they shared lessons learned and presented the findings and gaps that point the way forward for compute- and data-intensive applications.

At the end of the session, the community identified two immediate incubation projects (with several other possible projects noted):

  • Data Reachback for Cloud Bursting Scientific Applications (such as high energy physics), led by Notre Dame, Internet2, Rackspace, UTSA, MIT, and Cycle Computing
  • Big Data scale-out storage architecture, led by Argonne, University of Chicago, and Nimbus Services

The teams are planning to develop blueprints, detailed service descriptions, and plans for a continuing collaborative effort, and to identify regular communication channels for these projects. They will likely get back together at Supercomputing 2013 and WCSC 2013 in San Antonio, Texas.

Link to full blog:

Paul Rad is the Cloud Advisory Council Open Cloud High Performance Application Group Chair

Cloud Advisory Council One-Year Anniversary: Mission & Overview


Posted on August 14, 2013 by Atlantic.Net


In this article, as the Cloud Advisory Council approaches its one-year anniversary, we will explore the organization’s role in cloud computing. Essentially, it is a group of hosting, datacenter, and other web infrastructure and high-tech organizations and professionals working together to make cloud computing as predictable and secure as possible (similar to the Cloud Security Alliance). In this blog, we will look at what the organization is, along with its membership, published content, events, and a sample issue targeted by the CAC.

Continue reading here


Bringing an Open Source Cloud to Enterprise IT Specialists


As most IT specialists are only too aware, the enterprise IT world is changing at a pace that is almost unprecedented. From security to mobile to BYOD, IT managers are facing a blizzard of challenges that were barely on the roadmaps of most CIOs even five years ago. Of all the changes occurring in the enterprise, a very good case could be made that the rapid and widespread adoption of cloud computing is the biggest change of all.

One of the factors contributing to the acceleration of cloud adoption is another well-known IT change accelerator: open source. OpenStack, the open source cloud operating system, was pioneered by Rackspace and NASA, when they set out to build a uniform set of service-based interfaces that would manage and control key compute resources in the cloud – namely processing, storage, networking, and authentication, with integrated management. In other words, Infrastructure as a Service, or IaaS.

Now, IaaS is not new. Amazon Web Services (AWS) set out to do the same thing that OpenStack has done since, but with one catch. As a single company in control of implementation, AWS Elastic Compute Cloud (EC2) pioneered great work at scale that legitimized the cloud model and empowered developers as equal partners in infrastructure and distributed apps. They also succeeded in quickening the pulse at VMware, arguably the other essential predecessor to OpenStack. But they did it within a closed architecture that they controlled themselves, top to bottom and end to end.

The most compelling difference from either of these predecessors is that accelerator: OpenStack set out to do it in open source. The reason I say so is this: code transparency in the increasingly complex compute world isn’t a convenience, it’s a must-have. There’s just no other practical way to cover the myriad use cases posed by the endless variety of application workloads, nor is there a more practical way to get technology innovators – commercial, government, and academic – to collaborate on robust, peer-reviewed, meritocracy-driven code. What’s more, if you have a use case no one has thought of yet, you can take the code and bend it to your will. You don’t have to wait for some vendor to account for your needs someday in their roadmap.

When you combine that code transparency with a uniform set of interfaces for allocating and controlling resources in the cloud, you get a powerful, versatile, and very extensible platform. Spinning up virtual machines, allocating storage, deploying images, and controlling network and security access all come from a single set of common, well-publicized interfaces. Compatibility is policed by market players with an interest in making the technology interoperate. Unlike the standards of the past, those standards are not passive engineering exercises: they are working code that needs to march through a pretty rigorous set of regressions for every change, dozens of changes per day. (Of course, even the regression test gating framework, called Zuul, is also open source.)
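To make the “single set of common interfaces” idea concrete, here is a toy, stdlib-only Python sketch of the pattern. It is emphatically not the real OpenStack API; the names (ComputeDriver, FakeKvmDriver, spin_up) are invented for illustration. The point is only that callers code against one common interface while each backend supplies its own implementation.

```python
from abc import ABC, abstractmethod

class ComputeDriver(ABC):
    """One common provisioning interface; each cloud backend
    implements it, so callers never touch vendor-specific APIs."""

    @abstractmethod
    def spin_up(self, image: str, flavor: str) -> str:
        """Boot a VM and return its identifier."""

class FakeKvmDriver(ComputeDriver):
    """A stand-in backend that just fabricates VM identifiers."""

    def __init__(self):
        self._count = 0

    def spin_up(self, image, flavor):
        self._count += 1
        return f"kvm-{image}-{flavor}-{self._count}"

def launch_fleet(driver: ComputeDriver, n: int):
    # The caller depends only on the common interface, never on
    # which concrete backend is plugged in underneath.
    return [driver.spin_up("ubuntu", "m1.small") for _ in range(n)]

print(launch_fleet(FakeKvmDriver(), 2))
```

Swapping in a different driver changes nothing for the caller, which is the whole appeal of a uniform, vendor-neutral interface.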

Before OpenStack (and Amazon, to be clear), spinning up machines and storage, networking them, and managing them remotely required a menagerie of specific, often incompatible interfaces from different companies. It led to tremendous waste, both in the amount of cycles required to manage the menagerie and in poor utilization. VMware made a huge contribution by enabling consolidation to improve utilization – more OSes, fewer machines.

But the most painful inefficiency was that change was slow and painful, and IT organizations built up huge backlogs of dated applications. VMware’s consolidation efficiencies did little to address that. By the time developers and system administrators could agree on how to tune a box and its app, everyone was exhausted, and the last thing anyone wanted to do was change it. Who wants to re-install Oracle and build out its data storage any more often than is absolutely necessary?

The Infrastructure-as-a-Service (IaaS) model pioneered by Amazon, and open-sourced by OpenStack, changed all that. It enabled developers to control distributed infrastructure resources directly. Developers rapidly grasped the benefits of truly distributed applications, and built out tremendous innovations, at companies such as PayPal, Webex, Netflix, SalesForce and more. These scale-out applications change rapidly, and treat infrastructure as a fluid resource. Contrast this with the classic approach, where each server is a special, custom-crafted creature, like a pet. It becomes the application’s whole world, and admins give the server a name, stay up late nights when it becomes sick, trick it out with accessories. In the cloud, apps treat your servers like cattle: give ‘em numbers, and if they get sick, shoot ‘em and eat ‘em. One board member of the OpenStack Foundation, Randy Bias, describes this as the ‘pets-vs.-cattle‘ analogy.

However, running on Amazon means you get none of the benefits of optimizing your infrastructure to your business needs – which means utilization, compliance, and not least, competitive advantage. The API set for EC2, and all the permutations of the EC2 platform, are controlled by Amazon. Need something that’s not on their road map? Too bad. Got internal systems – such as ACLs/directory, etc. – that you need to integrate with new distributed apps running in the cloud? Only if Amazon thought of that first. Need to tune your application down to the metal iteratively, with no clear way forward other than test and reconfigure? Not so much.

In a nutshell, working with Amazon is like paying rent on an apartment. Sure, it’s a convenient way to start, and you can call someone else to fix the faucet – unless you can and need to fix it yourself. You give up control. With OpenStack, you get full ownership and transparency from top to bottom, and you can change your cloud to your competitive advantage.

The Foundation for OpenStack

The OpenStack Foundation itself was originally incubated by Rackspace and was formed upon the introduction of the code from the original joint project with NASA on July 19, 2010. The code was contributed to the OpenStack Foundation, which incubated it and opened for business in the fall of 2012, with the support of a couple hundred member companies.

There’s a three-part governance structure. The Technical Committee is comprised of 13 peer-elected leads who set the direction of the software development and code. The Foundation Board of Directors is made up of 24 members, including the 8 platinum sponsor companies, 8 of the 13 gold sponsor companies, and 8 individual members elected at large from among the users. The third leg of governance is the User Committee, which represents nearly 6,000 users worldwide. Mirantis is a Gold Sponsor and an elected member of the Board of Directors.

What’s noteworthy about the 200 companies that are members of the foundation is that they represent a pretty complete cross-section of everyone who has a stake in the future of cloud computing. IBM is a Platinum Sponsor, as are the top three Linux distributors – Red Hat, Canonical, and SUSE – plus Nebula, Rackspace, HP, and AT&T. Add corporate sponsorships from Cisco, Dell, Ericsson, VMware, Juniper, and Yahoo, and the total comes to about 200 supporting organizations – a pretty impressive collection of logos.

The logo collection isn’t what makes OpenStack work; rather, it’s the release and contribution model. OpenStack is not a single project like Linux; it’s a set of projects under a common framework. What’s more, the projects release together on a single, integrated timeline, every six months, avoiding a lot of the churn you see in other open source communities. There’s a design summit every six months to hammer out the roadmap for the next release. The most recent complete release is called “Grizzly”; in the fall, we’ll see the “Havana” release come out.

The best way to gauge the health of the project is to look at who’s contributing and at the activity level. Mirantis recently released a dashboard that shows who’s contributed what to which OpenStack project over a given timeline, both by individuals and by companies.

Getting Started with OpenStack

Step one should be intuitive. First, get smart about OpenStack. You’ll ask better questions and you’ll be better able to get help from the community resources. To that end, the first thing you really must do is check out the OpenStack site. Pick a project that’s of interest to you, and drill down. Another way is to watch the videos from the recent OpenStack summit.

Most importantly, whether you are a supplier to the cloud or a user of it, remain focused on the fact that the cloud is not an end in itself. It’s about driving agility for competitive advantage. If you don’t know why your organization needs the cloud, that’s the best place to start.

David M. Fishman is the Cloud Advisory Council Co-Chairman for Open Source Eco-system.