
Cracking the cloud code


The cloud is one of those technology trends that seems to be perpetually on the cusp of becoming ubiquitous. But if recent analyst reports are any indication, cloud’s breakthrough moment is imminent. Late last year, Gartner predicted that in 2016, the bulk of new IT spend would shift to the public cloud, and that by the end of 2017, nearly half of all enterprises would have hybrid cloud deployments.

But if cloud has been around for so long, why is it taking this long to become the dominant destination for IT spend?

Psychology vs Technology

The determinant for most change is the underlying psychology that drives individuals and organizations. The IT industry as a whole has been underpinned by a deep-seated need for control. The reason most companies keep expertise in-house is that they want to maintain control: over their data, over the integration with their business workflows, over their schedules, and over their spend.

Of course, control is under constant attack from cost. While traffic is booming, IT spend in most organizations continues to trend flat to down. This means that organizations must constantly provide more compute, more storage, and faster interconnect while working with an increasingly less favorable ratio of admins to devices.

IT leaders looking to evolve their infrastructure for the business are left with an unenviable choice: do I give up control and move to the cloud, or do I operate under cash duress while trying to deliver an on-premises solution?

Hosting and colocation as middle ground

Talk of hybrid cloud usually refers to hybrid environments where application workloads execute both within a company-owned datacenter and on public cloud infrastructure. But there are other options between the private datacenter and the public cloud.

Hosting and colocation service providers allow companies to own their own equipment, place it at a hosting site, and use hosting infrastructure for connectivity between sites and to the Internet. This model grants control of data to the IT organization while leveraging infrastructure and high-speed connectivity that exists at the colocation facility.

As colocation providers build out their own cloud infrastructure, companies can also burst workloads onto datacenter infrastructure that is physically adjacent to their hosted servers. For applications that are particularly sensitive to performance, data locality matters, and this adjacency provides a more consistent way to elastically scale resources during periods of high load.
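
To make the burst idea concrete, here is a minimal sketch of a placement policy that keeps steady-state load on a company's own hosted gear and spills overflow into capacity the colocation provider operates in the same facility. The pool names, sizes, and policy are illustrative assumptions, not anything a specific provider prescribes.

```python
# Hypothetical burst policy: prefer owned, hosted servers and spill excess
# into adjacent capacity operated by the colocation provider. All names
# and numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class CapacityPool:
    name: str
    total_slots: int
    used_slots: int = 0

    def free(self) -> int:
        return self.total_slots - self.used_slots


def place_workload(slots_needed: int, owned: CapacityPool, adjacent: CapacityPool) -> str:
    """Prefer owned gear; burst into the adjacent pool only when owned is full."""
    if owned.free() >= slots_needed:
        owned.used_slots += slots_needed
        return owned.name
    if adjacent.free() >= slots_needed:
        adjacent.used_slots += slots_needed
        return adjacent.name
    raise RuntimeError("no capacity available in either pool")


if __name__ == "__main__":
    owned = CapacityPool("owned-cage", total_slots=100, used_slots=95)
    adjacent = CapacityPool("colo-burst-pool", total_slots=400)
    # With only 5 free slots on owned gear, a 20-slot job bursts next door,
    # keeping data local to the facility instead of crossing a WAN link.
    print(place_workload(20, owned, adjacent))  # -> colo-burst-pool
```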

[As a starting point for getting familiar with the benefits of colocation services, join the MEF, Iron Mountain, Lumos Networks, and Plexxi as we walk through a case study of a large federal agency that has leveraged an ecosystem approach to IT. At 2pm EDT on September 17, we will examine how bandwidth, SDN, and colocation facilities come together. Register here.]

Implications of cloud workloads on hosting providers

Hosting providers are already charged with delivering high-capacity, low-latency connectivity. But as business continuity and data proximity become more important to users, hosting providers also have to extend their presence to multiple physical locations. And while the idea of geographical expansion might seem simple, offering connectivity between sites is non-trivial.

The physical fiber infrastructure can be expensive in and of itself. And providing multiple connections via different entry points to a facility is not always as easy as PowerPoint and simple network diagrams might suggest. Beyond that, extending a Layer-2 domain across physical distance requires rethinking network infrastructure. Where a WAN gateway might previously have sufficed, hosting providers also have to consider how best to support tenant applications across physical distances.
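
The post doesn't name a mechanism for that Layer-2 extension, but one common approach is a VXLAN-style overlay (RFC 7348), which wraps tenant Ethernet frames in UDP so they can cross routed distance between facilities. The toy sketch below shows only the encapsulation step; the header layout and UDP port are from the RFC, but the snippet is an illustration, not a working tunnel endpoint.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN (RFC 7348)


def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header so a tenant Ethernet frame can ride
    a UDP tunnel between sites. The 24-bit VNI keeps tenants separated."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' bit set: the VNI field is valid
    # Header layout: flags (1 byte), 3 reserved bytes, VNI (3 bytes), 1 reserved byte
    header = struct.pack("!B3xI", flags, vni << 8)
    return header + inner_frame


# Example: tag a (fake) 64-byte Ethernet frame with VNI 5001 before it is
# sent over UDP to the remote site's tunnel endpoint.
packet = vxlan_encapsulate(b"\x00" * 64, vni=5001)
assert len(packet) == 8 + 64
```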

What the MEF has to say about it all

The Metro Ethernet Forum explicitly calls out the shift from a WAN paradigm to a more cloud-centric model. Along with that shift, it outlines changes in the fundamental services that will make up next-generation cloud offerings.
[MEF figure illustrating the shift described above]

Ultimately, the MEF sees a move to dynamic, assured services. Dynamic requires interfaces for customer input, and assured means much tighter alignment around different treatment for different applications and tenants. Physical infrastructure that used to be platform- and tenant-agnostic will need to become more aware and more capable if the new generation of services is to be truly meaningful.
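
Neither the MEF figure nor this post spells out what such a service interface looks like, but as a purely hypothetical sketch, a "dynamic, assured" request would carry both the customer's inputs and the guarantees the infrastructure has to honor. Every field name below is an illustrative assumption, not an actual MEF service definition.

```python
# Hypothetical shape of a dynamic, assured service request: customer-driven
# inputs (dynamism) plus per-application commitments (assurance).
from dataclasses import dataclass


@dataclass
class AssuredServiceRequest:
    tenant: str                 # who is asking
    application_class: str      # e.g. "voice", "replication", "bulk-backup"
    bandwidth_mbps: int         # requested capacity, adjustable on demand
    max_latency_ms: float       # treatment the provider commits to
    start: str                  # when the service should turn up (ISO 8601)
    duration_hours: int         # dynamic services can be short-lived


request = AssuredServiceRequest(
    tenant="acme-corp",
    application_class="replication",
    bandwidth_mbps=2000,
    max_latency_ms=10.0,
    start="2015-09-17T14:00:00-04:00",
    duration_hours=6,
)
```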

Where do you start?

As a hosting provider, or even as a customer looking to take advantage of the newer architectures available to you, where do you actually start? As with anything, it starts with education. Minimally, you need to begin instrumenting your current environment to get a feel for where your operating expenses currently lie. You will also want to assess your own application infrastructure to determine how conducive it is to cloud deployments. Finally, consider how your applications are evolving, where your user base is, what your requirements for business continuity are, and how you expect to drive cost (both capital and operational) over a 3-5 year time horizon.
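
To make that last point concrete, here is a back-of-the-envelope sketch of comparing cumulative capital and operating cost across deployment models over a 3-5 year horizon. All of the figures are placeholder assumptions to be replaced with numbers from your own environment.

```python
# Back-of-the-envelope cost model over a multi-year horizon. All figures
# are placeholder assumptions; substitute data from your own instrumentation.
def cumulative_cost(capex: float, opex_per_year: float, years: int) -> float:
    """Total spend: up-front capital plus recurring operating cost."""
    return capex + opex_per_year * years


scenarios = {
    # name: (up-front capital, annual operating cost) -- illustrative only
    "on-premises":  (500_000, 120_000),  # buy gear, power/cool/staff it yourself
    "colocation":   (350_000, 150_000),  # buy gear, rent space, power, connectivity
    "public cloud": (0,       260_000),  # no capital outlay, pay as you go
}

for years in (3, 5):
    print(f"--- {years}-year horizon ---")
    for name, (capex, opex) in scenarios.items():
        print(f"{name:>12}: ${cumulative_cost(capex, opex, years):,.0f}")
```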

[Today’s fun fact: Wild camels once roamed Arizona’s deserts.]


