Introduction to Cloud Computing

In this article, we take an introductory look at cloud computing concepts: the fundamentals we need to know to be successful in an increasingly virtualized and cloud-connected world. Many cloud platforms offer freely available open-source toolkits built specifically for cloud computing. An application programming interface (API) makes it possible to control pools of resources such as compute, storage, and networking running on commodity hardware. These APIs are usually RESTful, so we can use the familiar HTTP verbs for the various API calls. Let's take a closer look now.

Cloud Computing vs Virtualization

We’ll start by taking a look at the similarities and differences between cloud computing and virtualization. Back in the stone ages of tower computers in the data center, we had a workflow that might resemble something like this. A user would request access to a new application. The administrator would then provision an actual server, a piece of hardware, to support that application. Now maybe this user is also a developer, and he or she is going to need a separate development environment to work in. Ok, we’ll go ahead and provision another server for you.
[Image: traditional servers]
Maybe you also need to consider disaster recovery and backup. Guess what, yep – another server! This leads to a bit of a sprawling expanse of servers which becomes difficult to maintain. Unsightly tower computers would be cluttering up the data center as this process moved on. This was inefficient, but it is how things used to be done as we had no other choice.

As this process continued, we eventually made a move to what people refer to as a blade environment. So what is a blade? Well, if you are familiar with live music or you are a musician, you might be familiar with the idea of putting your processing equipment in a rack. The same exact idea applies here in data center computing. What we did was to take traditional tower computers and stick them into a rectangular shape that could be rack mounted. Each blade would typically take up one or two rack spaces and would offer a more converged solution in offering computing power to end-users.
[Image: rack-mounted servers]
At this point, when a developer needed a new playground to play in, we would deploy a build environment on one of our collection of rack servers to meet this need. We could also create a disaster recovery scenario on the next available blade server. This provided for a much greater density and availability of servers. At this point, we could fit maybe 20 servers in a rack-mounted unit. This was good, but we still hadn’t begun to virtualize things yet.

Enter Virtualization

Moving further along, we needed to make better use of the blade-based computing rack. Before, one operating system was installed per physical server, and we would run whatever applications we needed on it. It was the introduction of the hypervisor that changed all of this. Once the hypervisor came into play, it was the software that ran just above the bare metal. On top of the hypervisor, one could run an almost limitless number of virtual servers, or virtual machines, to support whatever our imagination could come up with. The hypervisor would be something like VMware, Xen, KVM, or Hyper-V.
[Image: introduction to virtualization]
The hypervisor layer made it possible to virtualize entire machines and provide a pool of computing resources to end-users. From there, a user would make a request of DevOps or IT for a particular resource. In the old days, a physical server would need to be deployed to support this request. Maybe the IT technician didn't even have one available right away, so the user would have to wait. This was not ideal. With the hypervisor in place, our DevOps person could now log into the virtualization platform and "spin up" an instance of a new server. As if by magic, "Poof!", here you go: a brand new Ubuntu server. Want another instance for testing or disaster recovery? No problem, spin up more servers in seconds. Maybe you need a Windows Server for testing. Check. Want to test the ability to scale? Sure thing, spin up another instance and configure load balancing. At this point, we can see the hypervisor reduced time to market for users, decreased dependency on physical hardware, and generally made things much easier. This is virtualization, and it has progressed even further than this.
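To make the idea concrete, here is a minimal sketch in Python of a hypervisor-style resource pool that "spins up" virtual machines against a fixed supply of physical CPU and memory. All of the names here are illustrative assumptions, not a real hypervisor API; real platforms like KVM or Hyper-V are far more sophisticated.

```python
# A toy model of a hypervisor allocating VMs from one physical host.
# Names (ResourcePool, spin_up, etc.) are illustrative, not a real API.

class ResourcePool:
    def __init__(self, cpus, ram_gb):
        self.free_cpus = cpus      # physical cores still available
        self.free_ram = ram_gb     # physical RAM still available (GB)
        self.vms = {}              # name -> (cpus, ram_gb)

    def spin_up(self, name, cpus, ram_gb):
        """Allocate a VM if the host has capacity; return True on success."""
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            return False           # host is full; in the old days: buy a server
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms[name] = (cpus, ram_gb)
        return True

    def tear_down(self, name):
        """Destroy a VM and return its resources to the pool."""
        cpus, ram_gb = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_ram += ram_gb

host = ResourcePool(cpus=16, ram_gb=64)
host.spin_up("ubuntu-dev", cpus=4, ram_gb=8)
host.spin_up("windows-test", cpus=4, ram_gb=16)
```

The point is simply that provisioning becomes a bookkeeping operation measured in seconds, rather than a hardware purchase measured in weeks.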

Welcome To The Cloud

The cloud means many different things to many different people. At the end of the day, however, a cloud can be distilled down to three properties. A cloud should be…

  • On Demand

  • Elastic

  • Self Service

Let’s examine some of these properties as they relate to the cloud.

On Demand: The on-demand characteristic of a cloud means it can provide service almost immediately at the request of a user or endpoint. You don’t put in a ticket with the help desk and hope for feedback within two business days. On-demand means now, and this is exactly what most good cloud providers offer.

Elastic: When we say elastic, what this means is that the resources required to support your service must be able to grow and shrink with ease. Of course, we know Amazon Web Services uses the term Elastic to describe this very feature of their cloud offerings.

Self Service: A cloud service is typically self service. This means anyone with a network connection can log into a dashboard or admin panel and deploy resources at will, with no need for a third party to intervene. There should be no need for any physical deployment, and no need to painstakingly build out infrastructure. You just select your service from a catalog of offerings and hit "go".
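As a sketch of what "elastic" means in practice, here is a small Python function that grows and shrinks a fleet of instances based on observed load. The function name, target, and thresholds are made-up assumptions for illustration; real autoscalers, such as those in AWS or Kubernetes, use much richer policies.

```python
# Toy autoscaling policy: keep average CPU utilization near a target.
# All names and thresholds are illustrative, not a real cloud API.

import math

def desired_instances(current, avg_cpu_percent, target_percent=60,
                      min_instances=1, max_instances=20):
    """Scale the fleet so per-instance load approaches the target."""
    if current == 0:
        return min_instances
    # Total work = current * avg load; spread it across enough
    # instances that each one sits near the target utilization.
    needed = math.ceil(current * avg_cpu_percent / target_percent)
    return max(min_instances, min(max_instances, needed))

# Load spikes: 4 instances at 90% CPU -> grow the fleet to 6.
# Load drops: 6 instances at 10% CPU -> shrink back down to 1.
```

Elasticity is exactly this grow-and-shrink loop, run continuously by the provider so that you pay for six instances only during the spike.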

OnPrem vs XaaS Offerings

[Image: on-prem computing stack]
The most common type of on-prem computing stack may look something like the diagram here. This is currently very common in most data centers, and will likely continue to be so even as the cloud expands, since many businesses want a hybrid or private cloud solution. Rather than place 100% of their data assets into the cloud, some commodity functions can be placed in the cloud, while highly guarded intellectual property may continue to be hosted on-prem. We can see we have a networking layer, a storage layer, and a server layer making up the physical components of the infrastructure. Moving higher up the stack, we have our hypervisor (the real meat of this operation), a guest operating system, various middleware, runtime environments, and an application layer. To be fair, there is much more than this happening in a modern data center, but this gets the idea across.

If you work for a large corporation, this is likely the type of on-prem suite of hardware and software that your IT department manages. Need a resource from this infrastructure? Put in a ticket. Then be prepared to wait, perhaps a long time! With this type of model, you are dependent on a third party for service. There is no self-service involved here. So while your IT folks are certainly making good use of virtualization, you are not getting the benefits of the cloud.

Cloud Computing Services Offerings

This brings us to the idea of an as-a-service offering. When we talk about a service in a cloud computing environment, we can think of it in terms of these traits:

  • Used to describe the control layers in the computing stack

  • Underlying services are abstracted

  • End users do not need to worry about lower level layers

  • A higher focus is placed on providing agnostic service layers

Infrastructure as a Service

[Image: Infrastructure as a Service diagram]

This diagram shows a good approximation of what Infrastructure-as-a-Service entails. In the traditional IT services stack we looked at earlier, the IT department had complete control of the entire infrastructure, all the way up to the application. The end-user had no part in any of the layers. IaaS takes this model and abstracts away some of the lower-level layers so you don't even need to think about them. It is simply a given that they are there, they are available, and they will work for you. The cloud offering allows the end-user to focus on the higher layers, which is what they care most about. Infrastructure as a Service is the most popular type of cloud computing available. The big players offering this type of service include Microsoft Azure, Amazon Web Services, Rackspace, Google Cloud Platform, vCloud Air, IBM SoftLayer, and many others.

We can see that the diagram for IaaS vs on-prem is a bit different. The lower layers such as virtualization, servers, storage, and networking have been abstracted away by the IaaS provider for us. We can now refer to this batch of layers as service-provider controlled. Above the service provider layers is where the consumer comes into play. It is the consumer that can now provide the operating system they would like to use, any middleware they might need, a specific runtime like Java or PHP, data stores, and ultimately the application layer. This puts an enormous amount of power in the hands of an end-user. Consider this: an IaaS service like Amazon Web Services (AWS) hosts websites as humble as a mom-and-pop WordPress blog, all the way up to massively scaled applications that support literally millions of dollars in revenue per year. All in the cloud! A small startup has no need to go out and purchase physical servers, operating systems, networking equipment, or even office space for that matter! Bare metal is no longer a requirement; it has been abstracted away. In addition, the end-user can start with extremely minimal services, and then scale on demand to however much processing, storage, and compute power they may need in the future. IaaS provides the following.

  • Abstraction of hardware layers

  • End users commission guest resources

  • Eliminate need to purchase physical hardware

  • On demand bi directional scaling

  • Pay as you go
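To illustrate the pay-as-you-go trait, here is a minimal Python sketch that bills only for the instance-hours actually consumed. The instance types and hourly rates are invented example numbers, not any provider's real pricing.

```python
# Toy pay-as-you-go billing: charge per instance-hour actually consumed.
# HOURLY_RATES are invented example prices, not real cloud pricing.

HOURLY_RATES = {"small": 0.02, "medium": 0.08, "large": 0.32}

def monthly_bill(usage):
    """usage: list of (instance_type, hours_run) tuples for the month."""
    total = 0.0
    for instance_type, hours in usage:
        total += HOURLY_RATES[instance_type] * hours
    return round(total, 2)

# A startup that scales out only for a 3-day traffic spike:
usage = [
    ("small", 720),    # one small instance, all month (24 * 30)
    ("large", 72),     # one large instance, 3 days of peak load only
]
print(monthly_bill(usage))
```

Contrast this with on-prem, where the large server would have to be purchased up front and would then sit idle for the other 27 days of the month.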

Platform as a Service

[Image: Platform as a Service diagram]
Moving further along in this introduction to cloud computing, we will talk about PaaS, or Platform as a Service. PaaS is similar to IaaS, but it offers an even higher level of abstraction than the IaaS paradigm does. Let's consider an end-user who is a PHP developer and wants to build an application for the cloud. In the IaaS paradigm, you would need to provision your own virtual machine and operating system. Perhaps you would deploy an Ubuntu Linux machine with an Nginx web server, a MySQL database, and some other data store technologies. You are just the developer, however; you want to write your code and forget about all of those steps. Configuring a full LEMP or LAMP stack by hand is a lot of work with a lot of dials to turn. If the end-user is not up to this task, one may consider going the route of Platform as a Service. In this scenario, the service provider moves further up the stack and handles even more lower-layer resources for you. With the operating system, middleware, and runtime all configured for you, all you have to do is write and publish your application. It is an even easier method of launching your own service or application, but expect to pay a slight premium over traditional IaaS offerings since more of the work is done for you. Examples of this type of offering include Pagoda Box, Engine Yard, Fortrabbit, Heroku, Deis, Cloud Foundry, CloudBees, AppHarbor, and many more.
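On a PaaS, "write and publish your application" really can mean shipping a single file. Here is a sketch in Python using the standard library's WSGI interface (a PHP developer's equivalent would be a lone index.php); the surrounding server, operating system, and runtime are all the platform's responsibility, not yours.

```python
# A complete application, as a PaaS might expect it: just the app code.
# The platform supplies the OS, web server, and runtime around it.

def app(environ, start_response):
    """Minimal WSGI application; everything below this is the platform's job."""
    body = b"Hello from the cloud!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Locally you could serve it yourself for testing; on a PaaS you just
# push this file and the platform wires it to its own web server:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Notice there is no Nginx configuration, no MySQL tuning, and no OS patching in sight; that is exactly the abstraction you are paying the PaaS premium for.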

More Types of Clouds

You have most likely heard of the terms public cloud, private cloud, and hybrid cloud. Let’s talk a little bit about these types of clouds.

Public Cloud

These are the clouds we are all most familiar with, since this is the most common type of offering. Typically what this means is that, as end-users, we simply make requests into the service provider's cloud via the public internet, and we get access to the resources it serves up. That is the public cloud.

Private Cloud

A private cloud works much the same way as a public cloud, but the difference is that the hardware that makes it all possible is OnPrem, or on-premises. This is what you will commonly find today in large corporations that had networks and applications long before IaaS and the public cloud even existed. A private cloud might work much the same way as Amazon Web Services does, albeit built and managed by a private company for a specific set of users.

Hybrid Cloud

Finally, we come to the hybrid cloud, which is rapidly gaining popularity. The hybrid cloud makes use of the characteristics of both the public and private cloud paradigms in order to best meet the needs of an organization. There are of course pros and cons to both IaaS public cloud services and private OnPrem cloud services. Demanding clients in the technology industry are now getting the best of both worlds by using aspects of the public cloud where appropriate and aspects of the private cloud for other use cases. One use case for a hybrid cloud is the ability to move resources between the public and private clouds as needed. While developing an application during the test phase, one might make use of a public cloud. When the application is ready for production, and security and safety are of utmost concern, that resource can be migrated from the public cloud to the private domain with relative ease in a hybrid environment.

Cloud Application Programming Interfaces

The last topic we'll cover in this introduction to cloud computing is the idea of cloud computing application programming interfaces. The modern approach for implementing these is REST, which stands for Representational State Transfer. REST makes use of HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. By using this approach, users can send API requests over the public internet via these common HTTP verbs and command resources in the public cloud. In addition, since these are standards-based mechanisms, it doesn't matter what language you are familiar with. You could write API requests in PHP, Ruby, Python, C++, Erlang, Node, or any other language you like. The API is simply going to respond to the HTTP verbs. These verbs are enough to allow users to create resources, read information about existing resources, update resources, or delete resources. This idea of CRUD works much like a database-driven application, where a user can Create, Read, Update, and Delete records in a database. The difference is that in cloud computing we might be creating an abstract pool of storage resources. In another case, we might be reading the existing properties of a compute resource, or updating a network resource. Just make sure not to accidentally delete any resources!
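The verb-to-CRUD mapping can be sketched in plain Python with an in-memory resource store standing in for the provider. The paths, resource shapes, and dispatcher here are entirely hypothetical; a real cloud API speaks authenticated HTTPS, but the mapping from verbs to operations is the same.

```python
# A toy, in-memory model of how a RESTful cloud API maps HTTP verbs to CRUD.
# Paths and resource shapes are hypothetical, not a real provider's API.

resources = {}   # path -> resource attributes
next_id = [1]    # simple counter for newly created resource paths

def handle(verb, path, body=None):
    """Dispatch an HTTP-style request against the in-memory resource store."""
    if verb == "POST":                       # Create
        path = f"{path}/{next_id[0]}"
        next_id[0] += 1
        resources[path] = dict(body or {})
        return 201, path
    if verb == "GET":                        # Read
        return (200, resources[path]) if path in resources else (404, None)
    if verb == "PATCH":                      # Update (partial)
        resources[path].update(body)
        return 200, resources[path]
    if verb == "DELETE":                     # Delete -- be careful!
        resources.pop(path, None)
        return 204, None

status, server = handle("POST", "/servers", {"image": "ubuntu", "size": "small"})
handle("PATCH", server, {"size": "large"})   # resize the instance in place
```

The same four verbs that power a database-backed web app here create a server, resize it, and could just as easily tear it down with a single DELETE.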