
Cisco Cloud Fundamentals

Knowing Cisco Cloud Fundamentals is a great path to furthering your knowledge of the cloud ecosystem. We all know and love Cisco as a provider of enterprise-level networking hardware and software. Cisco has long offered a large range of switches and routers to meet the demands of everything from small and mid-size businesses all the way up to the core services of multinational communications providers. Cisco is now also in the game of providing servers, virtualization, and cloud services. We'll take a look at all of those Cisco cloud fundamentals now.


Storage Networking Fundamentals

There are many options for storing information in a data center environment. We have things like DAS, NAS, SAN, and more, and each of these has its own suite of protocols that makes the technology work. DAS stands for direct attached storage, where the storage is attached directly to a compute node. This differs from NAS, or network attached storage, which is separated from the compute node by a network. Finally, we have the SAN, or storage area network, where a network is built and dedicated to nothing but high-volume, high-throughput data storage.

The technologies you might see in use for these various storage environments include:

  • DAS makes use of SCSI, which stands for Small Computer System Interface.
  • NAS might make use of CIFS (Common Internet File System), also known as SMB (Server Message Block), or NFS.
  • SAN makes use of iSCSI (Internet Small Computer System Interface) or FC (Fibre Channel).

Block vs File-Based Storage

You'll sometimes hear the terms block and file-based storage and wonder what they mean. Block storage is typically found in a SAN and makes use of things like SCSI and Fibre Channel. The benefit of block storage is that it is lightning fast and highly available. One drawback, however, is that it is fairly complex, with more administrative overhead. File-based storage, on the other hand, makes use of CIFS and NFS. CIFS is found in Windows environments, while NFS, or network file system, is more geared towards UNIX and Linux installations. These are things you would likely find in network-attached storage scenarios. Access is a bit slower than block storage, but it is much simpler to set up and administer.

Across all of these technologies there is a common relationship between the devices that need access to storage resources and the devices that actually hold those resources. We refer to these as the Initiator and the Target, as the short sketch after this list illustrates.

  • Initiator – An initiator is the device that needs access to resources.
  • Target – The storage array is referred to as the target.
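As a minimal sketch of that relationship (not tied to any particular product), you could model it like this in Python; the IQN-style names and portal address are made-up examples:

```python
from dataclasses import dataclass

# Toy model only; the IQN-style names and the portal address below are made up.

@dataclass
class Initiator:
    iqn: str            # identity of the host that needs the storage

@dataclass
class Target:
    iqn: str            # identity of the storage array
    portal: str         # IP:port the initiator logs in to (3260 is the iSCSI default)

host = Initiator(iqn="iqn.2024-01.com.example:host01")
array = Target(iqn="iqn.2024-01.com.example:array01", portal="192.0.2.10:3260")

print(f"{host.iqn} (initiator) logs in to {array.iqn} (target) at {array.portal}")
```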

NAS Storage Fundamentals

Network Attached Storage
Consider a scenario where some workstations need a large amount of storage, so much that a directly attached storage configuration would not be feasible. If you don't have the resources to build out a powerful, complex, and expensive SAN, then network attached storage might be the way to go. In a typical NAS deployment, application servers access the storage available on NAS appliances over the network. NAS makes use of CIFS and NFS, which do have a bit of latency and chatter associated with them. Current iterations of NAS, however, have minimized these caveats, and in a scenario where you need things like Microsoft Office, Sharepoint, and generalized file and print services, a NAS makes perfect sense. One more quick tip for NAS: be careful making use of the no_root_squash option. It allows an NFS share to be mounted and written to as the root user, for example by a hypervisor, which can be a security risk.
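As a rough illustration of how a server might reach that NAS storage, here is a minimal Python sketch that mounts a hypothetical NFS export on a Linux host; the NAS hostname, export path, and mount point are placeholders:

```python
import pathlib
import subprocess

# Hypothetical NAS hostname, export path, and mount point; adjust for your environment.
NAS_EXPORT = "nas01.example.com:/exports/projects"
MOUNT_POINT = "/mnt/projects"

# Create the mount point and mount the NFS share (requires root privileges
# and the NFS client utilities installed on the host).
pathlib.Path(MOUNT_POINT).mkdir(parents=True, exist_ok=True)
subprocess.run(["mount", "-t", "nfs", NAS_EXPORT, MOUNT_POINT], check=True)

# Whether root on this client can write to the share as root is controlled on
# the server side: the no_root_squash export option permits it, which is the
# security risk mentioned above.
```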

Thick vs Thin Storage Provisioning

One thing to keep in mind when provisioning storage on a NAS is thick vs thin provisioning. The two differ in the way resources are allocated. If you set aside one terabyte of storage for a particular purpose as a thick provision, this is a hard-set value. If the consumer of that resource is only making use of 20% of that terabyte, this is wasteful; the remaining 80% sits around as a wasted resource that others cannot access. A better use of resources would be to thin provision the storage, which behaves in an elastic manner. When you thin provision that same one terabyte, the consumer still has access to the full terabyte, but only on an as-needed basis. If only 10% is being used, the other 90% remains available to the rest of the data center.
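To make the arithmetic concrete, here is a small back-of-the-envelope sketch of the one terabyte example; the pool size is an arbitrary assumption:

```python
# Rough illustration of thick vs thin provisioning using the 1 TB example above.
POOL_TB = 10.0          # total capacity of a hypothetical storage pool
PROVISIONED_TB = 1.0    # what we promise the consumer
USED_TB = 0.2           # what the consumer actually writes (20%)

# Thick: the full 1 TB is carved out of the pool up front.
thick_free = POOL_TB - PROVISIONED_TB

# Thin: only the written blocks are consumed; the rest stays in the pool.
thin_free = POOL_TB - USED_TB

print(f"Pool space left with thick provisioning: {thick_free:.1f} TB")
print(f"Pool space left with thin provisioning:  {thin_free:.1f} TB")
```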


SAN Storage Fundamentals

Storage Area Networking
Storage Area Networks sit at the pinnacle of data access. DAS and NAS work well in a large percentage of installs, but when you need more, a SAN is implemented as a fabric topology, often via 10 Gigabit per second Fibre Channel over Ethernet (FCoE) technology. It is faster, more scalable, lower in latency, more robust, more complex, and also generally more expensive than DAS or NAS technologies.

This Fabric Interconnect style of switching shares some analogies with the LAN world. In a LAN, devices use MAC addresses to uniquely identify themselves on the network. In a SAN, it is the WWPN, or World Wide Port Name, which gets assigned to a port in the Fibre Channel fabric. Just as a LAN can use Virtual LANs (VLANs) to partition resources in the network, a SAN can make use of a VSAN, a virtual storage area network that provides a similar type of partitioning within the SAN. In addition to using VSANs, you can be more granular with zoning within the VSAN. By making use of zoning, you can set up a single initiator to a single target, a single initiator to multiple targets, or multiple initiators to multiple targets. Within the Fibre Channel network, frames are routed by a 24-bit field called the FCID. Also of note is that FCoE has a specific initialization protocol to adhere to, which includes VLAN discovery, FCF discovery, and fabric login (FLOGI/FDISC).

Another method of partitioning a storage resource is LUN masking. A LUN, or logical unit number, identifies the logical piece of storage an initiator can reach, and a LUN may span multiple physical disks. Note that a SCSI LUN or a Cinder volume must be attached to an instance as a block device before any file system commands can be run against it. In addition to block- and file-based storage, we also have object-based storage such as OpenStack Swift, offering functionality similar to what you might find with Amazon S3.
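As a toy illustration of zoning and LUN masking (the WWPNs and LUN IDs below are made up, and in a real fabric this is configured on the switches and arrays rather than in Python), the mappings amount to simple lookups:

```python
# Toy model of zoning and LUN masking; the WWPNs and LUN IDs are fabricated examples.

# Zoning: which initiator WWPNs may talk to which target WWPNs in the fabric.
zones = {
    "zone_esx01_array01": {
        "initiators": ["20:00:00:25:b5:aa:00:01"],   # single initiator
        "targets":    ["50:06:01:60:3e:a0:12:34"],   # single target
    },
}

# LUN masking: which LUNs each initiator is allowed to see on the array.
lun_masks = {
    "20:00:00:25:b5:aa:00:01": [0, 1, 5],
}

def visible_luns(initiator_wwpn: str) -> list[int]:
    """Return the LUN IDs the given initiator is permitted to access."""
    return lun_masks.get(initiator_wwpn, [])

print(visible_luns("20:00:00:25:b5:aa:00:01"))   # -> [0, 1, 5]
```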


The InterCloud

In cloud computing, we are now faced with many different flavors of the cloud. Of course, we have the Public Cloud, the Private Cloud, the Community Cloud, as well as the Hybrid Cloud. The community cloud may be new to you; it refers to a cloud built for a specific use case and shared by many companies. Of all of these, the Hybrid Cloud is really winning mind share with the information technology departments of the biggest companies in the world. Businesses want the benefits of the cloud, but also need to be aware of security and intellectual property considerations. In a hybrid environment, a company may choose to make use of a private cloud for some aspects of the business, a public cloud for others, and a community cloud for yet other business objectives. This can be a slow, complex process with real administrative challenges. Cisco offers a solution called InterCloud to tackle these challenges head-on, offering self-service for hybrid resources and secure connectivity between public and private clouds.

There are three different components of the InterCloud: the Fabric Director, the Fabric Provider Platform, and the Fabric Secure Cloud Extension. These components work together to allow a cloud deployment model that takes advantage of interoperability between different cloud providers. Instead of having disparate pieces of software and platforms to administer, one can use this solution from Cisco as a single point of management for a hybrid cloud deployment model. It is an effort to provide consistency, compliance, control, and choice for the customer. InterCloud acts as middleware to enable seamless communication between private, public, and community deployments.


Server Virtualization

In the past, the only way we could scale servers was to pack more RAM and CPUs into the existing hardware available to us. This is known as scaling up. When you ran out of room to maximize any one machine further, you started adding more physical machines. This is known as scaling out. Over time, this process would repeat itself, leading to server sprawl, inefficient use of physical space, large cooling bills, and large electricity bills. All of this changed with the miracle of the hypervisor. The hypervisor is a piece of software typically installed on a compute node. The hypervisor does some really cool things, like masking server resources and abstracting away the number and types of physical servers from the end users. You are likely familiar with VMware's ESXi platform, Linux KVM, and Microsoft Hyper-V. These are all popular hypervisors that you would likely find in any data center today.

Type-1 Hypervisor

Type-1 is the preferred method for server virtualization. Think of a Type-1 hypervisor as a bare-metal approach, meaning the software sits directly on top of the hardware. There is no intermediate operating system in place. VMware's ESXi platform, for example, runs directly on the host hardware to control and manage the guest operating systems as a Type-1 hypervisor.

Type-2 Hypervisor

A Type-2 hypervisor sits on top of an operating system rather than the bare metal. This is a popular approach on a workstation where you would like access to another type of operating system or application for various reasons. Think Oracle VirtualBox, VMware Fusion, and VMware Workstation.

Server Virtualization Benefits

  • Hardware Consolidation
  • Uniform Resource Pools
  • Simplified Resource Sharing
  • Utilization Optimization
  • Best use of physical hardware
  • Less Expensive

Live Migration: Another amazing feature of server virtualization is the ability to transfer a running virtual machine between hosts. While the transfer takes place, I/O and CPU calls are queued as the CPU and memory state are moved, so there is no interruption to the clients. In addition, when you have multiple virtual machines on the same physical hardware, they can make use of SR-IOV, which allows a single PCI Express device to present multiple virtual functions that can be allocated to individual guest machines.
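What that looks like in practice depends on the hypervisor. As a rough sketch, a live migration between two KVM hosts using the libvirt Python bindings might look something like the following; the host URIs and guest name are placeholders:

```python
import libvirt

# Hypothetical source and destination KVM hosts.
SRC_URI = "qemu+ssh://host-a.example.com/system"
DST_URI = "qemu+ssh://host-b.example.com/system"

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

dom = src.lookupByName("web01")        # the running guest we want to move

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied across.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```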


Network Virtualization

Not only has server virtualization really exploded in popularity, but entire networks are now getting virtualized as well. SDN, or software-defined networking, is taking over the data center. If you think about it, we really need virtual networking in order to make server virtualization possible. Virtual machines have the concept of a vNIC, or virtual network interface card. Physical servers, of course, need to be plugged into a physical switch to get network connectivity. In the virtual world, our virtual machines also need to "virtually" plug their vNIC into a virtual switch. I know, it is a bit confusing!

With multiple virtual machines scattered across many different physical servers, how can they communicate with one another? They do so by way of a DVS, or Distributed Virtual Switch. An example of this type of switch is the Nexus 1000V, created by Cisco and VMware. The goal of the 1000V is to provide a consistent interface you will be familiar with, as it shares a similar feel with other NX-OS-based switches. The 1000V makes use of ARP (Address Resolution Protocol), CDP (Cisco Discovery Protocol), vPath for flow redirection in vService interaction, and high-speed vPC (virtual Port Channeling). You can even talk to remote data centers over Layer 2 with OTV, or Overlay Transport Virtualization. An advantage that a DVS has over a standard vSwitch is that it supports private VLANs. Port configurations make use of atomic inheritance to provide a consistent configuration among all ports in a profile.


VXLAN

You may be familiar with a VLAN in the Layer 2 world. In the virtualized switching world, we have VXLAN, and it's like a VLAN on steroids. With standard VLANs, you have the ability to use up to 4096 virtual LANs. When VXLAN was created, the engineers went all out: it allows up to 16,777,216 different segments! You'll never have a need for this many, but it is a nice data point to know. VXLAN, therefore, gives us an increased VLAN address space, multi-tenancy, and connectivity across disparate data centers. If you are familiar with general IP and data networking, you'll know that we get those 4096 VLANs because the 802.1Q tag in an Ethernet frame carries a 12-bit field specifying the VLAN to which the frame belongs; 2 to the power of 12 gives us 4096. The VXLAN identifier space is 24 bits, so 2 to the power of 24 is how we get that 16-million-plus number. A VLAN identifier gets stuffed right into the Ethernet frame, whereas the VXLAN identifier rides in an outer header, since VXLAN uses the Internet Protocol as the transport medium and encapsulates the original frame in a UDP packet.
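To make the numbers and the encapsulation concrete, here is a short Python sketch that does the ID-space arithmetic and packs a bare-bones VXLAN header; the VNI value is an arbitrary example:

```python
import struct

# The ID-space arithmetic from the paragraph above.
print(2 ** 12)     # 4096 possible VLAN IDs
print(2 ** 24)     # 16,777,216 possible VXLAN Network Identifiers (VNIs)

# A bare-bones 8-byte VXLAN header: an "I" flag marking a valid VNI,
# reserved bits, and the 24-bit VNI itself.
def vxlan_header(vni: int) -> bytes:
    flags_word = 0x08 << 24            # I flag set, remaining bits reserved
    return struct.pack("!II", flags_word, vni << 8)

VXLAN_UDP_PORT = 4789                  # IANA-assigned VXLAN destination port
header = vxlan_header(vni=5000)        # 5000 is an arbitrary example VNI
print(header.hex(), f"({len(header)} bytes, carried inside UDP/IP)")
```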


UCS Unified Computing System

This brings us to the very heart of Cisco Cloud Fundamentals, and that would be the Unified Computing System platform. The UCS platform is a highly converged infrastructure that aims to take all of your compute and network resources and combine them under one piece of hardware. It is the beginning of a hyper-converged infrastructure. In less advanced data centers, there may be various collections of physical servers that are labeled as "compute". Beyond that, there is all manner of physical networking infrastructure that would need to be managed by a different group. UCS aims to combine these separate silos of functionality and give administrators total control and orchestration of the unified system. UCS is powerful. It has a 10 Gigabit unified network fabric, virtualization optimization, unified management, service profiles, and many flexible I/O options. And this is just the start.


UCS Fabric Interconnect

The Fabric Interconnects are really the soul of the UCS platform. You would typically have two of them for redundancy, and they are the brains and the beauty. All management capabilities live in the Fabric Interconnects. UCS Manager, which offers an XML API, is a part of the fabric interconnects. The fabric interconnect is an aggregation layer for all interfaces in the system. This means that all downstream servers lead back to the fabric interconnect. All components of the UCS system must communicate via the fabric interconnect.
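As a rough sketch of what talking to that XML API looks like, the login call is typically an HTTP POST of a small XML document to the /nuova endpoint at the Fabric Interconnect cluster address; the hostname and credentials below are placeholders, and certificate verification is skipped only for brevity:

```python
import requests

# Placeholder Fabric Interconnect cluster address and credentials.
UCSM_URL = "https://ucs-fi.example.com/nuova"

login_xml = '<aaaLogin inName="admin" inPassword="password" />'
resp = requests.post(UCSM_URL, data=login_xml, verify=False)  # lab sketch: no cert check

# A successful reply carries an outCookie attribute that is passed on
# subsequent requests and released later with aaaLogout.
print(resp.status_code)
print(resp.text)
```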


UCS IO Modules

I/O Modules are also known as FEXes, or Fabric Extenders. They sit on the back of a UCS chassis and provide connectivity between the blades populated in the front of the chassis and the Fabric Interconnects.


UCS Interface Adapters

Interface adapters are located inside the various blades, and they get traffic in and out of the blade. Cisco offers what is called a VIC, or Virtual Interface Card, which allows one to create dynamic interfaces. These are identified as vNICs and vHBAs in UCS. It is a way of making one physical card present multiple virtual interfaces.


UCS Management

UCS Manager is a graphical user interface written in Java that you can use to manage all aspects of your Cisco UCS system. If you need a multi-domain Cisco UCS environment, you would make use of something called Cisco UCS Central, but that is much less common in the field. By default, you will have the role of Server Equipment Administrator, which provides read and write access to physical server operations and read-only access to other parts of the system. Most configuration is handled via service profiles, which contain hardware identifiers, firmware, state, configuration, and connectivity characteristics. If a GUI is not your thing, you can still manage UCS from the command line by connecting via SSH to the cluster IP address of the UCS Fabric Interconnects. Another option is Cisco UCS Director, which works well for a single FlexPod. A good tip when managing the Fabric Interconnects is to leave the switching mode alone, as changing it is very disruptive to the system.
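As a quick sketch of scripting that SSH path with Python and Paramiko, something like the following would run a read-only command against the UCS Manager CLI; the cluster IP and credentials are placeholders, and the exact commands available vary by UCS version:

```python
import paramiko

# Placeholder cluster IP of the Fabric Interconnects and credentials.
HOST, USER, PASSWORD = "192.0.2.50", "admin", "password"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab convenience only
client.connect(HOST, username=USER, password=PASSWORD)

# Run a read-only command against the UCS Manager CLI.
stdin, stdout, stderr = client.exec_command("show version")
print(stdout.read().decode())

client.close()
```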


Cisco Cloud Fundamentals Summary

Wow, there sure is a lot to cover in the fundamentals of the cloud in a Cisco environment. Hopefully, this overview has been useful for you. We covered a lot, including Storage Networking Fundamentals, DAS, NAS, SAN, Block vs File-Based Storage, NAS Storage Fundamentals, Thick vs Thin Storage Provisioning, SAN Storage Fundamentals, InterCloud, Server Virtualization, Type 1 and Type 2 Hypervisors, Server Virtualization Benefits, Network Virtualization, and Cisco's answer to converged infrastructure, the Cisco Unified Computing System.
