Along with storage and networking, compute is one of the key foundational building blocks of the cloud computing infrastructure layer. In this article, which is aimed at those who are new to cloud and computing in general, I discuss the basic concepts you need to understand to get started with compute on AWS.
Fundamentally, the term “compute” refers to physical servers comprising the processing, memory, and storage required to run an operating system such as Microsoft Windows or Linux, along with some virtualized networking capability.
The components of a compute server include the following:
- Processor or Central Processing Unit (CPU) – the CPU is the brain of the computer and carries out the instructions of computer programs
- Memory or Random Access Memory (RAM) – memory is very high-speed storage, implemented on integrated circuit chips, that holds the data a computer is actively working on
- Storage – the storage location for the operating system files (and optionally data). This is typically a local disk inside the computer or a network disk attached using a block protocol such as iSCSI
- Network – physical network interface cards (NICs) to support connectivity with other servers
When used in cloud computing, the operating system software that is installed directly on the server is generally a hypervisor that provides a hardware abstraction layer onto which additional operating systems can be run as virtual machines (VMs) or “instances”. This technique is known as hardware virtualization.
A VM is a container within which virtualized resources including CPU (vCPU), memory and storage are presented, and an operating system can be installed. Each VM is isolated from other VMs running on the same host hardware and many VMs can run on a single physical host, with each potentially installed with different operating system software.
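The idea of many isolated VMs sharing one physical host can be sketched in a few lines of code. This is a conceptual model only; the `Host` class, its capacity figures, and the VM sizes below are invented for illustration and do not reflect real AWS hardware:

```python
# Sketch: many isolated VMs sharing one physical host's resources.
# The host capacity and VM sizes are illustrative, not real AWS figures.

class Host:
    def __init__(self, vcpus, ram_gb):
        self.free_vcpus = vcpus
        self.free_ram_gb = ram_gb
        self.vms = []

    def launch(self, name, vcpus, ram_gb, os):
        # A VM only launches if the host still has capacity for it.
        if vcpus > self.free_vcpus or ram_gb > self.free_ram_gb:
            raise RuntimeError(f"insufficient capacity for {name}")
        self.free_vcpus -= vcpus
        self.free_ram_gb -= ram_gb
        self.vms.append({"name": name, "vcpus": vcpus, "ram_gb": ram_gb, "os": os})

host = Host(vcpus=64, ram_gb=256)
host.launch("vm-1", vcpus=4, ram_gb=16, os="Linux")
host.launch("vm-2", vcpus=8, ram_gb=32, os="Windows")  # different OS, same host
print(len(host.vms), host.free_vcpus, host.free_ram_gb)  # 2 52 208
```

Note how each VM receives its own slice of vCPU and memory, and how VMs with different operating systems coexist on the same host.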
The diagram below depicts hardware virtualization with guest VMs running on top of a hypervisor:
There are two main types of hypervisor:
- Type 1 – the hypervisor is installed directly on top of the hardware and is considered a “bare-metal” hypervisor
- Type 2 – the hypervisor software runs on top of a host operating system
Examples of Type 1 hypervisors include VMware ESXi and Microsoft Hyper-V; examples of Type 2 hypervisors include VMware Workstation and Oracle VirtualBox. Type 1 hypervisors typically provide better performance and security than Type 2 hypervisors.
The diagram above shows a hardware virtualization stack using a Type 1 hypervisor. The diagram below depicts a Type 2 hypervisor:
As you can see, the key difference is that there is an additional host operating system layer that sits directly above the physical hardware and beneath the hypervisor layer.
Until recently, Amazon Web Services (AWS) used the Xen hypervisor, but it has now moved to an internally developed hypervisor based on Kernel-based Virtual Machine (KVM) technology. KVM is generally considered to be a Type 1 hypervisor.
Compute Instances on AWS
In AWS, compute is consumed through the Elastic Compute Cloud (EC2), a web service from which you can launch “instances” – essentially VMs running on the AWS KVM-based hypervisor.
Amazon EC2 provides secure, resizable compute capacity in the cloud on a pay-as-you-go basis with no fixed term contracts (unless you choose reserved instances to reduce cost).
There is a large selection of instance types to choose from, with varying specifications for vCPU, memory, and storage allocation.
Virtual networking is included with all instances and varies in performance level from low (unspecified performance) up to 25 Gbps.
The image below shows a few “General Purpose” instance types. Note the different configurations for vCPU, Memory and Network Performance:
Instance types are categorized into families based on how the instance specifications are optimized for different usage scenarios. Optimizations that are available include compute, memory, storage, graphics processing (GPU) or general purpose usage.
The following table shows the instance families currently available and describes the use cases they are best suited for:
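Choosing an instance type comes down to matching your workload’s vCPU and memory needs against the published specifications. The sketch below illustrates that matching logic with a small hand-picked subset of General Purpose and Memory Optimized types (check the AWS documentation for current specifications and pricing):

```python
# Sketch: pick the smallest instance type that satisfies a workload's needs.
# This is a tiny hand-picked subset of instance types for illustration.
INSTANCE_TYPES = {
    "t3.micro":  {"vcpu": 2, "memory_gib": 1},
    "m5.large":  {"vcpu": 2, "memory_gib": 8},
    "m5.xlarge": {"vcpu": 4, "memory_gib": 16},
    "r5.large":  {"vcpu": 2, "memory_gib": 16},
}

def smallest_match(min_vcpu, min_memory_gib):
    """Return the smallest type (by memory, then vCPU) meeting both minimums."""
    candidates = [
        (specs["memory_gib"], specs["vcpu"], name)
        for name, specs in INSTANCE_TYPES.items()
        if specs["vcpu"] >= min_vcpu and specs["memory_gib"] >= min_memory_gib
    ]
    return min(candidates)[2] if candidates else None

print(smallest_match(2, 8))   # m5.large
print(smallest_match(4, 16))  # m5.xlarge
```

A real deployment would also weigh network performance, storage options, and price, but the principle is the same: start from the workload’s requirements, not from the instance list.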
When deploying an instance on AWS the first step is to select an Amazon Machine Image (AMI). An AMI is essentially a template that includes the information required to launch an instance in EC2. An AMI includes the following:
- A template for the root volume for the instance (for example, an operating system, an application server, and applications)
- Launch permissions that control which AWS accounts can use the AMI to launch instances
- A block device mapping that specifies the storage volumes to attach to the instance when it’s launched
AWS provide a number of AMIs based on various operating systems and configurations. You can also select AMIs from the AWS Marketplace, AMIs that have been shared by the community, and your own AMIs that you have previously saved (registered).
Amazon EBS & Snapshots
Most EC2 instance types use the Elastic Block Store (EBS) for persistent storage. EBS volumes are durable, block-level storage volumes that can be attached to a single EC2 instance. There are several different volume types available that differ in performance characteristics and price. These include:
- General Purpose SSD (gp2)
- Provisioned IOPS SSD (io1)
- Throughput Optimized HDD (st1)
- Cold HDD (sc1)
- Magnetic (standard, a previous-generation type)
Each EBS volume is replicated across multiple systems within an Availability Zone (described below) to avoid the risk of data loss if a single hardware component fails. Additionally, users can take snapshots of their EBS volumes which are a point-in-time copy of the data.
Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved.
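The incremental behaviour is worth seeing concretely. In the sketch below, a volume is modelled as a simple mapping of block numbers to contents (a deliberate simplification of how EBS actually tracks blocks), and the second snapshot only needs to store the blocks that changed since the first:

```python
# Sketch: why incremental snapshots are cheap. Only blocks changed since the
# previous snapshot are stored; unchanged blocks are referenced, not copied.
def incremental_snapshot(volume_blocks, previous_snapshot):
    """Store only blocks that differ from the previous snapshot."""
    return {
        idx: data
        for idx, data in volume_blocks.items()
        if previous_snapshot.get(idx) != data
    }

# A 4-block volume, snapshotted once in full.
snap1 = {0: "boot", 1: "app", 2: "data-v1", 3: "logs-v1"}

# Later, two blocks change; the second snapshot stores just those two.
volume = {0: "boot", 1: "app", 2: "data-v2", 3: "logs-v2"}
snap2 = incremental_snapshot(volume, snap1)
print(snap2)  # {2: 'data-v2', 3: 'logs-v2'}
```

Restoring a volume still gives you the complete point-in-time copy, because each snapshot references the unchanged blocks from its predecessors.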
AWS Infrastructure Services
There are a number of supporting services and features on AWS that enable compute instances to be launched in a functional state. These include:
- Virtual Private Cloud (VPC) – A VPC is a virtual network that provides the networking layer of EC2. A VPC can be configured to your own requirements
- Elastic Block Store (EBS) – EBS provides persistent block-based storage volumes that can be attached to EC2 instances
Amazon VPCs are created within AWS Regions; a Region is a separate geographic area in which multiple Availability Zones (AZs, which are essentially data centers) are located. Amazon provide more information on Regions and Availability Zones in their documentation.
Subnets are created within AZs and this is where Amazon EC2 instances are deployed. The following diagram depicts this AWS infrastructure:
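One way to picture the relationship between a VPC and its subnets is as nested address ranges: the VPC owns a CIDR block, and each subnet carves out a slice of it. The sketch below uses Python’s standard `ipaddress` module with common example CIDR blocks (these are illustrative choices, not AWS defaults):

```python
# Sketch: a VPC is an address range, and each subnet is a slice of it.
# The CIDR blocks here are common examples, not AWS defaults.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")          # the VPC's address space
subnet_az_a = ipaddress.ip_network("10.0.1.0/24")  # subnet in one AZ
subnet_az_b = ipaddress.ip_network("10.0.2.0/24")  # subnet in another AZ

# Every subnet must fall inside its VPC's range.
print(subnet_az_a.subnet_of(vpc), subnet_az_b.subnet_of(vpc))  # True True

# An instance's private IP address comes from its subnet's range.
instance_ip = ipaddress.ip_address("10.0.1.25")
print(instance_ip in subnet_az_a)  # True
```

Spreading subnets across AZs, as here, is what later lets load balancing and auto scaling keep an application running if one AZ fails.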
Additionally, to be able to connect to your EC2 instances on the AWS cloud it is necessary to configure Security Groups, which are firewalls at the instance level, and Network Access Control Lists (NACLs), which are firewalls at the subnet level.
When a VPC has been properly configured, EC2 instances have been launched with public IP addresses, and Security Groups and NACLs have been configured with the correct rules, it is then possible to directly access EC2 instances from the Internet.
The following simplified diagram depicts the configuration elements required to connect to an Amazon EC2 instance from the Internet.
The diagram shows two EC2 instances with separate security groups but in the same subnet within a VPC. An Internet Gateway provides the Internet connectivity and in this configuration each EC2 instance would require a public IP address:
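A defining property of Security Groups is that their rules only ever *allow* traffic; anything not explicitly allowed is denied (NACLs, by contrast, also support explicit deny rules). The sketch below models that allow-only evaluation for inbound connections; the rule structure and `allowed` helper are invented for illustration, not an AWS API:

```python
# Sketch: security-group-style evaluation. Rules only allow traffic;
# anything not explicitly allowed is denied.
import ipaddress

def allowed(rules, protocol, port, source_ip):
    """Return True if any rule permits this inbound connection."""
    ip = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["from_port"] <= port <= r["to_port"]
        and ip in ipaddress.ip_network(r["cidr"])
        for r in rules
    )

web_sg = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22,  "to_port": 22,  "cidr": "203.0.113.0/24"},
]

print(allowed(web_sg, "tcp", 443, "198.51.100.7"))  # True  (HTTPS from anywhere)
print(allowed(web_sg, "tcp", 22, "198.51.100.7"))   # False (SSH only from admin range)
```

Restricting administrative ports such as SSH to a known address range, as in the second rule, is a common baseline practice.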
Logging on to EC2 instances involves the use of a key pair (a cryptographic key) that you generate through the console and, in some cases, a password.
Load Balancing and Auto Scaling
Cloud applications are usually deployed in an architecture where multiple instances can share the incoming traffic load and individual instances can be easily added or removed as the load varies up or down.
AWS provide a couple of services that assist with distributing incoming connections and automatically ensuring the right number of instances are available to service the load. These are Elastic Load Balancing and EC2 Auto Scaling.
The following diagram depicts an Elastic Load Balancer (ELB) servicing a number of EC2 instances across two Availability Zones. Connections from multiple devices hit the ELB which then distributes the connections evenly across the EC2 instances.
Elastic Load Balancing provides the following benefits:
- High availability – ELB automatically distributes traffic across multiple EC2 instances in different AZs within a region
- Security – ELB provides robust security features, including integrated certificate management, user authentication, and SSL/TLS decryption
- Elasticity – ELB is capable of handling rapid changes in network traffic patterns
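One simple strategy a load balancer can use to spread connections evenly is round robin: each new connection goes to the next instance in rotation. The sketch below simulates this with invented instance names (real ELBs combine routing strategies with health checks, so this is a simplification):

```python
# Sketch: round-robin distribution, one simple strategy a load balancer
# can use to spread incoming connections evenly across healthy instances.
from collections import Counter
from itertools import cycle

instances = ["i-aaa (az-1)", "i-bbb (az-1)", "i-ccc (az-2)", "i-ddd (az-2)"]
rotation = cycle(instances)

# Send 12 incoming connections through the balancer.
assignments = Counter(next(rotation) for _ in range(12))
print(assignments)  # each of the four instances handles 3 connections
```

Because the instances sit in two different AZs, an even spread also means the load survives the loss of either zone.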
EC2 Auto Scaling can complement the architecture depicted in the diagram above by dynamically scaling the number of EC2 instances based on current demand.
EC2 Auto Scaling provides the following benefits:
- Fault tolerance – Auto Scaling detects when an instance is unhealthy and replaces it
- Scalability and elasticity – Auto Scaling automatically scales the number of instances servicing your application based on demand
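The core of a scaling decision can be sketched as keeping a metric, such as average CPU utilization, near a target by resizing the fleet. The function below is a simplified illustration of that idea; the target, minimum, and maximum values are invented, and real EC2 Auto Scaling policies offer far more configuration:

```python
# Sketch: a target-tracking-style scaling decision. The target and bounds
# are invented for illustration; real Auto Scaling policies are configurable.
import math

def desired_capacity(current, avg_cpu, target_cpu=50, minimum=2, maximum=10):
    """Resize the fleet so average CPU moves back toward the target."""
    if avg_cpu <= 0:
        return minimum
    needed = math.ceil(current * avg_cpu / target_cpu)
    return max(minimum, min(maximum, needed))

print(desired_capacity(current=4, avg_cpu=90))  # 8  (scale out under load)
print(desired_capacity(current=4, avg_cpu=20))  # 2  (scale in when idle)
```

Pairing this kind of policy with an ELB means new instances start receiving traffic as soon as they pass health checks, and drained instances can be terminated safely.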
If you would like to get started with cloud compute on AWS, the following webpage provides a series of links to instructional articles for launching EC2 instances with varying configurations:
If you don’t already have one, sign up for an AWS free tier account that provides you access to many AWS services at no cost:
This article is part of a series, please also check out:
- What is Cloud Computing? Cloud vs Legacy IT
- Cloud Computing Service Models – IaaS, PaaS, SaaS
- Cloud Computing Deployment Models – Public, Private & Hybrid
- Cloud Computing Basics – Compute
- Cloud Computing Basics – Storage
- Cloud Computing Basics – Network
- Cloud Computing Basics – Serverless
Learn how to Master the AWS Cloud
AWS Training – Our popular AWS training will maximize your chances of passing your AWS certification the first time.
Membership – For unlimited access to our cloud training catalog, enroll in our monthly or annual membership program.
Challenge Labs – Build hands-on cloud skills in a secure sandbox environment. Learn, build, test and fail forward without risking unexpected cloud bills.