Amazon EC2 placement groups are logical groupings of instances in one of three configurations.
When you launch a new EC2 instance, the EC2 service attempts to place the instance in such a way that all of your instances are spread out across underlying hardware to minimize correlated failures.
You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload.
There are three placement strategies available with Amazon EC2 placement groups: cluster, spread, and partition.
The table below describes some key differences between cluster and spread placement groups:

| | Cluster | Spread |
| --- | --- | --- |
| Availability Zones | Single AZ only | Can span multiple AZs in the same Region |
| Placement | Same high-bisection bandwidth segment of the network | Distinct underlying hardware (separate racks) |
| Recommended for | Low-latency, high-throughput workloads where most traffic is between instances in the group | A small number of critical instances that should be kept separate from each other |
| Instance limits | Must use a supported instance type | Maximum of seven running instances per AZ per group |
The following sub-sections provide more details on the three strategies for Amazon EC2 placement groups.
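Assuming the AWS CLI is installed and configured with appropriate credentials, a placement group for each strategy can be created along these lines (the group names are illustrative):

```shell
# Create one placement group per strategy (group names are placeholders).
aws ec2 create-placement-group \
    --group-name my-cluster-pg --strategy cluster

aws ec2 create-placement-group \
    --group-name my-spread-pg --strategy spread

# Partition groups also accept a partition count (maximum of seven per AZ).
aws ec2 create-placement-group \
    --group-name my-partition-pg --strategy partition --partition-count 7
```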
Cluster Placement Group
Clusters instances into a low-latency group in a single AZ:
- A cluster placement group is a logical grouping of instances within a single Availability Zone (cannot span AZs).
- A cluster placement group can span peered VPCs in the same Region.
- Instances in the same cluster placement group enjoy a higher per-flow throughput limit of up to 10 Gbps for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
- Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both, and if the majority of the network traffic is between the instances in the group.
- Must use a supported instance type.
AWS recommends that you launch your instances in the following ways:
- Use a single launch request to launch the number of instances that you need in the placement group.
- Use the same instance type for all instances in the placement group.
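Following these recommendations, a single launch request for a cluster placement group might look like this sketch (the AMI ID, instance type, and group name are placeholders):

```shell
# Launch all needed instances in one request, using a single instance type.
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --count 4 \
    --instance-type c5n.9xlarge \
    --placement "GroupName=my-cluster-pg"
```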
Troubleshooting cluster placement groups:
- If you try to add more instances to the placement group later, or if you try to launch more than one instance type in the placement group, you increase your chances of getting an insufficient capacity error.
- If you stop an instance in a placement group and then start it again, it still runs in the placement group. However, the start fails if there isn’t enough capacity for the instance.
- If you receive a capacity error when launching an instance in a placement group that already has running instances, stop and start all of the instances in the placement group, and try the launch again. Starting the instances may migrate them to hardware that has capacity for all of the requested instances.
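One way to script the stop-and-start workaround, assuming the AWS CLI and an illustrative group name:

```shell
# Stop, then start, every running instance in the group so EC2 can migrate
# them to hardware with capacity for all of them. Group name is a placeholder.
IDS=$(aws ec2 describe-instances \
    --filters "Name=placement-group-name,Values=my-cluster-pg" \
              "Name=instance-state-name,Values=running" \
    --query "Reservations[].Instances[].InstanceId" --output text)

aws ec2 stop-instances --instance-ids $IDS
aws ec2 wait instance-stopped --instance-ids $IDS
aws ec2 start-instances --instance-ids $IDS
```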
Spread Placement Group
Spreads instances across underlying hardware (can span AZs):
- A spread placement group is a group of instances that are each placed on distinct underlying hardware.
- Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other.
- Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks.
- Spread placement groups provide access to distinct racks, and are therefore suitable for mixing instance types or launching instances over time.
- A spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of seven running instances per Availability Zone per group.
- Spread placement groups are not supported for Dedicated Instances or Dedicated Hosts.
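A launch into a spread group can be sketched as follows (the AMI ID and group name are placeholders). Note that requesting more than seven instances in a single Availability Zone would exceed the per-AZ limit:

```shell
# Launch up to the per-AZ maximum of seven instances into a spread group.
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --count 7 \
    --instance-type m5.large \
    --placement "GroupName=my-spread-pg"
```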
Troubleshooting spread placement groups:
- If you start or launch an instance in a spread placement group and there is insufficient unique hardware to fulfill the request, the request fails. Amazon EC2 makes more distinct hardware available over time, so you can try your request again later.
Partition Placement Group
Divides each group into logical segments called partitions:
- Amazon EC2 ensures that each partition within a placement group has its own set of racks.
- Each rack has its own network and power source. No two partitions within a placement group share the same racks, allowing you to isolate the impact of hardware failure within your application.
- Partition placement groups can be used to deploy large distributed and replicated workloads, such as HDFS, HBase, and Cassandra, across distinct racks.
A partition placement group can have partitions in multiple Availability Zones in the same Region, with a maximum of seven partitions per Availability Zone. The number of instances that you can launch into a partition placement group is limited only by your account limits. When instances are launched into a partition placement group, Amazon EC2 tries to distribute them evenly across all partitions, but an even distribution is not guaranteed. A partition placement group with Dedicated Instances can have a maximum of two partitions, and partition placement groups are not supported for Dedicated Hosts.
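If you need to control partition placement yourself, the launch request can name a target partition via the placement shorthand (all values are placeholders):

```shell
# Launch instances into a specific partition of a partition placement group.
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --count 3 \
    --instance-type m5.xlarge \
    --placement "GroupName=my-partition-pg,PartitionNumber=1"
```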
In addition, partition placement groups offer visibility into the partitions — you can see which instances are in which partitions. You can share this information with topology-aware applications, such as HDFS, HBase, and Cassandra. These applications use this information to make intelligent data replication decisions for increasing data availability and durability.
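This visibility is exposed through the `DescribeInstances` API: each instance's placement includes its partition number. For example (group name is a placeholder):

```shell
# List which partition each instance in the group landed in.
aws ec2 describe-instances \
    --filters "Name=placement-group-name,Values=my-partition-pg" \
    --query "Reservations[].Instances[].{Id:InstanceId,Partition:Placement.PartitionNumber}" \
    --output table
```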
Troubleshooting partition placement groups:
- If you start or launch an instance in a partition placement group and there is insufficient unique hardware to fulfill the request, the request fails. Amazon EC2 makes more distinct hardware available over time, so you can try your request again later.
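Since the guidance for insufficient-capacity failures is to try again later, a simple retry-with-backoff wrapper can automate that. This is a minimal sketch; `RETRY_MAX` and `RETRY_BASE_DELAY` are illustrative knobs, not AWS settings, and the commented launch command uses placeholder values:

```shell
#!/bin/sh
# Retry a command with exponential backoff -- a sketch for re-attempting
# launches that fail because insufficient unique hardware is available.
retry() {
  max="${RETRY_MAX:-5}"
  delay="${RETRY_BASE_DELAY:-30}"
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Example usage (hypothetical AMI and group name):
# retry aws ec2 run-instances \
#     --image-id ami-0abcdef1234567890 \
#     --count 2 --instance-type m5.large \
#     --placement "GroupName=my-partition-pg"
```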