Amazon ElastiCache



Amazon ElastiCache is a fully managed implementation of two popular in-memory data stores – Redis and Memcached.

Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud.

The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads.

It can be placed in front of databases such as Amazon RDS and DynamoDB – it sits between the application and the database.

It is a good fit if your database is particularly read-heavy and the data does not change frequently.

Also good for compute-heavy workloads such as recommendation engines; it can be used to store the results of I/O-intensive database queries or compute-intensive calculations.

ElastiCache can be used for storing session state.

Push-button scalability for memory, writes and reads.

In-memory key/value store.

Billed by node size and hours of use.

ElastiCache nodes cannot be accessed from the Internet, nor can they be accessed by EC2 instances in other VPCs.

Exam tip: the key use cases for ElastiCache are offloading reads from a database, and storing the results of computations and session state. Also, remember that ElastiCache is an in-memory database and it’s a managed service (so you can’t run it on EC2).

Amazon ElastiCache with RDS

There are two types of engine you can choose from: Memcached and Redis.



Amazon ElastiCache Memcached

  • Simplest model and can run large nodes.
  • Can be scaled in and out, and can cache objects such as database query results.
  • Widely adopted memory object caching system.
  • Multi-threaded.


Amazon ElastiCache Redis

  • Open-source in-memory key-value store.
  • Supports more complex data structures such as sorted sets and lists.
  • Supports master/replica replication and Multi-AZ for cross-AZ redundancy.
  • Supports automatic failover and backup/restore.

The following table provides a comparison of the different ElastiCache implementations:

Amazon ElastiCache Comparison

The following diagram depicts Amazon ElastiCache Redis with Cluster Mode disabled:

Amazon ElastiCache Redis Cluster Mode Disabled

The following diagram depicts Amazon ElastiCache Redis with Cluster Mode enabled:

Amazon ElastiCache Redis with Cluster Mode enabled

Caching strategies

There are two caching strategies available: Lazy Loading and Write-Through:

Lazy Loading

  • Loads the data into the cache only when necessary (if a cache miss occurs).
  • Lazy loading avoids filling up the cache with data that won’t be requested.
  • If requested data is in the cache, ElastiCache returns the data to the application.
  • If the data is not in the cache or has expired, ElastiCache returns null.
  • The application then fetches the data from the database and writes the data received into the cache so that it is available for next time.
  • Data in the cache can become stale if Lazy Loading is implemented without other strategies (such as TTL).

Amazon ElastiCache Lazy Loading
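The flow above can be sketched in a few lines of Python. This is an illustration only, not an ElastiCache API: plain dicts stand in for the cache node and the backing database, and the name `get_lazy` is hypothetical.

```python
# Stand-ins for the real stores: one dict plays the cache node,
# the other plays the backing database (e.g. RDS or DynamoDB).
cache = {}
database = {"user:1": "Alice"}

def get_lazy(key):
    """Lazy loading: check the cache first; on a miss, load the value
    from the database and write it into the cache for next time."""
    value = cache.get(key)
    if value is not None:
        return value                # cache hit: served from memory
    value = database.get(key)       # cache miss: go to the database
    if value is not None:
        cache[key] = value          # populate the cache for next time
    return value

print(get_lazy("user:1"))  # miss: loaded from the database, then cached
print(get_lazy("user:1"))  # hit: served from the cache
```

Note that only data that is actually requested ever enters the cache, which is the main advantage lazy loading has over write-through.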

Write Through

  • When using a write-through strategy, the cache is updated whenever a new write or update is made to the underlying database.
  • Allows cache data to remain up-to-date.
  • This can add wait time to write operations in your application.
  • Without a TTL you can end up with a lot of cached data that is never read.

Amazon ElastiCache Write-Through Caching
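As a sketch of the write-through flow, again with plain dicts standing in for the cache and database (the helper names are illustrative, not an ElastiCache API):

```python
cache = {}
database = {}

def put_write_through(key, value):
    """Write-through: every write goes to the database *and* the cache,
    so cached data never lags behind the source of truth."""
    database[key] = value   # write to the database first
    cache[key] = value      # update the cache in the same operation

def get(key):
    # Reads can trust the cache because writes keep it in sync.
    return cache.get(key, database.get(key))

put_write_through("user:1", "Alice")
print(get("user:1"))  # served from the cache, guaranteed fresh
```

The extra cache write on every update is the "wait time" cost mentioned above, and keys that are written but never read still consume cache memory unless a TTL expires them.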

Dealing with stale data – Time to Live (TTL)

  • The drawbacks of both lazy loading and write-through can be mitigated by a TTL.
  • The TTL specifies the number of seconds until the key (data) expires to avoid keeping stale data in the cache.
  • When reading an expired key, the application checks the value in the underlying database.
  • Lazy Loading treats an expired key as a cache miss and causes the application to retrieve the data from the database and subsequently write the data into the cache with a new TTL.
  • Depending on the frequency with which data changes this strategy may not eliminate stale data – but helps to avoid it.
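Combining lazy loading with a TTL might look like the following sketch (dicts again stand in for the cache and database; the `now` parameter is only there to make expiry easy to demonstrate):

```python
import time

cache = {}                       # key -> (value, expiry timestamp)
database = {"user:1": "Alice"}
TTL_SECONDS = 300                # illustrative 5-minute TTL

def get_with_ttl(key, now=None):
    """Lazy loading with a TTL: an expired key is treated as a cache
    miss, so the value is re-read from the database and re-cached
    with a fresh expiry time."""
    now = time.monotonic() if now is None else now
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if now < expires_at:
            return value                       # fresh hit
    value = database.get(key)                  # miss or expired: reload
    if value is not None:
        cache[key] = (value, now + TTL_SECONDS)
    return value
```

If the database row changes within the TTL window, readers can still see the stale cached value until it expires – which is the residual staleness the last bullet above refers to.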

Exam tip: Compared to DynamoDB Accelerator (DAX), remember that DAX is optimized for DynamoDB specifically and only supports the write-through caching strategy (it does not use lazy loading).

Monitoring and Reporting

Memcached Metrics

The following CloudWatch metrics offer good insight into ElastiCache Memcached performance:

CPUUtilization – This is a host-level metric reported as a percentage. Because Memcached is multi-threaded, this metric can be as high as 90%. If you exceed this threshold, scale your cache cluster up by using a larger cache node type, or scale out by adding more cache nodes.

SwapUsage – This is a host-level metric reported in bytes. This metric should not exceed 50 MB. If it does, we recommend that you increase the ConnectionOverhead parameter value.

Evictions – This is a cache engine metric. If you exceed your chosen threshold, scale your cluster up by using a larger node type, or scale out by adding more nodes.

CurrConnections – This is a cache engine metric. An increasing number of CurrConnections might indicate a problem with your application; you will need to investigate the application behavior to address this issue.

Redis Metrics

The following CloudWatch metrics offer good insight into ElastiCache Redis performance:

EngineCPUUtilization – Provides CPU utilization of the Redis engine thread. Since Redis is single-threaded, you can use this metric to analyze the load of the Redis process itself.

MemoryFragmentationRatio – Indicates the efficiency of memory allocation by the Redis engine. Different thresholds signify different behaviors; the recommendation is to keep the fragmentation ratio above 1.0.

CacheHits – The number of successful read-only key lookups in the main dictionary.

CacheMisses – The number of unsuccessful read-only key lookups in the main dictionary.

CacheHitRate – Indicates the usage efficiency of the Redis instance. If the cache hit rate is lower than ~0.8, a significant number of keys are being evicted, expired, or do not exist.

CurrConnections – The number of client connections, excluding connections from read replicas. ElastiCache uses two to four connections to monitor the cluster in each case.
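CacheHitRate is derived directly from the CacheHits and CacheMisses metrics. A minimal sketch of the calculation (the function name is illustrative):

```python
def cache_hit_rate(cache_hits, cache_misses):
    """CacheHitRate = CacheHits / (CacheHits + CacheMisses).
    Returns 0.0 when there were no lookups at all."""
    total = cache_hits + cache_misses
    return cache_hits / total if total else 0.0

# A rate below ~0.8 suggests many keys are evicted, expired, or absent.
print(cache_hit_rate(900, 100))  # -> 0.9
```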

Logging and Auditing

All Amazon ElastiCache actions are logged by AWS CloudTrail.

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:

  • Whether the request was made with root or IAM user credentials.
  • Whether the request was made with temporary security credentials for a role or federated user.
  • Whether the request was made by another AWS service.

Authorization and Access Control

Access to Amazon ElastiCache requires credentials that AWS can use to authenticate your requests. Those credentials must have permissions to access AWS resources, such as an ElastiCache cache cluster or an Amazon Elastic Compute Cloud (Amazon EC2) instance.

You can use identity-based policies with Amazon ElastiCache to provide the necessary access.

You can use Redis AUTH to require a token with ElastiCache for Redis.

Redis authentication tokens (passwords) require clients to authenticate before running commands, thereby improving data security.
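As a sketch, enabling an AUTH token when creating a Redis replication group with the AWS CLI might look like the following. The group ID, node type, and token value are placeholders; an AUTH token also requires in-transit encryption to be enabled.

```shell
# Placeholder IDs and token; AUTH requires in-transit encryption.
aws elasticache create-replication-group \
  --replication-group-id my-redis-group \
  --replication-group-description "Redis with AUTH" \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-clusters 2 \
  --transit-encryption-enabled \
  --auth-token "my-strong-example-token-1234"
```

Clients must then supply the same token (e.g. via the Redis AUTH command) before they are allowed to run other commands.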