Transparent Data Encryption (TDE) is available for Oracle and Microsoft SQL Server.
TDE encryption is handled within the database engine.
RDS Oracle supports integration with CloudHSM.
AWS KMS encryption is used for data at rest.
Encryption at rest applies to underlying EBS volumes, logs, snapshots and replicas.
Can use AWS or customer-managed CMKs.
Encryption is handled by the host / EBS.
SSL/TLS encryption can be configured for RDS databases for data in transit.
IAM authentication can be enabled for MySQL and PostgreSQL.
When using IAM auth, authentication is handled by IAM, while authorization still takes place within RDS.
IAM authentication works using an authentication token generated with the generate-db-auth-token RDS API/CLI command.
A user or role with the appropriate IAM policy is mapped to a local RDS database user – the token is used in place of a password.
Encryption with RDS
Amazon RDS can encrypt your Amazon RDS DB instances.
Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, read replicas, and snapshots.
Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances.
After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance.
You don’t need to modify your database client applications to use encryption.
Encryption can be added at database creation time only.
You can choose a customer managed CMK or the AWS managed CMK for Amazon RDS to encrypt your DB instance.
It is NOT possible to add encryption to an existing database.
It is also not possible to remove encryption from an existing encrypted database.
The following limitations exist for Amazon RDS encrypted DB instances:
- You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created.
- You can’t disable encryption on an encrypted DB instance.
- You can’t create an encrypted snapshot of an unencrypted DB instance.
- A snapshot of an encrypted DB instance must be encrypted using the same CMK as the DB instance.
- You can’t have an encrypted read replica of an unencrypted DB instance or an unencrypted read replica of an encrypted DB instance.
- Encrypted read replicas must be encrypted with the same CMK as the source DB instance when both are in the same AWS Region.
- You can’t restore an unencrypted backup or snapshot to an encrypted DB instance.
- To copy an encrypted snapshot from one AWS Region to another, you must specify the CMK in the destination AWS Region. This is because CMKs are specific to the AWS Region that they are created in.
- You can’t unencrypt an encrypted DB instance. However, you can export data from an encrypted DB instance and import the data into an unencrypted DB instance.
Encryption can be added to unencrypted snapshots by copying them.
Exam tip: Because you can encrypt a copy of an unencrypted snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance.
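The snapshot-copy workaround above can be sketched as a helper function. This is a hypothetical helper, not an AWS-provided one: `rds` would be `boto3.client("rds")` in practice, and the instance/snapshot identifiers are illustrative; production code would also wait for each snapshot to become available (e.g. with the `db_snapshot_available` waiter) between steps.

```python
def encrypt_existing_instance(rds, instance_id, kms_key_id):
    """Add encryption to an unencrypted RDS instance via snapshot copy."""
    snap_id = f"{instance_id}-snap"
    enc_snap_id = f"{instance_id}-snap-encrypted"

    # 1. Snapshot the unencrypted instance (the snapshot is unencrypted).
    rds.create_db_snapshot(
        DBInstanceIdentifier=instance_id,
        DBSnapshotIdentifier=snap_id,
    )

    # 2. Copy the snapshot, specifying a CMK -- the copy is encrypted.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=snap_id,
        TargetDBSnapshotIdentifier=enc_snap_id,
        KmsKeyId=kms_key_id,
    )

    # 3. Restore a new, encrypted instance from the encrypted copy.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=f"{instance_id}-encrypted",
        DBSnapshotIdentifier=enc_snap_id,
    )
    return f"{instance_id}-encrypted"
```

The original unencrypted instance is left in place; you would repoint applications at the new instance and then delete the old one.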
Aurora Replicas provide both read scaling and availability.
Aurora Replicas are also known as reader instances.
You can issue queries to Aurora Replicas to scale the read operations for your application.
If the writer instance in a cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place as the new writer.
An Aurora DB cluster can contain up to 15 Aurora Replicas.
Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region.
Backtrack allows rewinding an Aurora DB cluster to a point in time, in place, without restoring from a backup.
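Backtrack is invoked per cluster; a minimal sketch, assuming `rds` is `boto3.client("rds")` and the cluster identifier is illustrative:

```python
from datetime import datetime, timedelta, timezone

def backtrack_cluster(rds, cluster_id, minutes_back):
    """Rewind an Aurora cluster in place -- no snapshot restore needed."""
    target = datetime.now(timezone.utc) - timedelta(minutes=minutes_back)
    resp = rds.backtrack_db_cluster(
        DBClusterIdentifier=cluster_id,
        BacktrackTo=target,
        # If the exact time isn't available, use the nearest possible time.
        UseEarliestTimeOnPossibleConflict=True,
    )
    return resp["Status"]
```

Backtrack must be enabled (with a target window) when the cluster is created or modified, and the cluster pauses briefly while the rewind is applied.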
Aurora can publish general logs, slow query logs, and error logs to Amazon CloudWatch Logs.
Publishing Aurora logs to CloudWatch Logs allows you to maintain continuous visibility into database activity, query performance, and database errors.
The MySQL error log is generated by default; you can generate the slow query and general logs by setting parameters in your DB parameter group.
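Enabling log exports is a cluster modification; a sketch under the assumption that `rds` is `boto3.client("rds")` and the cluster identifier is a placeholder:

```python
def enable_log_exports(rds, cluster_id,
                       log_types=("error", "general", "slowquery")):
    """Publish the named Aurora MySQL log types to CloudWatch Logs."""
    resp = rds.modify_db_cluster(
        DBClusterIdentifier=cluster_id,
        CloudwatchLogsExportConfiguration={"EnableLogTypes": list(log_types)},
        ApplyImmediately=True,
    )
    # The response echoes back the log types now being exported.
    return resp["DBCluster"].get("EnabledCloudwatchLogsExports", [])
```

Remember that the slow query and general logs must also be turned on via DB parameter group settings before any log data is produced.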
DynamoDB Streams captures a time-ordered sequence of item-level modifications in any DynamoDB table and stores this information in a log for up to 24 hours.
Applications can access this log and view the data items as they appeared before and after they were modified, in near-real time.
Encryption at rest encrypts the data in DynamoDB streams.
A DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.
You can choose the information that will be written to the stream:
- Keys only — Only the key attributes of the modified item.
- New image — The entire item, as it appears after it was modified.
- Old image — The entire item, as it appeared before it was modified.
- New and old images — Both the new and the old images of the item.
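The view type above maps directly to the StreamViewType setting when enabling a stream; a sketch assuming `dynamodb` is `boto3.client("dynamodb")` and the table name is a placeholder:

```python
def enable_stream(dynamodb, table_name, view_type="NEW_AND_OLD_IMAGES"):
    """Enable a stream on an existing table.
    view_type: KEYS_ONLY | NEW_IMAGE | OLD_IMAGE | NEW_AND_OLD_IMAGES
    """
    resp = dynamodb.update_table(
        TableName=table_name,
        StreamSpecification={
            "StreamEnabled": True,
            "StreamViewType": view_type,
        },
    )
    # The stream ARN is what you later wire to consumers such as Lambda.
    return resp["TableDescription"]["LatestStreamArn"]
```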
DynamoDB Streams with AWS Lambda functions:
- Amazon DynamoDB is integrated with AWS Lambda so that you can create triggers—pieces of code that automatically respond to events in DynamoDB Streams.
- With triggers, you can build applications that react to data modifications in DynamoDB tables.
- If you enable DynamoDB Streams on a table, you can associate the stream Amazon Resource Name (ARN) with an AWS Lambda function that you write.
- Immediately after an item in the table is modified, a new record appears in the table’s stream.
- AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records.
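A minimal sketch of the Lambda handler side, assuming the stream uses the NEW_AND_OLD_IMAGES view type (field names follow the standard DynamoDB stream event shape):

```python
def handler(event, context):
    """Summarise each stream record delivered to the function."""
    changes = []
    for record in event["Records"]:
        changes.append({
            "event": record["eventName"],            # INSERT | MODIFY | REMOVE
            "keys": record["dynamodb"]["Keys"],
            # Old/new images are present only for the matching view types.
            "old": record["dynamodb"].get("OldImage"),
            "new": record["dynamodb"].get("NewImage"),
        })
    return changes
```

Because Lambda invokes the function synchronously per batch, a failed invocation causes the same records to be retried, so handlers should be idempotent.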
DynamoDB Accelerator (DAX)
DAX delivers response times in microseconds for millions of requests per second for read-heavy workloads.
DAX is a fully managed service.
Use the DAX client SDK to point your existing DynamoDB API calls at the DAX cluster.
Because DAX is API-compatible with DynamoDB, you don’t have to make any functional application code changes.
Retrieval of cached data reduces the read load on existing DynamoDB tables.
This can enable you to reduce provisioned read capacity and lower overall operational costs.
DAX lets you scale out to a 10-node cluster, giving you millions of requests per second.
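In practice you use the DAX client SDK rather than writing cache logic yourself; the conceptual sketch below only illustrates the read-through behaviour that lets DAX absorb repeated reads (every name here is illustrative, not the DAX API):

```python
class ReadThroughCache:
    """Conceptual DAX-style read-through cache (NOT the DAX SDK)."""

    def __init__(self, table_get_item):
        self._get = table_get_item   # e.g. a DynamoDB GetItem call
        self._cache = {}
        self.misses = 0

    def get_item(self, key):
        # Normalise the key dict into something hashable.
        k = tuple(sorted(key.items()))
        if k not in self._cache:
            # Only cache misses reach the table and consume read capacity.
            self.misses += 1
            self._cache[k] = self._get(key)
        return self._cache[k]
```

Repeated reads of the same item are served from memory, which is why read-heavy workloads see microsecond latencies and lower table read consumption.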
- Amazon DynamoDB Time to Live (TTL) allows you to define a per-item timestamp to determine when an item is no longer needed.
- Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming any write throughput.
- TTL is provided at no extra cost as a means to reduce stored data volumes by retaining only the items that remain current for your workload’s needs.
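TTL expects a Number attribute holding an epoch-seconds timestamp; a small sketch (the attribute name `expires_at` is an assumption – you choose the name when enabling TTL on the table):

```python
import time

def with_ttl(item, days):
    """Stamp an item with a TTL attribute set `days` in the future.
    The attribute name 'expires_at' is illustrative."""
    expires = int(time.time()) + days * 86400
    item["expires_at"] = {"N": str(expires)}  # TTL must be a Number type
    return item

item = with_ttl({"pk": {"S": "session#42"}}, days=30)
```

DynamoDB deletes the item some time after `expires_at` passes, so expired items may still appear briefly in reads; filter on the attribute if exact cut-offs matter.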
Aurora Global Database
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions.
It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.