AWS Networking & Content Delivery


Amazon CloudFront

CloudFront Signed URL:

  • Grants access to a path and works with any supported origin (not just S3).
  • Access can be restricted by IP address, path, date, and expiration time.
  • Can leverage caching.

S3 Pre-Signed URL:

  • Requests are issued as (and with the permissions of) the IAM principal that pre-signed the URL.
  • Signed with the credentials (access keys) of the signing IAM principal.
  • Has a limited lifetime.
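A short boto3 sketch of generating an S3 pre-signed URL (the bucket name, object key, and expiry are hypothetical placeholders):

  import boto3

  s3 = boto3.client("s3")

  # The URL is signed with the caller's credentials and inherits their permissions.
  url = s3.generate_presigned_url(
      ClientMethod="get_object",
      Params={"Bucket": "my-example-bucket", "Key": "reports/report.pdf"},
      ExpiresIn=3600,  # limited lifetime, in seconds
  )
  print(url)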

Whitelist headers to determine which header values must be unique to cause a fetch from the origin (whitelisted headers become part of the cache key).

It is a best practice to separate static and dynamic content (for example, into different origins and cache behaviors).

Can use CloudFront in front of a regional API Gateway endpoint with a CloudFront cache (rather than an edge-optimized API endpoint); this provides more control over the distribution.

Can cache at CloudFront and API Gateway.

Cache Behavior:

A complex type that describes how CloudFront processes requests.

You must create at least as many cache behaviors (including the default cache behavior) as you have origins if you want CloudFront to serve objects from all of the origins.

Each cache behavior specifies the one origin from which you want CloudFront to get objects.

If you have two origins and only the default cache behavior, the default cache behavior will cause CloudFront to get objects from one of the origins, but the other origin is never used.

When CloudFront receives a viewer request, the requested path is compared with path patterns in the order in which cache behaviors are listed in the distribution.
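For illustration, a minimal sketch of the cache-behavior portion of a CloudFront DistributionConfig with two origins (the origin IDs and path pattern are hypothetical; the many other required distribution fields are omitted):

  # Requests matching /api/* go to the dynamic origin; everything else
  # falls through to the default cache behavior and the static origin.
  cache_behavior_config = {
      "DefaultCacheBehavior": {
          "TargetOriginId": "s3-static-origin",
          "ViewerProtocolPolicy": "redirect-to-https",
      },
      "CacheBehaviors": {
          "Quantity": 1,
          "Items": [
              {
                  "PathPattern": "/api/*",  # compared in the order behaviors are listed
                  "TargetOriginId": "alb-dynamic-origin",
                  "ViewerProtocolPolicy": "https-only",
              }
          ],
      },
  }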

Lambda@Edge

Can be used to run Lambda at Edge Locations.

Lets you run Node.js and Python Lambda functions to customize content that CloudFront delivers.

Executes the functions in AWS locations closer to the viewer.

You can use Lambda functions to change CloudFront requests and responses at the following points:

  • After CloudFront receives a request from a viewer (viewer request).
  • Before CloudFront forwards the request to the origin (origin request).
  • After CloudFront receives the response from the origin (origin response).
  • Before CloudFront forwards the response to the viewer (viewer response).

Lambda@Edge can do the following:

  • Inspect cookies and rewrite URLs to perform A/B testing.
  • Send specific objects to your users based on the User-Agent header.
  • Implement access control by looking for specific headers before passing requests to the origin.
  • Add, drop, or modify headers to direct users to different cached objects.
  • Generate new HTTP responses.
  • Cleanly support legacy URLs.
  • Modify or condense headers or URLs to improve cache utilization.
  • Make HTTP requests to other Internet resources and use the results to customize responses.

Exam tip: Lambda@Edge can be used to load different resources based on the User-Agent HTTP header.
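A rough sketch of a Python Lambda@Edge handler attached to the viewer request trigger that rewrites the URI based on the User-Agent header (the path prefix and matching logic are illustrative assumptions):

  def lambda_handler(event, context):
      # CloudFront event structure: Records[0].cf.request
      request = event["Records"][0]["cf"]["request"]
      headers = request["headers"]

      # Header names are lower-cased; each value is a list of {key, value} dicts.
      user_agent = headers.get("user-agent", [{"value": ""}])[0]["value"]

      # Serve a mobile-optimized object to mobile clients (hypothetical path).
      if "Mobile" in user_agent:
          request["uri"] = "/mobile" + request["uri"]

      return request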

Signed URLs and Signed Cookies

A signed URL includes additional information, for example, an expiration date and time, that gives you more control over access to your content. This additional information appears in a policy statement, which is based on either a canned policy or a custom policy.

CloudFront signed cookies allow you to control who can access your content when you don’t want to change your current URLs or when you want to provide access to multiple restricted files, for example, all of the files in the subscribers’ area of a website.

The application must authenticate the user and then send three Set-Cookie headers to the viewer; the viewer stores the name-value pairs and adds them to the request in a Cookie header when requesting access to content.

Use signed URLs in the following cases:

  • You want to restrict access to individual files, for example, an installation download for your application.
  • Your users are using a client (for example, a custom HTTP client) that doesn’t support cookies.

Use signed cookies in the following cases:

  • You want to provide access to multiple restricted files, for example, all of the files for a video in HLS format or all of the files in the subscribers’ area of a website.
  • You don’t want to change your current URLs.
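A minimal sketch of generating a CloudFront signed URL with a canned policy using botocore's CloudFrontSigner (requires the third-party rsa package; the key pair ID, private key file, URL, and expiry are hypothetical placeholders):

  from datetime import datetime, timedelta

  import rsa
  from botocore.signers import CloudFrontSigner

  def rsa_signer(message):
      # Sign with the private key that matches the CloudFront key pair / public key.
      with open("private_key.pem", "rb") as key_file:
          private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
      return rsa.sign(message, private_key, "SHA-1")

  signer = CloudFrontSigner("K2EXAMPLEKEYID", rsa_signer)

  # Canned policy: the expiration date/time is the only restriction.
  signed_url = signer.generate_presigned_url(
      "https://d1234example.cloudfront.net/downloads/installer.zip",
      date_less_than=datetime.utcnow() + timedelta(hours=1),
  )
  print(signed_url)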

Origin Access Identity

Used in combination with signed URLs and signed cookies to restrict direct access to an S3 bucket (prevents bypassing the CloudFront controls).

An origin access identity (OAI) is a special CloudFront user that is associated with the distribution.

Permissions must then be changed on the Amazon S3 bucket to restrict access to the OAI.

If users request files directly by using Amazon S3 URLs, they’re denied access.

The origin access identity has permission to access files in your Amazon S3 bucket, but users don’t.
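A sketch of restricting an S3 bucket to a CloudFront OAI via the bucket policy (the bucket name and OAI ID are hypothetical placeholders):

  import json

  import boto3

  # Allow only the CloudFront origin access identity to read objects;
  # direct S3 URL requests from users are denied.
  policy = {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Effect": "Allow",
              "Principal": {
                  "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLEOAI"
              },
              "Action": "s3:GetObject",
              "Resource": "arn:aws:s3:::my-example-bucket/*",
          }
      ],
  }

  boto3.client("s3").put_bucket_policy(
      Bucket="my-example-bucket", Policy=json.dumps(policy)
  )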

AWS Route 53

Route 53 Resolver

Forwarding rules apply to outbound Resolver endpoints.

You can associate a Route 53 private hosted zone in one account with a VPC in another account.

To associate a Route 53 private hosted zone in one AWS account (Account A) with a virtual private cloud that belongs to another AWS account (Account B), follow these steps using the AWS CLI:

  1. From an instance in Account A, authorize the association between the private hosted zone in Account A and the virtual private cloud in Account B.
  2. From an instance in Account B, create the association between the private hosted zone in Account A and the virtual private cloud in Account B.
  3. Delete the association authorization after the association is created.
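The same three steps sketched with boto3 (profile names, the hosted zone ID, and the VPC ID are hypothetical placeholders; the CLI equivalents are create-vpc-association-authorization, associate-vpc-with-hosted-zone, and delete-vpc-association-authorization):

  import boto3

  vpc = {"VPCRegion": "us-east-1", "VPCId": "vpc-0b111111111111111"}
  zone_id = "Z0000000000000"

  # Step 1 (Account A, hosted zone owner): authorize the association.
  route53_a = boto3.Session(profile_name="account-a").client("route53")
  route53_a.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)

  # Step 2 (Account B, VPC owner): create the association.
  route53_b = boto3.Session(profile_name="account-b").client("route53")
  route53_b.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)

  # Step 3 (Account A): remove the authorization once the association exists.
  route53_a.delete_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)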

There are a couple of ways to provide resolution of Microsoft Active Directory Domain Controller DNS zones and AWS records:

  • Create an outbound Amazon Route 53 Resolver endpoint. Set a conditional forwarding rule for the Active Directory domain that targets the Active Directory DNS servers. Configure the DNS settings in the VPC DHCP options set to use the AmazonProvidedDNS servers.
  • Configure the DHCP options set associated with the VPC to assign the IP addresses of the Domain Controllers as DNS servers. Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
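A sketch of the first option with boto3, assuming hypothetical subnet, security group, and VPC IDs, domain name, and domain controller IPs:

  import boto3

  resolver = boto3.client("route53resolver")

  # Outbound Resolver endpoint in the VPC (at least two IP addresses in different subnets).
  endpoint = resolver.create_resolver_endpoint(
      CreatorRequestId="ad-outbound-endpoint",
      Direction="OUTBOUND",
      SecurityGroupIds=["sg-0123456789abcdef0"],
      IpAddresses=[
          {"SubnetId": "subnet-0aaa1111"},
          {"SubnetId": "subnet-0bbb2222"},
      ],
  )["ResolverEndpoint"]

  # Conditional forwarding rule for the AD domain, targeting the domain controllers.
  rule = resolver.create_resolver_rule(
      CreatorRequestId="ad-forward-rule",
      RuleType="FORWARD",
      DomainName="corp.example.com",
      TargetIps=[{"Ip": "10.0.0.10", "Port": 53}, {"Ip": "10.0.0.11", "Port": 53}],
      ResolverEndpointId=endpoint["Id"],
  )["ResolverRule"]

  # Associate the rule with the VPC so its queries for the domain are forwarded.
  resolver.associate_resolver_rule(ResolverRuleId=rule["Id"], VPCId="vpc-0b111111111111111")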

AWS Transit Gateway

Used to provide transitive connectivity between VPCs in a hub-and-spoke topology.

Much less complex than full-mesh VPC peering when connecting many VPCs.

Works within a Region.

Can use inter-Region peering to connect Transit Gateways together over the AWS global network.

Can be shared across accounts with Resource Access Manager (RAM).

Can be used with Direct Connect Gateway and VPN connections.

Configure route tables to control connectivity.

Supports IP Multicast.
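A minimal boto3 sketch of creating a Transit Gateway and attaching one spoke VPC (the ASN, VPC ID, and subnet IDs are hypothetical placeholders):

  import boto3

  ec2 = boto3.client("ec2")

  # Hub: create the Transit Gateway (multicast enabled as an example option).
  tgw = ec2.create_transit_gateway(
      Description="hub-and-spoke",
      Options={"AmazonSideAsn": 64512, "MulticastSupport": "enable"},
  )["TransitGateway"]

  # Spoke: attach a VPC once the Transit Gateway is available; repeat per VPC,
  # then configure TGW route tables to control connectivity.
  ec2.create_transit_gateway_vpc_attachment(
      TransitGatewayId=tgw["TransitGatewayId"],
      VpcId="vpc-0b111111111111111",
      SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
  )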

AWS Direct Connect (DX)

Encrypting data sent over DX:

  • Running an AWS VPN connection over a DX connection provides consistent levels of throughput and encryption algorithms that protect your data.
  • Though a private VIF is typically used to connect to a VPC, when running an IPsec VPN over the top of a DX connection it is necessary to use a public VIF (the AWS VPN endpoints are public IP addresses).

Amazon API Gateway

Endpoint types:

  • An edge-optimized API endpoint is best for geographically distributed clients. API requests are routed to the nearest CloudFront Point of Presence (POP).
  • A regional API endpoint is intended for clients in the same region. When a client running on an EC2 instance calls an API in the same region, or when an API is intended to serve a small number of clients with high demands, a regional API reduces connection overhead.
  • A private API endpoint is an API endpoint that can only be accessed from your Amazon Virtual Private Cloud (VPC) using an interface VPC endpoint, which is an endpoint network interface (ENI) that you create in your VPC.
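A short boto3 sketch of selecting the endpoint type when creating a REST API (the API name is a hypothetical placeholder; use EDGE or PRIVATE instead of REGIONAL as needed):

  import boto3

  apigw = boto3.client("apigateway")

  # Regional endpoint; for PRIVATE, an interface VPC endpoint is also required.
  api = apigw.create_rest_api(
      name="orders-api",
      endpointConfiguration={"types": ["REGIONAL"]},
  )
  print(api["id"])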

Caching:

  • You can enable API caching in Amazon API Gateway to cache your endpoint’s responses.
  • With caching, you can reduce the number of calls made to your endpoint and also improve the latency of requests to your API.
  • When you enable caching for a stage, API Gateway caches responses from your endpoint for a specified time-to-live (TTL) period, in seconds.
  • API Gateway then responds to the request by looking up the endpoint response from the cache instead of making a request to your endpoint.
  • The default TTL value for API caching is 300 seconds. The maximum TTL value is 3600 seconds. TTL=0 means caching is disabled.
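A sketch of enabling a stage cache with boto3 (the API ID, stage name, cache size, and TTL are hypothetical placeholders; the TTL patch path follows the stage method-settings format):

  import boto3

  apigw = boto3.client("apigateway")

  # Deploy to a stage with a cache cluster enabled.
  apigw.create_deployment(
      restApiId="a1b2c3d4e5",
      stageName="prod",
      cacheClusterEnabled=True,
      cacheClusterSize="0.5",  # cache size in GB
  )

  # Override the cache TTL for all methods on the stage (default 300 s, max 3600 s, 0 disables).
  apigw.update_stage(
      restApiId="a1b2c3d4e5",
      stageName="prod",
      patchOperations=[
          {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "600"}
      ],
  )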

Usage Plans:

  • A usage plan specifies who can access one or more deployed API stages and methods, and also how much and how fast they can access them.
  • The plan uses API keys to identify API clients and meters access to the associated API stages for each key.
  • It also lets you configure throttling limits and quota limits that are enforced on individual client API keys.
  • A throttling limit is a request rate limit that is applied to each API key that you add to the usage plan. You can also set a default method-level throttling limit for an API or set throttling limits for individual API methods.
  • A quota limit is the maximum number of requests with a given API key that can be submitted within a specified time interval. You can configure individual API methods to require API key authorization based on usage plan configuration.
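A boto3 sketch of a usage plan with throttling and quota limits plus an API key (the API ID, stage, limits, and names are hypothetical placeholders):

  import boto3

  apigw = boto3.client("apigateway")

  # Usage plan covering one deployed stage, with throttling and a monthly quota.
  plan = apigw.create_usage_plan(
      name="basic-tier",
      apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
      throttle={"rateLimit": 10.0, "burstLimit": 20},
      quota={"limit": 10000, "period": "MONTH"},
  )

  # API key identifying a client, added to the usage plan so its limits apply per key.
  key = apigw.create_api_key(name="customer-a", enabled=True)
  apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")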

Integrations:

  • You choose an API integration type according to the types of integration endpoint you work with and how you want data to pass to and from the integration endpoint.
  • For a Lambda function, you can have the Lambda proxy integration, or the Lambda custom integration.
  • For an HTTP endpoint, you can have the HTTP proxy integration or the HTTP custom integration.
  • For an AWS service action, you have the AWS integration of the non-proxy type only.
  • API Gateway also supports the mock integration, where API Gateway serves as an integration endpoint to respond to a method request.