### Slide 1:

![Slide 1](slide_1.png)

::: Notes


#### Instructor notes 

#### Student notes

:::

### Slide 2:

![Slide 2](slide_2.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 3:

![Slide 3](slide_3.png)

::: Notes

This scenario frames the networking module around two real operational challenges: latency for geographically distributed users and protection against malicious attacks. These are not independent problems — the solutions often overlap. CloudFront addresses both latency (via edge caching) and attack protection (via WAF integration), making it a useful starting point for discussing how network design affects both performance and security simultaneously.

#### Instructor notes

#### Student notes

Users from different regions around the country and globe have begun to complain about *latency issues* when accessing your Amazon Web Services (AWS) resources. You are asked to design a plan to make sure that your network is robust across different geographic regions and protected against *malicious attacks*.

:::

### Slide 4:

![Slide 4](slide_4.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 5:

![Slide 5](slide_5.png)

::: Notes

VPC networking security operates in four layered tiers: route tables, network ACLs, security groups, and host-based firewalls. Each layer provides controls at a different scope and granularity — route tables control where traffic can go at the subnet level; network ACLs control what traffic can enter or leave a subnet; security groups control what traffic reaches an instance; and host-based firewalls provide instance-level OS controls. Defense in depth requires multiple layers to be active because each has gaps the others can fill.

#### Instructor notes

#### Student notes

Securing and maintaining your AWS resources requires your attention to many different systems, including hardware, operating systems, storage, and networking. At the hardware level, AWS provides for the security of your resources. At the storage level, you are responsible for implementing encryption at rest and backup policies. At the operating system level, you must apply the necessary patches and updates needed by your resources. This module focuses on networking security and maintenance in the AWS Cloud. Networking in the AWS Cloud can be separated into two components: those inside the Amazon Virtual Private Cloud (Amazon VPC) and those outside the VPC at AWS edge network locations.

**Networking security and management layers** : The first part of this module focuses on the VPC layers of networking. Amazon VPC contains four distinct layers of networking security and management, as follows.

* **Route table** : The first and most broadly scoped layer of networking occurs at the VPC route table.
* **Network access control lists (network ACLs)** : Underneath the route table layer, a second layer of security and maintenance is provided by network ACLs. Network ACLs provide the ability to define default security behavior for your subnets in terms of IP addressing.
* **Security groups** : Amazon Elastic Compute Cloud (Amazon EC2) security groups constitute a third layer for network defense and maintenance in terms of the ports for your instances.
* **Host-based firewalls** : Host-based firewalls provide a last line of network access and control for your instances at the operating system level.

:::

### Slide 6:

![Slide 6](slide_6.png)

::: Notes

The edge network services — CloudFront, WAF, and ACM — extend your security perimeter beyond the VPC boundary to the internet edge. Moving security controls to the edge means that malicious traffic is blocked or filtered before it reaches your origin infrastructure, reducing both attack surface and cost. ACM's certificate management automation removes a common source of operational incidents: expired certificates that cause outages when renewals are missed.

#### Instructor notes

#### Student notes

AWS also provides network security and maintenance tools beyond those that operate within your VPC. These components of AWS network security and maintenance are examined in more depth in the second half of this module. The networking services that can function external to your VPC include Amazon CloudFront, AWS WAF, and AWS Certificate Manager (ACM).

**CloudFront** : CloudFront provides access to AWS edge network locations throughout the world. CloudFront serves as a fast content delivery network (CDN) to reduce the latency that users experience when interacting with AWS resources. You can integrate CloudFront with both AWS WAF and ACM to help secure network access to your cloud resources.

**AWS WAF** : AWS WAF is a web application firewall that protects against common exploits.

**ACM** : Use ACM to acquire and manage the certificates necessary for protecting data in transit through SSL/TLS. You can use ACM with AWS resources to make sure that data transmitted between your instances and these services is secure. Examples of the AWS resources include Elastic Load Balancing (ELB) load balancers and CloudFront distributions. You can also use ACM to create private certificates for your internal resources and centrally manage the certificate lifecycle.

:::

### Slide 7:

![Slide 7](slide_7.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 8:

![Slide 8](slide_8.png)

::: Notes

Amazon VPC is the foundational network isolation boundary in AWS: all other networking services build on top of it. The key design decisions in a VPC — CIDR range, subnet structure, routing, and security controls — affect both the functionality and security of everything deployed inside it. CIDR range planning is particularly important because the VPC's primary CIDR range cannot be changed after creation (you can only add secondary ranges); choosing a range that's too small can constrain future growth, while a range that overlaps with corporate networks prevents hybrid connectivity.

#### Instructor notes

#### Student notes

Amazon Virtual Private Cloud (Amazon VPC) is an isolated portion of the AWS Cloud that you can provision for deploying your AWS infrastructure. Amazon VPC is a virtual network, and as such, it supports multiple subnets, routing, and fine-grained security mechanisms. You have complete control over your virtual networking environment. Your control extends to the selection of your own IP address range, creation of subnets, and the configuration of route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure and straightforward access to resources and applications. When a VPC's tenancy attribute is set to dedicated, you can launch only Dedicated Instances in it. Dedicated Instances are instances hosted on AWS hardware without any other customers' instances.

**Common use cases** : A typical use is a three-tier system (web, application, and database), where each tier resides in its own subnet and each subnet is locked down to only the TCP ports that facilitate communication between the tiers. Amazon VPC provides advanced security features, such as security groups and network ACLs, for inbound and outbound filtering at the instance level and the subnet level. In addition, you can store data in Amazon Simple Storage Service (Amazon S3) and restrict access so that it's accessible only from instances in your VPC. For additional isolation, you can choose to launch *Dedicated Instances* that run on hardware dedicated to a single customer. You can also extend an on-premises corporate data center by creating a VPN connection to the VPC; provisioned AWS resources and on-premises infrastructure can then communicate with each other as if they were part of the same network.
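Because the VPC CIDR range and any on-premises networks you later connect to must not overlap, it can help to check candidate ranges up front. A minimal sketch using Python's standard `ipaddress` module, with hypothetical address ranges:

```python
import ipaddress

# Hypothetical ranges: a proposed VPC CIDR and an existing on-premises
# corporate network. Overlapping ranges would block a future VPN or
# Direct Connect hybrid connection.
vpc = ipaddress.ip_network("10.0.0.0/16")
corporate = ipaddress.ip_network("10.1.0.0/16")

print(vpc.overlaps(corporate))  # False: safe to connect the networks later
print(vpc.num_addresses)        # 65536 private addresses to grow into
```

The same check can be run against every corporate range before the VPC is created, since the primary range cannot be changed afterward.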

:::

### Slide 9:

![Slide 9](slide_9.png)

::: Notes

VPCs and subnets provide hierarchical address segmentation: the VPC defines the total address space, and subnets carve it into segments that can be placed in specific Availability Zones. Subnets are the unit of AZ placement, which makes subnet planning inseparable from availability planning. Subnets in different AZs with equivalent routing configurations enable multi-AZ deployments; making one subnet public (with internet gateway routing) and another private is the standard approach for separating internet-facing tiers from backend resources.

#### Instructor notes

#### Student notes

**VPCs** : The following are characteristics of VPCs:

* The base component is the VPC itself; you specify the size of its private address space with a Classless Inter-Domain Routing (CIDR) range.
* VPCs can span multiple Availability Zones within an AWS Region.
* VPCs have an implicit router and a default route table that routes local traffic within the VPC.
* VPCs are private networks until they are associated with an internet gateway and a route table rule that routes traffic through it.
* Default VPCs are assigned a CIDR range of 172.31.0.0/16, and default subnets within a default VPC are assigned /20 blocks within the VPC CIDR range.

**Subnets** : The following are common characteristics of subnets:

* Subnets segment the VPC address range further, providing logical groupings for resources divided by team, department, visibility (public or private), or resource type.
* A subnet can exist in only one Availability Zone.
* Subnet CIDR blocks within a VPC must not overlap, and a subnet cannot be larger than the VPC in which it is created.
* Subnet inbound and outbound traffic can be restricted by using network ACLs.
* You can create 200 subnets per VPC by default.
* For IPv4, the minimum subnet size is a /28 (16 IP addresses). For IPv6, the subnet size is fixed at /64, and you can allocate only one IPv6 CIDR block to a subnet.
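The subnet-sizing rules above can be verified with the standard `ipaddress` module. This sketch carves a default VPC's 172.31.0.0/16 range into the /20 blocks that default subnets use:

```python
import ipaddress

vpc = ipaddress.ip_network("172.31.0.0/16")   # default VPC CIDR range

# Carve the VPC range into non-overlapping /20 subnets, as a default VPC does.
subnets = list(vpc.subnets(new_prefix=20))
print(len(subnets))    # 16 subnets fit in the /16
print(subnets[0])      # 172.31.0.0/20

# The smallest allowed IPv4 subnet is a /28, which holds 16 addresses.
smallest = ipaddress.ip_network("172.31.0.0/28")
print(smallest.num_addresses)   # 16
```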

:::

### Slide 10:

![Slide 10](slide_10.png)

::: Notes

Route tables are the traffic routing mechanism in a VPC: they determine where packets from a subnet are directed. The main route table handles subnets that aren't explicitly associated with a custom table, which makes its configuration a default that affects all unmanaged subnets — a potential security risk if the main route table includes internet gateway routes. The internet gateway is the VPC's connection to the public internet; without a route through it, a subnet is private regardless of whether its resources have public IP addresses.

#### Instructor notes

#### Student notes

Each VPC comes with an implicit router that directs traffic between resources in each subnet and out of the subnet. A route table is a mechanism used for routing traffic originating from an associated subnet in a VPC. It contains a set of rules (also called routes) that determine where traffic is sent. For more information, see "VPC and Subnets" in the Amazon VPC User Guide at https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/amazon-vpc-limits.html#vpc-limits-vpcs-subnets.

**Main route table** : When a VPC is created, a main route table is also created, which routes local traffic anywhere within the VPC IP address range. (Additional routes can be added.) Any subnet that is not explicitly associated with a custom route table is associated to the main route table. Subnets must be associated with a route table that specifies the allowed routes for outbound traffic leaving the subnet. Subnets without a route through an internet gateway are private.

Routes within a route table consist of a destination and a target. The VPC reads the route like this: "Any traffic going to *destination* should be routed through *target*." A target can be any one of the following:

* A specific instance ID
* An elastic network interface ID
* An internet gateway (a special VPC construct that routes traffic to the internet)
* A virtual private gateway

Each VPC can have an internet gateway attached to it. Think of an internet gateway as a service that routes traffic to the internet. A default VPC with a public subnet will already have a route table entry to route traffic through an internet gateway. When an internet gateway has been attached to a VPC, route tables belonging to public subnets can use it to forward traffic out to the internet.
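When more than one route covers a destination, the VPC routes traffic by using the most specific matching route (longest prefix match). A toy lookup against an illustrative two-route table, with a made-up internet gateway ID:

```python
import ipaddress

# Toy route table: destination CIDR -> target (target names are illustrative).
routes = {
    "10.0.0.0/16": "local",           # traffic staying within the VPC
    "0.0.0.0/0": "igw-1234567890",    # everything else -> internet gateway
}

def lookup(dest_ip):
    """Return the target of the most specific route covering dest_ip."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [ipaddress.ip_network(cidr) for cidr in routes
               if ip in ipaddress.ip_network(cidr)]
    best = max(matches, key=lambda n: n.prefixlen)   # longest prefix wins
    return routes[str(best)]

print(lookup("10.0.1.5"))        # local (matched by the more specific /16)
print(lookup("93.184.216.34"))   # igw-1234567890 (matched only by 0.0.0.0/0)
```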

:::

### Slide 11:

![Slide 11](slide_11.png)

::: Notes

NAT gateways allow private subnet instances to initiate outbound internet connections (for software updates, API calls, and similar traffic) without accepting inbound connections from the internet. This is the standard pattern for backend instances that need internet access but should not be directly reachable from the internet. The recommendation to deploy a NAT gateway in each Availability Zone is worth emphasizing: a NAT gateway failure in one AZ that serves instances in other AZs can create unexpected cross-AZ traffic charges and single points of failure.

#### Instructor notes

#### Student notes

How can private subnets securely connect to the internet? This diagram illustrates the architecture of a VPC with a NAT gateway. The main route table sends internet traffic from the instances in the private subnet to the NAT gateway. The NAT gateway sends the traffic to the internet gateway by using the NAT gateway's Elastic IP address as the source IP address. Note: For increased availability, host a NAT gateway in each Availability Zone.

:::

### Slide 12:

![Slide 12](slide_12.png)

::: Notes

Network ACLs provide stateless, subnet-level traffic filtering. Being stateless means that return traffic must be explicitly allowed — if you allow inbound port 443, you must also allow the outbound ephemeral port range for response traffic, or connections will be established but replies will be dropped. This is a common source of misconfiguration that causes intermittent connectivity failures. Network ACLs and security groups serve complementary purposes: ACLs set default subnet behavior; security groups provide per-instance controls.

#### Instructor notes

#### Student notes

Within a VPC, security is controlled by using security groups and network ACLs. Network ACLs are associated with specific subnets in a VPC. Network ACLs are stateless: even if rules allow traffic to flow in one direction, you must explicitly allow responses to flow in the opposite direction, which means you must define both inbound and outbound rules. Define each rule by doing the following:

* Specify the type of rule (for example, custom TCP).
* Provide a rule number (rules are processed from lowest to highest).
* Define the port range that will be allowed or denied in or out.
* Specify the source or destination IP address or IP range that will be allowed or denied in or out.

After the network ACL has been defined, you can associate it with subnets in the VPC. Network ACLs are usually administered by network security or network administration teams. AWS recommends a set of network ACLs for each of the four configurations offered by the VPC wizard. Because these configurations involve supplying values that are specific to your private network, you must implement them manually. For a full list of recommended network ACLs for each VPC wizard configuration, see "Control traffic to subnets using network ACLs" (https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#vpc-recommended-nacl-rules).
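The stateless behavior is easiest to see as paired rules: an inbound allow for HTTPS needs an outbound allow covering the ephemeral port range that response traffic uses. A minimal sketch with illustrative rule numbers and ranges:

```python
# Because network ACLs are stateless, an allowed inbound flow needs a
# matching outbound rule for its response traffic. Values are illustrative.
inbound = [
    {"rule": 100, "protocol": "tcp", "ports": (443, 443), "action": "allow"},
]
outbound = [
    # Responses to inbound HTTPS leave on an ephemeral destination port,
    # so the outbound rule must cover the ephemeral range.
    {"rule": 100, "protocol": "tcp", "ports": (1024, 65535), "action": "allow"},
]

def response_allowed(ephemeral_port):
    """Check whether reply traffic on the given port can leave the subnet."""
    return any(r["action"] == "allow" and
               r["ports"][0] <= ephemeral_port <= r["ports"][1]
               for r in outbound)

print(response_allowed(49152))   # True: the reply can leave the subnet
print(response_allowed(80))      # False: outside the ephemeral range
```

If the outbound ephemeral-range rule is missing, connections are accepted inbound but replies are silently dropped, which shows up as intermittent hangs rather than clean failures.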

:::

### Slide 13:

![Slide 13](slide_13.png)

::: Notes

Network ACL rules are evaluated in order from lowest to highest rule number, with the first matching rule applied. This ordered evaluation means that rule numbering matters: inserting rules with numbers between existing rules allows you to refine behavior without rewriting the entire ACL. The nonmodifiable asterisk rule (implicit deny all) ensures that any traffic not explicitly allowed is blocked. Starting rule numbers in increments (100, 200, 300) rather than sequentially (1, 2, 3) is a good operational practice that leaves room to insert rules later without renumbering.

#### Instructor notes

#### Student notes

If you want to manage the network ACL for a specific subnet, navigate to the subnet from the VPC dashboard. In this example, the rules allow only HTTPS traffic in (inbound rule 100), and a corresponding outbound rule allows responses to that inbound traffic (outbound rule 100). The network ACL also includes an inbound rule that allows Remote Desktop Protocol (RDP) traffic into the subnet; outbound rule 120 allows the responses to leave (egress) the subnet. Each custom network ACL includes a nonmodifiable rule with a rule number represented by an asterisk. This rule starts the custom network ACL as closed (no traffic is permitted) and ensures that if a packet doesn't match any of the other rules, it is denied. As a packet arrives at the subnet, it is evaluated against the inbound (ingress) rules of the network ACL associated with the subnet. The rules are evaluated in order, starting with the lowest-numbered rule. AWS recommends that you number rules in increments so that you can insert new rules where you need to later. For more information, see "Network ACL Rules" in the Amazon VPC User Guide at https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html#nacl-rules.
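The evaluation order described above can be sketched as a small simulation: rules are checked in ascending rule-number order, the first match wins, and anything left over falls through to the implicit asterisk deny. Rule numbers and ports are illustrative:

```python
# Network ACL evaluation sketch: first matching rule wins, checked in
# ascending rule-number order. Values are illustrative, not a recommendation.
rules = [
    (100, "tcp", 443, "allow"),    # HTTPS in
    (120, "tcp", 3389, "allow"),   # RDP in
]

def evaluate(protocol, port):
    """Return the action for an inbound packet on the given protocol/port."""
    for _, proto, rule_port, action in sorted(rules):
        if proto == protocol and rule_port == port:
            return action
    return "deny"   # the implicit "*" rule: unmatched traffic is denied

print(evaluate("tcp", 443))    # allow
print(evaluate("tcp", 3389))   # allow
print(evaluate("tcp", 22))     # deny (SSH is not explicitly allowed)
```

Numbering in increments of 100 leaves room to insert, say, a rule 110 later without renumbering everything.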

:::

### Slide 14:

![Slide 14](slide_14.png)

::: Notes

Security groups are the primary instance-level network access control in AWS. Their stateful behavior — automatically allowing return traffic for established connections — makes them easier to manage than network ACLs for most use cases. The key constraints to understand are the defaults: no inbound traffic is allowed by default, but all outbound traffic is allowed. The zero default inbound traffic policy is a security benefit; the permissive default outbound policy is a common oversight that should be reviewed for compliance-sensitive environments.

#### Instructor notes

#### Student notes

**Security group** : A security group is a set of firewall rules for your instances or AWS elastic network interfaces. The security group allows or blocks inbound and outbound traffic. Security groups act at the Amazon EC2 instance or elastic network interface layer, not at the subnet layer. Therefore, you could assign each instance in a subnet in your VPC to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC. You can also change the security groups associated with any other elastic network interface on the instance. *Stateful*: If rules allow traffic to flow in one direction, responses can automatically flow in the opposite direction.

Some important things to note with security groups:

* By default, you can have 2,500 VPC security groups per Region. You can have 60 inbound and 60 outbound rules per security group (giving a total of 120 combined inbound and outbound rules).
* By default, no inbound traffic is allowed.
* By default, all outbound traffic is allowed.
* All outbound traffic in response to an inbound request is permitted.
* Instances that are part of the same security group cannot communicate with each other by default.

For more information, see "Security Groups" in the Amazon VPC User Guide at https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/amazon-vpc-limits.html#vpc-limits-security-groups.
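The stateful behavior can be contrasted with the network ACL example earlier: a security group conceptually tracks connections, so return traffic for an allowed inbound flow is permitted without any outbound rule. A toy connection-tracking sketch with illustrative addresses and ports:

```python
# Stateful filtering sketch: unlike a stateless network ACL, a security
# group tracks connections, so responses to allowed inbound traffic are
# permitted automatically. Values are illustrative.
inbound_allowed_ports = {443}
connections = set()   # the connection-tracking table

def inbound(src, src_port, dst_port):
    """Admit an inbound packet if a rule allows it, and track the flow."""
    if dst_port in inbound_allowed_ports:
        connections.add((src, src_port, dst_port))
        return True
    return False

def outbound_response(dst, dst_port, src_port):
    """Return traffic for a tracked flow needs no outbound rule."""
    return (dst, dst_port, src_port) in connections

inbound("198.51.100.7", 40123, 443)                     # HTTPS request arrives
print(outbound_response("198.51.100.7", 40123, 443))    # True: reply allowed
print(inbound("198.51.100.7", 40123, 22))               # False: SSH not allowed
```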

:::

### Slide 15:

![Slide 15](slide_15.png)

::: Notes

Security group rules reflect the traffic requirements of the specific role a resource plays: web servers need inbound HTTP/HTTPS; database servers need inbound traffic only from the application tier on the database port. This role-based security group design, rather than allowing broad CIDR ranges, limits blast radius when an instance is compromised. Referencing other security groups as sources (rather than IP ranges) is a more maintainable approach in dynamic environments where instance IPs change.

#### Instructor notes

#### Student notes

When you add or remove rules, they are automatically applied to all elastic network interfaces associated with the security group. The kind of rules you add might depend on the purpose of the instance. The table displayed here describes example rules for a security group for web servers. The web servers can receive HTTP and HTTPS traffic from all IPv4 addresses, and send SQL or MySQL traffic to a database server. A database server would need a different set of rules. For example, instead of inbound HTTP and HTTPS traffic, you can add a rule that allows inbound MySQL or Microsoft SQL Server access. For more information, see "Security Group Rules for Different Use Cases" in the Amazon EC2 User Guide for Linux Instances at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-rules-reference.html.

:::

### Slide 16:

![Slide 16](slide_16.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 17:

![Slide 17](slide_17.png)

::: Notes

VPC peering enables private connectivity between VPCs without routing traffic through the internet, but it comes with important limitations. Peering is non-transitive: if VPC A is peered with VPC B and VPC B is peered with VPC C, traffic from VPC A cannot reach VPC C through VPC B. This means that at scale, full mesh peering between many VPCs becomes unmanageable — the reason AWS Transit Gateway exists. Non-overlapping CIDR ranges are a prerequisite for peering, which is another argument for careful CIDR planning during VPC design.

#### Instructor notes

#### Student notes

Use VPC peering to connect two VPCs (for example, two or more VPCs run under different accounts by different departments in the same company, or VPCs belonging to two separate companies). VPC peering involves establishing a two-way connection through the AWS Management Console that one party must initiate and the other must accept. In addition, owners of both VPCs must establish routes that allow traffic to be sent between the networks. This routing relationship means that the two networks must not have overlapping address spaces. Possible use cases include partner company VPCs and connections between multiple divisions. Inter-Region VPC peering is also available, in which case data between Regions traverses privately on the AWS backbone. For more information about VPC peering, see https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/.

:::

### Slide 18:

![Slide 18](slide_18.png)

::: Notes

Site-to-Site VPN connects an on-premises network to a VPC over an encrypted tunnel using IPSec. The virtual private gateway is the AWS endpoint and the customer gateway is the on-premises endpoint — both must be configured for the tunnel to establish. VPN connectivity traverses the public internet, which means latency and throughput are not guaranteed. For predictable performance requirements, AWS Direct Connect provides a dedicated private connection; VPN is often used as a backup path for Direct Connect or for organizations with lower bandwidth requirements.

#### Instructor notes

#### Student notes

You can connect your VPC to a remote data center by using the following VPN connectivity options and scenario.

**Virtual private gateway** : A virtual private gateway is a gateway access point in AWS.

**Customer gateway** : A customer gateway is an on-premises hardware or software appliance. It must have a static, publicly routable IP address. AWS supports specific types of customer gateway devices.

The configuration for this scenario includes the following:

* A VPC with a /16 CIDR (example: 10.0.0.0/16), which provides 65,536 private IP addresses.
* A VPN-only subnet with a /24 CIDR (example: 10.0.0.0/24), which provides 256 private IP addresses.
* A VPN connection between your VPC and your network. The VPN connection consists of a virtual private gateway located on the Amazon side of the VPN connection and a customer gateway located on your side of the VPN connection.
* Instances with private IP addresses in the subnet range (examples: 10.0.0.5, 10.0.0.6, and 10.0.0.7), which permits the instances to communicate with each other and with other instances in the VPC.
* A main route table that contains a route that allows instances in the subnet to communicate with other instances in the VPC. Route propagation is enabled, so a route that allows instances in the subnet to communicate directly with your network appears as a propagated route in the main route table.

For more information, see "What Is AWS Site-to-Site VPN?" in the AWS Site-to-Site VPN User Guide at http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Welcome.html.

:::

### Slide 19:

![Slide 19](slide_19.png)

::: Notes

AWS Transit Gateway solves the scaling problem of VPC peering by providing a hub-and-spoke network topology. Instead of maintaining N×(N-1)/2 peering connections between N VPCs, each VPC connects once to the Transit Gateway, which handles routing between all attached networks. This dramatically reduces connection management overhead in large multi-VPC environments. Transit Gateway also supports attachment of VPN connections and Direct Connect gateways, making it the central routing hub for hybrid network architectures.

#### Instructor notes

#### Student notes

Use AWS Transit Gateway to connect your VPCs and your on-premises networks to a single gateway. With Transit Gateway, you must create and manage only a single connection from the central gateway into each VPC, on-premises data center, or remote office across your network. Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks that act like spokes.

Routing through a transit gateway operates at Layer 3, where the packets are sent to a specific next-hop attachment based on their destination IP addresses. You can configure route tables to propagate routes from the route tables for the attached VPCs and VPN connections. When a packet comes from one attachment, it is routed to another attachment using the route table that matches the destination IP address.

A transit gateway attachment is both a source and a destination of packets. You can attach the following resources to your transit gateway if they are in the same Region as the transit gateway: one or more VPCs, and one or more VPN connections. For more information, see AWS Transit Gateway at https://aws.amazon.com/transit-gateway/.
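The scaling advantage over full-mesh peering comes down to simple arithmetic: N VPCs need N × (N − 1) / 2 peering connections for full connectivity, but only N transit gateway attachments. A quick illustration:

```python
# Connection-count comparison: full-mesh VPC peering vs. a transit gateway.
def peering_connections(n_vpcs):
    """Peering connections needed for full mesh connectivity."""
    return n_vpcs * (n_vpcs - 1) // 2

def tgw_attachments(n_vpcs):
    """Transit gateway attachments: one per VPC."""
    return n_vpcs

for n in (5, 20, 100):
    print(n, peering_connections(n), tgw_attachments(n))
# 5 VPCs: 10 peerings vs 5 attachments
# 100 VPCs: 4950 peerings vs 100 attachments
```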

:::

### Slide 20:

![Slide 20](slide_20.png)

::: Notes

VPC endpoints allow traffic to AWS services to stay within the AWS network rather than traversing the internet, improving security and eliminating the need for internet gateways, NAT gateways, or public IP addresses for private-subnet access to services like S3 and DynamoDB. Gateway endpoints (for S3 and DynamoDB) are free; interface endpoints (for most other services) incur hourly and data processing charges. For services accessed at high volume, endpoint costs are typically far lower than the data transfer cost through a NAT gateway.

#### Instructor notes

#### Student notes

With VPC endpoints, you can privately connect your VPC to supported AWS services and to VPC endpoint services powered by AWS PrivateLink. You can connect without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service, and traffic between your VPC and another service does not leave the AWS network.

VPC endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.

VPC endpoints come in two types: *interface endpoints* and *gateway endpoints*. Amazon DynamoDB and Amazon S3 use gateway endpoints; the remainder of the services that can take advantage of PrivateLink use interface endpoints.

By default, AWS Identity and Access Management (IAM) users do not have permission to work with endpoints. You can create an IAM user policy that grants users the permissions to create, modify, describe, and delete endpoints. For more information, see "AWS PrivateLink Concepts" in the Amazon VPC AWS PrivateLink User Guide at https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html.

:::

### Slide 21:

![Slide 21](slide_21.png)

::: Notes

Route 53 Resolver provides automatic DNS resolution for private hosted zones within a VPC, which simplifies internal service discovery without requiring a separate DNS infrastructure. The split-horizon capability — where the same domain name resolves to different addresses depending on whether the query comes from inside or outside the VPC — is particularly useful for hybrid environments where internal services use private IP addresses but the same domain name should resolve externally to public endpoints. This eliminates the need for separate internal and external naming schemes.

#### Instructor notes

#### Student notes

When you create a VPC by using Amazon VPC, Amazon Route 53 Resolver automatically answers DNS queries for local VPC domain names for EC2 instances (ec2-192-0-2-44.compute-1.amazonaws.com) and records in private hosted zones (acme.examplecorp.com). For all other domain names, Route 53 Resolver performs recursive lookups against public name servers. For more information, see "What Is Amazon Route 53 Resolver?" in the Amazon Route 53 Developer Guide at https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver.html.

:::

### Slide 22:

![Slide 22](slide_22.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 23:

![Slide 23](slide_23.png)

::: Notes

VPC Flow Logs provide visibility into network-level traffic patterns: accepted and rejected connections, source and destination IPs, ports, and protocols. This data is essential for security investigation (identifying unexpected connections or port scans), network troubleshooting (diagnosing overly restrictive security group rules), and compliance evidence. The important limitation is what Flow Logs don't capture: DNS queries to the VPC resolver, DHCP traffic, and traffic to the EC2 metadata service are excluded, which means some traffic remains invisible to this mechanism.

#### Instructor notes

#### Student notes

Use the VPC Flow Logs feature to capture information about the IP traffic going to and from network interfaces in your VPC. You can publish flow log data to Amazon CloudWatch Logs or Amazon S3; after you've created a flow log, you can retrieve and view its data in the chosen destination. You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in that subnet or VPC is monitored.

Flow logs do not capture all IP traffic. The following types of traffic are not logged:

* Traffic generated by instances when they contact the Amazon DNS server (if you use your own DNS server, all traffic to that DNS server is logged)
* Dynamic Host Configuration Protocol (DHCP) traffic
* Traffic generated by a Windows instance for Amazon Windows license activation

Flow logs help you with several tasks, such as the following:

* Diagnosing overly restrictive security group rules
* Monitoring the traffic that is reaching your instance
* Determining the direction of the traffic to and from the network interfaces

For more information, see "Logging IP Traffic Using VPC Flow Logs" in the Amazon VPC User Guide at https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html.

:::

### Slide 24:

![Slide 24](slide_24.png)

::: Notes

Flow log records provide structured, per-flow data that enables analysis at multiple levels: individual connection investigation, aggregate traffic pattern analysis, and security group effectiveness assessment. The `action` field (ACCEPT or REJECT) is particularly valuable for security work: a high volume of REJECTs from an unexpected source IP can indicate reconnaissance or attack activity. Custom flow log formats allow you to include only the fields you need, which reduces storage costs for high-traffic environments.

#### Instructor notes

#### Student notes

Amazon VPC Flow Logs has the following flow log record fields:

* **version** : The VPC Flow Logs version. If you use the default format, the version is 2.
* **account-id** : The AWS account ID of the owner of the source network interface for which traffic is recorded.
* **interface-id** : The ID of the network interface for which the traffic is recorded.
* **srcaddr** : The source address for incoming traffic, or the IPv4 or IPv6 address of the network interface for outgoing traffic on the network interface.
* **dstaddr** : The destination address for outgoing traffic, or the IPv4 or IPv6 address of the network interface for incoming traffic on the network interface.
* **srcport** : The source port of the traffic.
* **dstport** : The destination port of the traffic.
* **protocol** : The Internet Assigned Numbers Authority (IANA) protocol number of the traffic.
* **packets** : The number of packets transferred during the flow.
* **bytes** : The number of bytes transferred during the flow.
* **start** : The time, in Unix seconds, when the first packet of the flow was received within the aggregation interval.
* **end** : The time, in Unix seconds, when the last packet of the flow was received within the aggregation interval.
* **action** : The action that is associated with the traffic: ACCEPT or REJECT. The recorded traffic was accepted or rejected by the security groups and network ACLs.
* **log-status** : The logging status of the flow log: OK, NODATA, or SKIPDATA.

You can specify a custom format for the flow log record. For a custom format, specify which fields to return in the flow log record, and the order in which they should appear. A custom format can also help to reduce the need for separate processes to extract specific information from published flow logs. For more information, see "Flow Log Records" in the Amazon VPC User Guide at https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html#flow-log-records.
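The default-format record is a simple space-delimited line whose fields appear in the order listed above. As an illustration (not an official parser), the following sketch splits a version 2 record into named fields; the sample line is fabricated for demonstration:

```python
# Parse a version-2 (default format) VPC flow log record into a dict.
# The field list mirrors the default format; the sample line is illustrative.

FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_flow_log_record(line: str) -> dict:
    """Split a space-delimited default-format record into named fields."""
    values = line.split()
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return dict(zip(FIELDS, values))

sample = ("2 123456789010 eni-1235b8ca123456789 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
record = parse_flow_log_record(sample)
print(record["action"], record["dstport"])   # ACCEPT 22
```

From records parsed this way, you could, for example, count REJECT actions per source address to surface possible port-scan activity.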

:::

### Slide 25:

![Slide 25](slide_25.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 26:

![Slide 26](slide_26.png)

::: Notes

Amazon CloudFront provides two distinct benefits that are often treated separately but work together: performance through edge caching that reduces latency for geographically distributed users, and security through integration with AWS Shield, WAF, and ACM. CloudFront is often framed as a CDN, but its role in the security architecture is equally important — edge locations absorb DDoS traffic before it reaches your origin, and WAF rules at the edge block malicious requests without consuming origin resources.

#### Instructor notes

#### Student notes

The remainder of this module covers networking security and maintenance services that can extend your network defense and monitoring capabilities beyond your VPC. The first service is Amazon CloudFront, which uses AWS edge locations. CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay).

**Security** : CloudFront provides protection against network and application layer attacks. CloudFront, AWS Shield, AWS WAF, and Route 53 work together seamlessly to create a flexible, layered security perimeter against multiple types of attacks. Attacks include network and application layer distributed denial of service (DDoS) attacks. With CloudFront, you can deliver your content, APIs, or applications through SSL/TLS. Advanced SSL features are enabled automatically. You can use ACM to create a custom SSL certificate and deploy to your CloudFront distribution.

**Integration** : CloudFront is integrated with AWS services, such as Amazon S3, Amazon EC2, Elastic Load Balancing, Route 53, and AWS Media Services. They are all accessible through the same console. You can programmatically configure all features in the CDN by using APIs or the AWS Management Console.

For more information, see "What Is Amazon CloudFront?" in the Amazon CloudFront Developer Guide at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html.

:::

### Slide 27:

![Slide 27](slide_27.png)

::: Notes

CloudFront's global edge network of 225+ points of presence means that users in most major population centers are physically close to an edge location. This geographical proximity is what reduces latency: content cached at the edge doesn't traverse long-haul internet paths back to the origin. The regional mid-tier caches reduce the frequency of cache misses that require reaching the origin, providing a performance benefit even for content that doesn't fit in the smallest edge caches.

#### Instructor notes

#### Student notes

**CloudFront global edge network** : To deliver content to end users with lower latency, CloudFront uses a global network of 225+ points of presence (215+ edge locations and 12+ Regional mid-tier edge caches) in cities across 45+ countries.

:::

### Slide 28:

![Slide 28](slide_28.png)

::: Notes

CloudFront's request flow illustrates the cache-first principle: edge locations serve requests from cache when possible, only reaching back to the origin for cache misses. This origin offloading is critical for scalability — a high-traffic event that would overwhelm an origin server may be handled entirely from edge caches. The two-step process for cache misses (forward to origin, cache the response) means that the first request for a new object incurs full origin latency, but subsequent requests are served at edge speed.

#### Instructor notes

#### Student notes

After you configure CloudFront to deliver your content, here's what happens when users request your files:

1. A user accesses your website or application and requests one or more files, such as an image file and an HTML file.
2. DNS routes the request to the CloudFront edge location that can best serve the request, typically the nearest in terms of latency.
3. At the edge location, CloudFront checks its cache for the requested files. If the files are in the cache, CloudFront returns them to the user.
4. If the files are not in the cache, CloudFront compares the request with the specifications in your distribution and forwards the request for the files to your origin server for the corresponding file type. For example, it forwards the request to your Amazon S3 bucket for image files or to your HTTP server for HTML files.
5. The origin servers send the files back to the edge location, and the edge location forwards the files to the user.

For more information, see "How CloudFront Delivers Content" in the Amazon CloudFront Developer Guide at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html.

:::

### Slide 29:

![Slide 29](slide_29.png)

::: Notes

A CloudFront distribution specifies the origin(s) that CloudFront retrieves content from when the edge cache doesn't have the requested object. A distribution can have multiple origins for different content types — static assets from S3 and dynamic API responses from an Application Load Balancer in the same distribution. The custom headers option enables origin security by allowing the origin to verify that requests come through CloudFront rather than directly, by checking for a secret header value that only CloudFront sends.

#### Instructor notes

#### Student notes

To use CloudFront, first create a CloudFront distribution. The distribution instructs CloudFront which origin servers to get your files from when users request the files through your website or application. When you create a distribution, you provide information about one or more locations (origins) where you store the original versions of your web content. CloudFront gets your web content from your origins and serves it to viewers through a worldwide network of edge servers. Each origin is an S3 bucket or an HTTP server (for example, a web server). When you create a distribution, specify the following values for each origin:

* **Origin Domain Name** : This is the DNS domain name of the S3 bucket or HTTP server from which you want CloudFront to get objects for this origin.
* **Origin Path** : If you want CloudFront to request your content from a directory in your AWS resource or your custom origin, enter the directory path, beginning with a slash (/).
* **Origin ID** : Enter a description for the origin. This value helps to distinguish multiple origins in the same distribution from one another.
* **Origin Custom Headers** : Specify headers if you want CloudFront to add custom headers whenever it sends a request to your origin. You can use custom headers to do the following: identify requests from CloudFront; determine which requests come from a particular distribution; enable cross-origin resource sharing; control access to content.

For more information, see "Origin Settings" in the Amazon CloudFront Developer Guide at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOrigin.
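One common use of origin custom headers is letting the origin verify that a request actually came through CloudFront. The sketch below shows the origin-side check only; the header name and secret value are hypothetical placeholders you would configure yourself, not AWS-defined values.

```python
# Sketch: an origin checks for a secret custom header that only the
# CloudFront distribution is configured to send. The header name and
# value here are hypothetical placeholders, not AWS-defined values.
import hmac

EXPECTED_HEADER = "X-Origin-Verify"
EXPECTED_SECRET = "replace-with-a-long-random-value"

def request_came_via_cloudfront(headers: dict) -> bool:
    """Reject requests that bypass CloudFront and hit the origin directly."""
    supplied = headers.get(EXPECTED_HEADER, "")
    # compare_digest avoids leaking the secret through timing differences
    return hmac.compare_digest(supplied, EXPECTED_SECRET)

print(request_came_via_cloudfront({"X-Origin-Verify": EXPECTED_SECRET}))  # True
print(request_came_via_cloudfront({}))                                    # False
```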

:::

### Slide 30:

![Slide 30](slide_30.png)

::: Notes

CloudFront distribution-level settings control security (WAF web ACL, SSL certificate), identity (alternate domain names), and operational behavior (logging, default root object). The WAF web ACL association at the distribution level applies rules to all requests processed by that distribution, making it the enforcement point for application layer security. Specifying a default root object prevents exposing the directory listing of your distribution — a simple security improvement that is often overlooked.

#### Instructor notes

#### Student notes

When setting up a CloudFront distribution, specify the following values for the entire distribution:

* **AWS WAF Web ACL** : If you want to use AWS WAF to allow or block requests based on specified criteria, choose the web ACL to associate with this distribution.
* **Alternate Domain Names (CNAMEs)** : (Optional) Specify one or more domain names that you want to use for URLs for your objects. Specify to use this domain name instead of the domain name that CloudFront assigns when you create your distribution. You must own the domain name, or have authorization to use it, which you verify by adding an SSL/TLS certificate.
* **SSL Certificate** : Choose the Default CloudFront Certificate (\*.cloudfront.net) option if you want to use the CloudFront domain name in the URLs for your objects, such as https://d111111abcdef8.cloudfront.net/image1.jpg.
* **Supported HTTP Versions** : Choose the HTTP versions that you want your distribution to support when viewers communicate with CloudFront.
* **Default Root Object** : (Optional) Specify the object that you want CloudFront to request from your origin (for example, index.html) when a viewer requests the root URL of your distribution (http://www.example.com/). Specify to use this default object instead of an object in your distribution (http://www.example.com/product-description.html). Specifying a default root object avoids exposing the contents of your distribution.
* **Standard Logging** : This specifies that CloudFront logs information about each request for an object and stores the log files in an S3 bucket. You can enable or disable logging at any time.
* **Distribution State** : Indicate whether you want the distribution to be enabled or disabled after it's deployed.

For more information, see "Distribution Settings" in the Amazon CloudFront Developer Guide at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesGeneral.

:::

### Slide 31:

![Slide 31](slide_31.png)

::: Notes

Cache behavior settings control how CloudFront handles different URL patterns within a distribution, allowing different TTLs, viewer protocol policies, and caching strategies for different content types. Static assets that rarely change can use long TTLs for maximum cache efficiency; dynamic content or authenticated responses may need TTL of 0 to bypass caching. The viewer protocol policy is a security control: requiring HTTPS prevents content from being served over unencrypted connections, protecting both data in transit and user privacy.

#### Instructor notes

#### Student notes

By taking advantage of cache behavior, you can configure a variety of CloudFront functionality for a given URL path pattern for files on your website. For example, one cache behavior might apply to all .jpg files in the images directory on a web server used as a CloudFront origin server. The following is functionality that you can configure for each cache behavior. The list of options is not exhaustive. Additional options are available.

* **Viewer Protocol Policy** : Choose the protocol policy that you want viewers to use to access your content in CloudFront edge locations.
* **Cache and origin request settings** : CloudFront cache and origin request policies provide enhanced granular control to configure headers, query strings, and cookies. These settings are used to compute the cache key or are forwarded to your origin from your CloudFront distributions. Additionally, you can configure the cache key and origin request settings independently as account-level policies that can be applied across multiple distributions.
* **Maximum Time to Live (TTL)** : Specify the maximum amount of time, in seconds, that you want objects to stay in CloudFront caches before CloudFront queries your origin to determine whether the object has been updated. The default value for Maximum TTL is 31536000 seconds (one year).
* **Default TTL** : Specify the default amount of time, in seconds, that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. The default value for Default TTL is 86400 seconds (one day).
* **Enable Real-time Logs** : Deliver logs in real time to a data stream in Amazon Kinesis Data Streams.

For more information, see "Cache Behavior Settings" in the Amazon CloudFront Developer Guide at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior.
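The TTL arithmetic behind these settings is simple: an object cached at some time is fresh until the configured number of seconds has elapsed. The function and variable names below are illustrative, not CloudFront internals; only the two default values come from the settings described above.

```python
# Sketch of the TTL arithmetic behind cache expiry: an object cached at
# time `cached_at` is considered fresh for `ttl_seconds` seconds.

DEFAULT_TTL = 86_400       # one day, the Default TTL default
MAXIMUM_TTL = 31_536_000   # one year, the Maximum TTL default

def is_fresh(cached_at: int, now: int, ttl_seconds: int = DEFAULT_TTL) -> bool:
    """True while the object may be served without re-checking the origin."""
    return (now - cached_at) < ttl_seconds

print(is_fresh(cached_at=0, now=3_600))              # True (1 hour old)
print(is_fresh(cached_at=0, now=90_000))             # False (>1 day old)
print(is_fresh(0, 90_000, ttl_seconds=MAXIMUM_TTL))  # True under a 1-year TTL
```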

:::

### Slide 32:

![Slide 32](slide_32.png)

::: Notes

Cache invalidation is the mechanism for removing stale content from CloudFront edges, but it should be used sparingly because it incurs costs above the monthly free limit. TTL expiration is the preferred approach for content that doesn't need immediate replacement; versioned object names (e.g., `app.v2.js`) are even better because they avoid invalidation entirely — the new URL populates fresh caches on first request. Object invalidation is appropriate for urgent corrections, such as removing incorrect or harmful content that can't wait for TTL expiration.

#### Instructor notes

#### Student notes

You can expire cached content in three ways. The first two are preferred. TTL is the most efficient if the replacement does not need to be immediate.

**Time To Live (TTL)** : Even if you set the TTL for a particular origin to 0, CloudFront still caches content from that origin. For each request, CloudFront makes a GET request to the origin with an If-Modified-Since header, which signals that CloudFront can continue to use the cached content if it hasn't changed at the origin.

**Update objects** : The second way requires more effort but is immediate (some content management systems might support it). Although you can update existing objects in a CloudFront distribution and reuse the same object names, it is not recommended. CloudFront distributes objects to edge locations only when the objects are requested, not when you put new or updated objects in your origin. If you update an existing object in your origin with a newer version that has the same name, an edge location won't get the new version from your origin until both of the following occur: the old version of the file in the cache expires, and a user requests the file at that edge location.

**Invalidate objects** : You can invalidate the object at the edge location by using the console or the AWS Command Line Interface (AWS CLI). You are limited in the number of invalidation requests that you can make each month without charge. For more information, see the following topics in the Amazon CloudFront Developer Guide: "Managing How Long Content Stays in the Cache (Expiration)" at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html; "Paying for File Invalidation" at https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html#PayingForInvalidation.
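A practical alternative to updating objects in place is embedding a version in the object name: a changed file gets a new URL, so edge caches fetch it on the first request while the old URL simply ages out via TTL, and no invalidation is needed. The naming scheme below is one illustrative convention, not an AWS feature.

```python
# Sketch: content-hashed ("versioned") object names sidestep invalidation.
# A changed file gets a new URL, so edge caches fetch it on first request
# while the old URL ages out via TTL. Naming scheme is illustrative.
import hashlib

def versioned_name(filename: str, content: bytes) -> str:
    """Embed a short content hash in the object name, e.g. app.3b5d5c37.js."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"

v1 = versioned_name("app.js", b"console.log('v1')")
v2 = versioned_name("app.js", b"console.log('v2')")
print(v1 != v2)  # True: changed content yields a new, cache-missing URL
```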

:::

### Slide 33:

![Slide 33](slide_33.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 34:

![Slide 34](slide_34.png)

::: Notes

AWS WAF protects against application layer attacks — SQL injection, XSS, and other OWASP Top 10 vulnerabilities — that operate at the HTTP/HTTPS level and are invisible to network-level controls. Because WAF runs at the CloudFront edge, Application Load Balancer, or API Gateway, it inspects requests before they reach application code. This application-aware filtering is complementary to DDoS protection: AWS Shield handles volumetric attacks; WAF handles targeted application exploits.

#### Instructor notes

#### Student notes

AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits. These exploits might affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications. You can create security rules that do the following: block common attack patterns, such as SQL injection or cross-site scripting (XSS); filter out specific traffic patterns that you define.

**Application layer protection** : This gives you an additional layer of protection from web attacks that attempt to exploit vulnerabilities in custom or third-party web applications. In addition, AWS WAF provides real-time metrics and captures raw requests that include details about IP addresses and geo locations. AWS WAF provides application layer protection and is tightly integrated with CloudFront, Amazon API Gateway, and the Application Load Balancer. These are services that AWS customers commonly use to deliver content for their websites and applications.

**CloudFront integration** : When you use AWS WAF on CloudFront, your rules run in all AWS edge locations around the world, close to your end users. This means that security doesn't come at the expense of performance. Blocked requests are stopped before they reach your web servers. When you use AWS WAF on the Application Load Balancer, your rules run in the Region and can be used to protect internet-facing and internal load balancers.

:::

### Slide 35:

![Slide 35](slide_35.png)

::: Notes

Web ACLs are the WAF policy document: they contain ordered rules, and requests are evaluated against each rule in sequence until a match is found or the default action is applied. The default action matters: setting it to 'allow' means everything not explicitly blocked is permitted; setting it to 'block' creates a whitelist posture. Using CloudWatch metrics and request sampling to validate that rules are working as intended before enforcing them in production is an important operational practice — a misconfigured WAF rule can block legitimate traffic at scale.

#### Instructor notes

#### Student notes

Web access control lists (web ACLs) are used to protect a set of AWS resources. You create a web ACL and define its protection strategy by adding rules. Rules define criteria for inspecting web requests and specify how to handle requests that match the criteria. You set a default action for the web ACL that indicates whether to block or allow requests that pass the rule inspections. Using the AWS WAF console, you can create a web ACL and rules to block and filter web requests. If a web ACL has more than one rule, a web request needs to match only one of the rules for the corresponding action to apply. AWS WAF evaluates the rules in the order that they're listed in the web ACL. You can use either of the following methods to determine whether your rules are working properly:

* **Monitoring CloudWatch metrics** : Use metrics such as AllowedRequests, BlockedRequests, or PassedRequests.
* **Viewing sample requests** : If you have request sampling enabled, you can view a sample of the requests that an associated resource has forwarded to AWS WAF for inspection. You can view which rule the request matched, and whether the rule is configured to allow or block requests.

For more information, see "AWS Managed Rules Rule Groups List" in the AWS WAF Developer Guide at https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-list.html.
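The ordered-evaluation model can be sketched as follows. This is a toy simulation of the principle, not the AWS WAF API: the rule shapes, names, and sample requests are all illustrative.

```python
# Sketch of web ACL evaluation order: rules are checked in sequence and the
# first match decides the action; otherwise the default action applies.
# Rule shapes and sample requests are illustrative, not the AWS WAF API.

RULES = [
    {"name": "block-bad-ip",
     "matches": lambda r: r["ip"] == "203.0.113.9",
     "action": "BLOCK"},
    {"name": "allow-admin-paths",
     "matches": lambda r: r["path"].startswith("/admin"),
     "action": "ALLOW"},
]
DEFAULT_ACTION = "ALLOW"   # "BLOCK" here would create a whitelist posture

def evaluate(request: dict) -> str:
    """Return the action of the first matching rule, else the default."""
    for rule in RULES:
        if rule["matches"](request):
            return rule["action"]
    return DEFAULT_ACTION

print(evaluate({"ip": "203.0.113.9", "path": "/"}))       # BLOCK
print(evaluate({"ip": "198.51.100.1", "path": "/home"}))  # ALLOW (default)
```

Note how rule order matters: a request from the blocked IP is rejected even if a later rule would have allowed it.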

:::

### Slide 36:

![Slide 36](slide_36.png)

::: Notes

WAF rules define the inspection criteria for each request: what to look at (headers, query strings, body, IP, geo location) and what to do when the criteria match (block, allow, count). Managed rule groups from AWS and Marketplace sellers provide a library of pre-built rules for common threats, reducing the time to implement a baseline security posture. Custom rules allow you to address application-specific threats and business logic that managed rules can't cover. The rule group reuse model enables a single security team to define rules that are applied consistently across multiple distributions and load balancers.

#### Instructor notes

#### Student notes

Each AWS WAF rule contains a statement with conditions that define the inspection criteria, and an action to take if a web request meets the criteria. When a web request meets the criteria, that's a match. You can use rules to block matching requests or to allow matching requests through.

**Managed and custom rules** : AWS WAF supports both managed and custom rules. Managed rules are a set of rules written, curated, and managed by AWS and AWS Marketplace sellers. Use these rules to get started quickly and protect your web application or APIs against common threats. You can use rules individually or in reusable rule groups. AWS Marketplace managed rule groups are available by subscription through AWS Marketplace. After you create your web ACL, you can associate it with one or more AWS resources. The resource types that you can protect using AWS WAF web ACLs are CloudFront distributions, Amazon API Gateway APIs, and Application Load Balancers. Managed rules can be used along with your custom AWS WAF rules.

**Rule groups** : Rule groups and web ACLs both contain rules, which are defined in the same manner in both places. They differ in the following ways: You can reuse a single rule group in multiple web ACLs by adding a rule group reference statement to each web ACL; you can't reuse a web ACL. Rule groups don't have default actions; in a web ACL, you set a default action for each rule or rule group that you include. Each individual rule inside a rule group or web ACL has an action defined. You don't directly associate a rule group with an AWS resource; to protect resources using a rule group, use the rule group in a web ACL.

For more information, see "AWS WAF Rules" in the AWS WAF, AWS Firewall Manager, and AWS Shield Advanced Developer Guide at https://docs.aws.amazon.com/waf/latest/developerguide/waf-rules.html.

:::

### Slide 37:

![Slide 37](slide_37.png)

::: Notes

AWS Shield Standard provides automatic DDoS protection at no additional charge for all AWS customers; Shield Advanced provides enhanced detection, the DRT for expert assistance, and financial protections for scaling costs during attacks. The choice between Standard and Advanced depends on your workload's risk profile and budget: Standard is sufficient for most workloads; Advanced is appropriate for internet-facing applications where a DDoS attack would cause direct business impact. Note that WAF and Shield address different attack vectors: WAF handles application-layer attacks; Shield handles volumetric network attacks.

#### Instructor notes

#### Student notes

AWS provides AWS Shield Standard and AWS Shield Advanced for protection against DDoS attacks. AWS Shield Standard is automatically included at no extra cost beyond what you already pay for AWS WAF and your other AWS services.

**AWS Shield Advanced** : For added protection against DDoS attacks, AWS offers AWS Shield Advanced. AWS Shield Advanced provides enhanced DDoS attack detection and monitoring for application-layer traffic to the following resources: Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and resources attached to an Elastic IP address. As an AWS Shield Advanced customer, you can contact the 24/7 AWS DDoS Response Team (DRT) for assistance during a DDoS attack. You also have exclusive access to advanced, real-time metrics and reports for extensive visibility into attacks on your AWS resources. With the DRT's assistance, AWS Shield Advanced includes intelligent DDoS attack detection and mitigation for attacks on the network layer (Layer 3), transport layer (Layer 4), and application layer (Layer 7).

:::

### Slide 38:

![Slide 38](slide_38.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 39:

![Slide 39](slide_39.png)

::: Notes

AWS Certificate Manager removes the manual, error-prone processes of SSL/TLS certificate procurement, deployment, and renewal. Certificates managed by ACM are automatically renewed before expiration — eliminating the operational risk of certificate-related outages from forgotten renewals. ACM's integration with ELB, CloudFront, and API Gateway means you can deploy certificates to internet-facing endpoints without ever handling the private key material, which is a significant security improvement over traditional certificate management approaches.

#### Instructor notes

#### Student notes

SSL and TLS are transport protocols that allow for encrypted and secure communications over network endpoints. They provide network security for data in transit. For the endpoints in your infrastructure to use these protocols, they must have certificates installed. AWS Certificate Manager (ACM) automates the process of deploying, updating, and managing these certificates throughout your AWS infrastructure.

* **Central management** : You can centrally manage ACM SSL/TLS certificates in an AWS Region from the AWS Management Console, AWS CLI, or ACM APIs.
* **Private certificate authority** : AWS Private Certificate Authority (AWS Private CA) is a managed private certificate authority (CA) service that helps you to securely manage the lifecycle of your private certificates. AWS Private CA provides you with a highly available private CA service without the upfront investment and ongoing maintenance costs of operating your own private CA or private CA hierarchy.
* **Secure key management** : ACM is designed to protect and manage the private keys used with SSL/TLS certificates. Strong encryption and best practices for key management are used when protecting and storing private keys.
* **Integration with AWS services** : ACM is integrated with other AWS services. This tight integration means that you can provision an SSL/TLS certificate and deploy it with your Elastic Load Balancing load balancers, CloudFront distribution, or API in Amazon API Gateway. ACM also works with AWS Elastic Beanstalk and AWS CloudFormation for public email-validated certificates. This integration helps you manage public certificates and use them with your applications in the AWS Cloud.
* **Third-party certificates** : ACM streamlines the import of SSL/TLS certificates issued by third-party CAs.

:::

### Slide 40:

![Slide 40](slide_40.png)

::: Notes

ACM Private CA extends certificate management to internal resources that don't require public trust. Private certificates are used for mutual TLS (mTLS) between services, IoT device authentication, and internal HTTPS that prevents traffic sniffing within your own network. The managed CA service eliminates the operational burden of running your own certificate authority infrastructure while maintaining the CA hierarchy flexibility needed for complex enterprise trust models. The root/subordinate CA hierarchy enables administrative separation between the high-security root and the more operationally active subordinate CAs.

#### Instructor notes

#### Student notes

Private certificates are used for identifying and securing communication between connected resources on private networks, such as servers, mobile and Internet of Things (IoT) devices, and applications. AWS Private CA is a managed private CA service that helps you to securely manage the lifecycle of your private certificates. AWS Private CA provides a highly available private CA service without the upfront investment and ongoing maintenance costs of operating your own private CA. AWS Private CA extends the capabilities of ACM certificate management to private certificates. With this extension, you can create and centrally manage public and private certificates.

**Root CA** : CA administrators can use AWS Private CA to create a complete CA hierarchy that extends from a root CA and subordinate CAs to end-entity certificates. A CA hierarchy provides strong security and restrictive access controls for the most trusted root CA at the top of the trust chain, while allowing more permissive access and bulk certificate issuance for subordinate CAs lower in the chain. A root CA is a cryptographic building block and root of trust upon which certificates can be issued. It is composed of a private key for signing (issuing) certificates and a root certificate that identifies the root CA. The root CA binds the private key to the name of the CA.

**Subordinate CAs** : Beneath a root CA in a CA hierarchy are subordinate CAs. A subordinate CA can do the following: directly issue certificates; act as an intermediate CA that signs other subordinate CAs to create organizational structure; act as an issuing CA that issues end-entity certificates; act as both an intermediate and an issuing CA.

:::

### Slide 41:

![Slide 41](slide_41.png)

::: Notes

CA hierarchy planning is a security architecture exercise as much as a technical one: the depth and structure of the hierarchy reflects the organizational trust model and the risk tolerance for different use cases. A single root CA provides simplicity but means that compromising the root compromises everything; multiple root CAs provide isolation but add management overhead. The recommendation to set CloudWatch alarms on root CA activity reflects the high-value, low-frequency nature of root CA operations — any unexpected root CA activity is a significant security signal.

#### Instructor notes

#### Student notes

AWS Private CA gives you complete, cloud-based control over your
company's private public key infrastructure (PKI). Thorough planning is
essential for a PKI that is secure, maintainable, extensible, and suited
to your company's needs. By using a CA hierarchy, you can separate
administration, control, and security policies for the root CA and
subordinate CAs. You can maintain restrictive controls and policies for
your root CA while allowing more permissive access for the subordinate
CAs, delegating the bulk issuance of end-entity certificates to them.
When planning your CA hierarchy, consider using a subordinate CA for
each application or group within your company. You can also use separate
root CAs to future-proof against corporate reorganizations,
divestitures, or acquisitions. When changes occur, an entire root CA
hierarchy can move cleanly with the division it secures.

You can use CloudWatch to collect and track metrics, set alarms, and
automatically react to changes in your AWS resources. Because the root
CA requires restrictive access, you can use CloudWatch Events to set
alarms on activity involving the root CA. Subordinate CAs do not require
the same level of auditing as the root CA. For more information, see
"Planning Your AWS Private CA Deployment" in the AWS Private Certificate
Authority User Guide at
https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaPlanning.html.
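
The alarm guidance above can be made concrete with an event pattern
that matches AWS Private CA API calls recorded by CloudTrail. The
following is a sketch, not a complete deployment: the ARN is a
hypothetical placeholder, and the listed API names are a small sample
of root CA operations you might choose to watch.

```python
import json

# Hypothetical placeholder for your root CA's ARN.
ROOT_CA_ARN = ("arn:aws:acm-pca:us-east-1:111122223333:"
               "certificate-authority/EXAMPLE")

# CloudWatch Events / EventBridge pattern matching AWS Private CA
# API activity (delivered via CloudTrail) that targets the root CA.
event_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["acm-pca.amazonaws.com"],
        # High-value, low-frequency root CA operations to flag.
        "eventName": [
            "IssueCertificate",
            "UpdateCertificateAuthority",
            "DeleteCertificateAuthority",
        ],
        "requestParameters": {
            "certificateAuthorityArn": [ROOT_CA_ARN],
        },
    },
}
print(json.dumps(event_pattern, indent=2))
```

A rule with this pattern would route any matching root CA activity to a
target such as an Amazon SNS topic, turning unexpected root CA
operations into an immediate notification.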

:::

### Slide 42:

![Slide 42](slide_42.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 43:

![Slide 43](slide_43.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 44:

![Slide 44](slide_44.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 45:

![Slide 45](slide_45.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 46:

![Slide 46](slide_46.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 47:

![Slide 47](slide_47.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 48:

![Slide 48](slide_48.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 49:

![Slide 49](slide_49.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 50:

![Slide 50](slide_50.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 51:

![Slide 51](slide_51.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 52:

![Slide 52](slide_52.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 53:

![Slide 53](slide_53.png)

::: Notes


#### Instructor notes

#### Student notes

:::
