### Slide 1:

![Slide 1](slide_1.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 2:

![Slide 2](slide_2.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 3:

![Slide 3](slide_3.png)

::: Notes

This slide sets the module scenario: Example Corp. is deploying a new application and needs a consistent, repeatable approach across business units. The underlying problem — inconsistent deployments producing uneven results — is a common organizational challenge. Before covering the tools, ask students to think about what 'consistent deployment' requires: is it a technical problem, a process problem, or both?

#### Instructor notes

#### Student notes

Example Corp. is about to roll out a new customer application. In the past, your deployments have varied across divisions and suffered uneven performance. Your manager asks you to develop a plan on how to best deploy new applications for the company's various business units. The plan should include a way to track resources through the deployment lifecycle and a way to simplify the creation of AWS accounts.

:::

### Slide 4:

![Slide 4](slide_4.png)

::: Notes

This slide outlines four requirements that shape the deployment approach: standards compliance, repeatability, trackability, and monitoring. These aren't independent — they reinforce each other. Repeatability reduces drift; trackability enables cost attribution; monitoring detects when things go wrong. Ask students which of these they find hardest to achieve in practice, and why.

#### Instructor notes

#### Student notes

Deployments must adhere to the following:

* **Abide by company standards** : *CloudOps* administrators manage and operate their company's deployments. These deployments must abide by company standards, which include compliance requirements, account creation, and other best practices.
* **Are repeatable** : Repeatable deployments reduce human error and maintain consistency in your environment.
* **Are trackable** : When you can track the components in your environment, you can organize your accounts and break down the IT spend.
* **Are monitored** : Monitoring is vital to maintaining a healthy infrastructure. Administrators must monitor performance, configuration changes, security concerns, and other infrastructure health indicators. A critical component is a mechanism to collect logs and metrics.

:::

### Slide 5:

![Slide 5](slide_5.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 6:

![Slide 6](slide_6.png)

::: Notes

Tagging is simple in principle but difficult to maintain in practice. It's easy to tag resources at creation and let the tags drift as resources are reassigned, repurposed, or forgotten. A tagging strategy is only as good as its enforcement: without automated checks, tag values become stale and unreliable, undermining the cost allocation and operations support they were meant to enable.

#### Instructor notes

#### Student notes

The following section covers how to implement a tagging strategy. Tagging is an important tool for cloud operations administrators. You can use tags to track resources within your accounts.

:::

### Slide 7:

![Slide 7](slide_7.png)

::: Notes

Tags are key-value pairs that function as metadata attached to AWS resources. Their power comes from consistent and systematic use — a single untagged resource can break a cost report, while inconsistent values make queries unreliable. Before covering the mechanics, ask students to think about what categories of tags they would need to answer their most important operational and financial questions about their environment.

#### Instructor notes

#### Student notes

You can assign metadata to your AWS resources as tags. Each tag consists of two parts: a user-defined key and a value. Use the tags to manage, search for, organize, and filter resources. You can create tags to categorize resources by purpose, owner, environment, or other criteria. For more information, see the "Best Practices for Tagging AWS Resources" AWS Whitepaper (https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html).
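As a minimal sketch of the key-value structure described above, the helper below converts a flat dictionary of tags into the list-of-`Key`/`Value` shape that many AWS APIs (for example, the EC2 `CreateTags` action) expect. The tag names are illustrative, not prescribed.

```python
# Sketch: convert a flat dict of tag keys and values into the
# [{"Key": ..., "Value": ...}] format used by many AWS APIs.
# The "anycompany:*" tag names below are illustrative.

def to_aws_tags(tags: dict) -> list:
    """Return tags as a sorted list of {"Key": k, "Value": v} entries."""
    return [{"Key": k, "Value": v} for k, v in sorted(tags.items())]

tags = {
    "anycompany:cost-center": "CC-1234",
    "anycompany:environment-type": "production",
    "anycompany:application-id": "customer-portal",
}
print(to_aws_tags(tags))
```

With boto3, a list in this shape could be passed as the `Tags` parameter of `ec2.create_tags`.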

:::

### Slide 8:

![Slide 8](slide_8.png)

::: Notes

Tags serve many different purposes simultaneously: organization, cost allocation, operations support, automation, access control, and security risk management. Each use case places different demands on your tagging strategy — cost allocation needs accurate, consistent values; automation needs machine-readable values; access control needs values that users cannot manipulate to gain unintended access. Design your tagging strategy with all use cases in mind, because inconsistency in one area tends to compromise the others.

#### Instructor notes

#### Student notes

**Tags for resource organization** : Tags are a great way to organize AWS resources in the AWS Management Console. You can search and filter by tags. By default, the console organizes your account by services, but you can customize your console by using the AWS Resource Groups. You can organize and consolidate your AWS resources based on tags. For more information about Resource Groups, see "What Are Resource Groups?" in the AWS Resource Groups User Guide (https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html).

**Tags for cost allocation** : Use AWS Cost Explorer and detailed billing reports to break down AWS costs by tag.

**Tags for operations support** : Tags help support some of the day-to-day operations, such as incident management, backups, restores, and operating system patching.

**Tags for automation** : Resource tags are often used to filter resources during infrastructure automation activities. You can use tags to opt into or out of automated tasks. You can also use tags to identify specific resources to archive, update, or delete.

**Tags for access control** : AWS Identity and Access Management (IAM) policies support tag-based conditions to restrict access.

**Tags for security risk management** : Use tags to identify resources that require heightened security risk-management practices. For example, Amazon Elastic Compute Cloud (Amazon EC2) instances that process sensitive or confidential information can be tagged. Then you can provide stricter compliance checks on those instances.

:::

### Slide 9:

![Slide 9](slide_9.png)

::: Notes

Developing a tagging strategy requires organizational alignment, not just technical decisions. A cross-functional team must agree on which tags are required, who owns each tag, and what valid values look like. The biggest challenge is usually organizational: different teams have different naming preferences, different concepts of what an 'environment' or 'application' means, and different levels of diligence about maintaining tags after initial creation.

#### Instructor notes

#### Student notes

**Employ a cross-functional team to identify tag requirements** : To develop a comprehensive tagging strategy, it's best to assemble a cross-functional team to identify tagging requirements. Typical stakeholders include the following:

* IT finance
* Information security
* Application owners
* Cloud automation teams
* Middleware
* Database (DB) administration teams
* Process owners for functions such as patching, backups, monitoring, job scheduling, and disaster recovery

AWS recommends holding tagging workshops with these functional teams. By attending the workshops, representatives from different functional teams hear the perspectives of others.

**Assign owners to define tag value propositions** : Identify an owner for each tag used. The owner is responsible for articulating the value of their tags. There is an indirect cost with maintaining tags, and you should adopt only those tags that bring value.

**Focus on required and conditionally required tags** : Tags can be required, conditionally required, or optional. Focus on required and conditionally required tags. You can use optional tags if they conform to the naming and governance policies. For more information, review the following in the AWS General Reference Guide: "Tagging Naming Limits and Requirements" (https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html#tag-conventions) and "Tagging Governance" (https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html#tag-strategies-governance).

**Start small** : Start with a small set of tags that you know are necessary, and create new tags as the need arises. Use this approach instead of creating an overabundance of tags that you anticipate in the future.

**Use tags consistently** : Use a consistent approach to tagging your resources, because your operations rely on this tagging. For example, if you adopt a tagging strategy for cost allocation, and tags are not used in a consistent manner with accurate values, your cost breakdown will be skewed.

:::

### Slide 10:

![Slide 10](slide_10.png)

::: Notes

Tag naming conventions are easy to establish but hard to enforce retroactively. AWS tags are case-sensitive, which means `Environment` and `environment` are different keys that produce different results in queries and reports. The cost of inconsistent naming compounds over time as more resources are created with different conventions. Automated enforcement at resource creation — through IAM policies or CloudFormation — is more reliable than relying on human discipline.

#### Instructor notes

#### Student notes

It is important to standardize your naming convention for AWS tags. Names for AWS tags are case sensitive, so ensure that they are used consistently. AWS predefines certain tags, and some AWS services create tags automatically. Many of the tags defined by AWS are specified in lowercase and use hyphens to separate words in the name. Prefixes are used to identify the source service of the tag. The following are some examples:

* `lambda-console:blueprint` identifies the blueprint used as a template for an AWS Lambda function.
* `elasticbeanstalk:environment-name` identifies the application that created the resource.

Consider writing your tags in lowercase to avoid confusing these variants: `AnyCompany:ProjectID`, `ANYCOMPANY:ProjectID`, `anycompany:projectID`, `anycompany:Projectid`. Use hyphens to separate words, and use prefixes to identify the organization. This ensures that tags are clearly identified as AWS predefined tags, third-party tags, or your company's tags. The following are some examples:

* `anycompany:cost-center` to identify the internal cost center code
* `anycompany:environment-type` to identify whether the environment is development, test, or production
* `anycompany:application-id` to identify the application that the resource was created for

For more information, review the "Best Practices for Tagging AWS Resources" AWS Whitepaper (https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html).
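A convention like the one above can be checked in code. The sketch below validates tag keys against an assumed rule set (lowercase, hyphen-separated words, an `anycompany` organization prefix); both the prefix and the rules are illustrative, not an AWS requirement.

```python
import re

# Sketch: validate tag keys against an assumed naming convention --
# lowercase, hyphen-separated words, with an organization prefix.
# The "anycompany" prefix and the rules themselves are illustrative.

TAG_KEY_PATTERN = re.compile(r"^anycompany:[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_tag_key(key: str) -> bool:
    """Check a tag key against the lowercase, hyphenated convention."""
    return bool(TAG_KEY_PATTERN.match(key))

assert is_valid_tag_key("anycompany:cost-center")
assert not is_valid_tag_key("AnyCompany:ProjectID")  # mixed case fails
assert not is_valid_tag_key("anycompany:projectID")  # uppercase fails
```

A check like this could run in a provisioning pipeline to catch the case-sensitivity variants before they reach billing reports.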

:::

### Slide 11:

![Slide 11](slide_11.png)

::: Notes

Cost allocation tags create a direct connection between tagging discipline and financial visibility. But they are point-in-time — a tag applied after a resource runs for a month won't capture the earlier costs. The implication is that cost allocation tagging must be part of the provisioning process, not something that gets added later. Consider: what governance would you need to ensure that every new resource is tagged for cost allocation before it begins incurring charges?

#### Instructor notes

#### Student notes

Align cost allocation tags with financial reporting dimensions. You can specify tags as cost allocation tags in the AWS Billing and Cost Management console. By doing this, you can add your custom tag elements to your AWS billing data. This additional information in your billing reports helps you organize your AWS spending. Remember that cost allocation tags are point-in-time elements. They will appear in the billing data only after you have specified the tags in the Billing and Cost Management console and tagged resources with them.

:::

### Slide 12:

![Slide 12](slide_12.png)

::: Notes

Cost allocation through tags requires alignment with your organization's financial reporting structure — which varies significantly across companies. The warning against multi-value cost allocation tags reflects a real complexity: shared resources like networking or security services don't belong to a single cost center, and the workarounds tend to be fragile. Consider: what shared resources in your environment would be most difficult to attribute to specific teams or projects?

#### Instructor notes

#### Student notes

Align cost allocation tags with your financial reporting dimensions to simplify and streamline your AWS cost management. Review your current IT financial reporting practices to identify the cost allocation tags that you need. The financial reporting dimensions typically include the following:

* Business unit
* Cost center
* Product
* Geographic area
* Department

Use both linked accounts and cost allocation tags to help break down your AWS spending. Billing reports contain the AWS account number for billable resources. Therefore, creating different accounts for different financial entities in your company is a way to segregate costs clearly. With consolidated billing, users can link these accounts to consolidate payment. To develop your tagging strategy, you must understand your company's account structure.

**Avoid multi-value cost allocation tags** : Shared resources might require you to allocate costs to several applications, projects, or departments. A common approach to dividing this cost for these resources is to use multi-valued cost allocation tags. Using multi-value cost allocation tags creates two challenges. First, the data must be post-processed. Second, you must establish a process to maintain the tag values. If possible, consider using an existing cost-sharing or chargeback mechanism. If none exists, consider creating one.

**Tag everything** : On your billing report, you can tag most types of resources that generate costs. To get the most accurate data for your financial analysis and reporting, apply your cost allocation tags across all resource types that support tagging.
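The cost-breakdown idea above can be sketched in a few lines. The line items here stand in for rows of a billing report; the tag key, values, and costs are illustrative. Note how the untagged resource ends up in its own bucket, which is exactly the skew the "tag everything" guidance warns about.

```python
from collections import defaultdict

# Sketch: break down spend by a single cost allocation tag.
# The line items and tag values below are illustrative stand-ins
# for rows of a billing report.

def cost_by_tag(line_items, tag_key):
    """Sum costs grouped by the value of one cost allocation tag."""
    totals = defaultdict(float)
    for item in line_items:
        value = item["tags"].get(tag_key, "(untagged)")
        totals[value] += item["cost"]
    return dict(totals)

items = [
    {"cost": 12.50, "tags": {"anycompany:cost-center": "CC-1234"}},
    {"cost": 3.25,  "tags": {"anycompany:cost-center": "CC-5678"}},
    {"cost": 4.00,  "tags": {}},  # untagged spend skews the breakdown
]
print(cost_by_tag(items, "anycompany:cost-center"))
```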

:::

### Slide 13:

![Slide 13](slide_13.png)

::: Notes

Automating tag application at resource creation is far more reliable than relying on people to tag manually. CloudFormation, Service Catalog, IAM policies, and Organizations tag policies each enforce tagging at different layers with different strengths. Using IAM to require tags at creation is particularly effective because it prevents resources from being created without required tags — but it also requires the policy to be maintained as tag requirements evolve.

#### Instructor notes

#### Student notes

Use automation to proactively tag resources. AWS offers a few tools to help you implement your tagging strategy. Some tools ensure that tags are consistently applied when resources are created.

* **AWS CloudFormation** : Use the CloudFormation Resource Tags property to apply tags to certain resource types when they are created.
* **AWS Service Catalog** : Service Catalog provides tags so that you can categorize your resources. Two types of tags are provided: AutoTags and TagOptions. AutoTags are tags that identify information about the origin of a provisioned resource in AWS Service Catalog. Service Catalog automatically applies these tags to provisioned resources. TagOptions are key-value pairs managed in AWS Service Catalog that serve as templates for creating AWS tags. With TagOptions libraries, you can specify required tags in addition to their range of allowable values.
* **IAM policies** : Include condition keys, such as `aws:RequestTag` and `aws:TagKeys`, which prevent resources from being created if specific tag values are not present.
* **AWS Organizations tag policies** : Using this feature, you can define rules on how tags can be used on AWS resources in your accounts in AWS Organizations. You can use tag policies to adopt a standardized approach for tagging AWS resources.

For more information, see "Resource Tag" in the AWS CloudFormation User Guide (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html), "Managing Tags in AWS Service Catalog" in the AWS Service Catalog Administrator Guide (https://docs.aws.amazon.com/servicecatalog/latest/adminguide/managing-tags.html), "Tag Policies" in the AWS Organizations User Guide (https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html), and "Controlling Access to and for IAM Users and Roles Using Tags" in the AWS Identity and Access Management User Guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_iam-tags.html).
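As one sketch of the IAM approach above, the function below builds a policy document that denies `ec2:RunInstances` unless the request carries an allowed value for a required tag, using the `aws:RequestTag` condition key. The tag key and values are illustrative; validate any real policy before attaching it.

```python
import json

# Sketch: an IAM policy document that denies launching EC2 instances
# unless the request includes a required tag with an allowed value.
# The tag key and values are illustrative placeholders.

def require_tag_policy(tag_key, allowed_values):
    """Build a deny-unless-tagged policy for ec2:RunInstances."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyRunInstancesWithoutRequiredTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    f"aws:RequestTag/{tag_key}": allowed_values
                }
            },
        }],
    }

policy = require_tag_policy("anycompany:environment-type",
                            ["development", "test", "production"])
print(json.dumps(policy, indent=2))
```

A policy like this must be kept in sync with the tag dictionary as requirements evolve, which is the maintenance cost noted in the instructor notes.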

:::

### Slide 14:

![Slide 14](slide_14.png)

::: Notes

Resource Groups give you a logical view of resources that cuts across the service-based organization of the AWS console. But a Resource Group is only as useful as the tagging discipline behind it — if resources aren't consistently tagged, group membership will be incomplete or incorrect. This reinforces the earlier point: tagging strategy and Resource Group design need to be planned together.

#### Instructor notes

#### Student notes

Use Resource Groups and Tag Editor to create, view, and manage logical groups of resources, such as the following:

* Applications
* Specific layers of an application stack
* Production or development environments

Resource groups are essential to managing, monitoring, and automating tasks on large numbers of resources at one time.

:::

### Slide 15:

![Slide 15](slide_15.png)

::: Notes

Resource Groups can be defined by tags or by CloudFormation stack membership. Tag-based groups offer flexibility but require consistent tagging; stack-based groups are more precise but only cover resources created through that stack. Each approach reflects a trade-off between flexibility and accuracy: choose based on how your team creates and manages resources, not on which method seems simpler in isolation.

#### Instructor notes

#### Student notes

AWS Resource Groups provides two general methods for defining a resource group. Both methods involve using a query to identify the members of a group. The first method relies on tags applied to AWS resources to add resources to a group. Using this method, you apply the same key-value pair tags to resources of various types in your account. Then use the AWS Resource Groups service to create a group based on that tag pair. The second method is based on resources available in an individual AWS CloudFormation stack. Using this method, choose a CloudFormation stack and then choose resource types in the stack that you want to include in the group.
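The first (tag-based) method can be sketched as building the `ResourceQuery` structure that Resource Groups uses. The shape below follows the `TAG_FILTERS_1_0` query format, with an illustrative tag key and value; with boto3 it could be passed to the `resource-groups` client's `create_group` call.

```python
import json

# Sketch: build a tag-based ResourceQuery for AWS Resource Groups
# (TAG_FILTERS_1_0 format). The tag key and value are illustrative.

def tag_query(tag_key, values):
    """Return a ResourceQuery that groups resources by one tag."""
    inner = {
        "ResourceTypeFilters": ["AWS::AllSupported"],
        "TagFilters": [{"Key": tag_key, "Values": values}],
    }
    # The Query field is itself a JSON-encoded string.
    return {"Type": "TAG_FILTERS_1_0", "Query": json.dumps(inner)}

query = tag_query("anycompany:application-id", ["customer-portal"])
print(query["Query"])
```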

:::

### Slide 16:

![Slide 16](slide_16.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 17:

![Slide 17](slide_17.png)

::: Notes

AMIs are the foundational mechanism for launching EC2 instances consistently. Understanding AMIs well — what they contain, how they're created, and how they age — is essential for managing deployment consistency. An AMI created six months ago may contain unpatched software; a shared AMI from the Marketplace may include components you haven't reviewed. Ask students: how do you know that the AMI you're launching is current and trusted?

#### Instructor notes

#### Student notes

The following section covers how to use Amazon Machine Images (AMIs) to deploy EC2 instances consistently.

:::

### Slide 18:

![Slide 18](slide_18.png)

::: Notes

An AMI is a snapshot of an instance configuration at a point in time. Multiple instances can be launched from the same AMI, enabling consistent deployments across a fleet. However, the 'same configuration' guarantee only holds at launch — instances launched from the same AMI will diverge over time as they receive updates and accumulate changes. Consider how you manage this configuration drift after instances are running.

#### Instructor notes

#### Student notes

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. Specify a source AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.

An AMI includes the following: A template for the root volume for the instance (for example, an operating system, an application server, and applications), Launch permissions that control which AWS accounts can use the AMI to launch instances, A block device mapping that specifies the volumes to attach to the instance when it's launched. You can use AMIs provided by AWS or create your own custom AMIs. You can buy or sell AMIs through the following sources: AWS user community, AWS Marketplace. Alternatively, you can select one of your own custom AMIs and share with other AWS accounts. For more information, see "Amazon Machine Images (AMI)" in the Amazon EC2 User Guide for Linux Instances (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html).

:::

### Slide 19:

![Slide 19](slide_19.png)

::: Notes

User data enables instance configuration at launch without manual intervention — a foundation for repeatable, automated deployments. However, user data scripts run as root and their output can contain sensitive information if not handled carefully. Scripts that download from S3 or invoke the AWS CLI need the instance to have appropriate IAM permissions, and those permissions must be scoped to only what's needed for bootstrapping.

#### Instructor notes

**User data** : You can deploy EC2 instances in an automated and repeatable manner. You can use user data to achieve this. You can supply a script to a Linux or Windows instance that runs a series of commands.

**Linux and Windows scripts** : For Linux instances, scripts can take the form of shell scripts. For Windows instances, they can be batch or PowerShell scripts. Using user data, you can set up a new instance without needing to log in directly to the instance.

Scripts do not have to do all the work themselves. A user data script could, for example, download and run a longer script stored in an Amazon Simple Storage Service (Amazon S3) bucket. You could also download and install a configuration management system, such as Chef or Puppet. Then, start an initialization task from a reusable Chef cookbook or Puppet module. You can also call AWS Command Line Interface (AWS CLI) commands from your scripts. Include code or instructions for installing the AWS CLI earlier in your script. For more information, see "How Can I Utilize User Data to Automatically Run a Script with Every Restart of My Amazon EC2 Linux Instance?" in the Knowledge Center (https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2).

EC2Launch is a set of Windows PowerShell scripts that replace the EC2Config service on Windows Server 2016 AMIs. It performs tasks by default during the initial instance boot, including running user data. For more information, see "Configure a Windows Instance Using EC2Launch" in the Amazon EC2 User Guide for Windows Instances (https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2launch.html).
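As a sketch, the code below composes a Linux user data shell script and base64-encodes it, since the underlying EC2 API expects user data in base64 (tools such as the console and boto3's `run_instances` handle the encoding for you). The script contents are illustrative.

```python
import base64

# Sketch: compose a user data shell script in code and base64-encode
# it as the EC2 API expects. The script contents are illustrative.

user_data = "\n".join([
    "#!/bin/bash",
    "yum update -y",                 # patch the instance at first boot
    "yum install -y httpd",         # install a web server
    "systemctl enable --now httpd",  # start it and enable it on boot
])

encoded = base64.b64encode(user_data.encode("utf-8")).decode("ascii")
# Round-trip check: decoding recovers the original script.
assert base64.b64decode(encoded).decode("utf-8") == user_data
```

A longer bootstrap, as the notes suggest, would typically have this script fetch and run a maintained script from Amazon S3 instead of inlining everything.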

:::

### Slide 20:

![Slide 20](slide_20.png)

::: Notes

Base AMIs establish a consistent starting point for all instances, ensuring required software and configurations are present from launch. The trade-off is maintenance: a base AMI that isn't updated regularly becomes a source of stale or vulnerable software. Organizations must decide how frequently to rebuild AMIs and how to migrate running instances to newer images — and that process requires a plan for testing and validation before rollout.

#### Instructor notes

#### Student notes

Suppose that your company wants a base set of software included on all instances launched in its cloud. This might include the following:

* Home-grown utilities
* In-house tools for using AWS services
* Advanced software for enterprise-scale activities, such as monitoring and intrusion detection

In this circumstance, consider using the base AMI approach. Using this approach, you preconfigure all the software that your company requires on an EC2 instance, and then create an AMI from that instance. The new AMI would then become the AMI that is used to create all new instances within the company. You can then specify this AMI in launch templates that assign an instance type, network configuration, login key pair, and storage configuration.

Organizationally, you could enforce the use of the base AMI in the following ways:

* Create a custom toolset that becomes the gateway toolset for creating AWS resources, which forces creation of instances from the pool of custom AMIs.
* Create a foundational AMI with certain configuration and preinstalled software. Customize through user data and configuration software.
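The launch-template idea above can be sketched as the request parameters that pin the approved base AMI. The AMI ID, key name, and tag values are placeholders; with boto3, a dict like this could be passed to `ec2.create_launch_template(**params)`.

```python
# Sketch: request parameters for a launch template that pins the
# company's base AMI. The AMI ID, key name, and tags are placeholders.

params = {
    "LaunchTemplateName": "anycompany-base",
    "LaunchTemplateData": {
        "ImageId": "ami-0123456789abcdef0",  # the approved base AMI
        "InstanceType": "t3.micro",
        "KeyName": "anycompany-ops",
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "anycompany:environment-type",
                      "Value": "development"}],
        }],
    },
}
```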

:::

### Slide 21:

![Slide 21](slide_21.png)

::: Notes

Creating an AMI from a running instance is straightforward, but the details matter: instances are rebooted by default to ensure consistency, which means downtime. If you disable the reboot, the AMI may not represent a fully consistent state. The resulting AMI is region-specific, which has implications for multi-region deployments — you'll need to copy AMIs to each region where they're needed.

#### Instructor notes

#### Student notes

You can use the AWS Management Console, AWS CLI, or API to create AMIs. The resulting AMI is anchored to the current Region. Instances are rebooted by default to ensure consistency. AMIs backed by Amazon Elastic Block Store (Amazon EBS) are created with all attached volumes. For more information, see "Create an Amazon EBS-Backed Linux AMI" in the Amazon EC2 User Guide for Linux Instances (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html).
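As a sketch of the API path, these are the request parameters for creating an AMI from a running instance; the instance ID and name are placeholders, and with boto3 they could be passed to `ec2.create_image(**params)`.

```python
# Sketch: request parameters for creating an AMI from a running
# instance. The instance ID and image name are placeholders.

params = {
    "InstanceId": "i-0123456789abcdef0",
    "Name": "anycompany-base-2024-01",
    "Description": "Company base image with standard tooling",
    # NoReboot=False (the default) reboots the instance so that the
    # file system is in a consistent state when the image is taken.
    "NoReboot": False,
}
```

Setting `NoReboot` to `True` avoids the downtime but gives up the consistency guarantee noted above.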

:::

### Slide 22:

![Slide 22](slide_22.png)

::: Notes

AMI snapshots stored in S3 incur ongoing storage costs that accumulate as you create new AMI versions without deregistering old ones. AMI management requires a lifecycle policy: how many versions do you keep, how long do you retain them, and who is responsible for cleanup? Organizations often discover unexpected storage costs from AMI snapshots that were never cleaned up after the AMIs were superseded.

#### Instructor notes

#### Student notes

You can incur storage and data retrieval costs for snapshots of Amazon EBS volumes stored in Amazon S3.
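A rough estimate makes the lifecycle point concrete. The per-GB-month rate below is an assumed placeholder, not a quoted price; check current Amazon EBS snapshot pricing for your Region.

```python
# Sketch: rough monthly storage cost for retained AMI snapshots.
# The rate below is an assumed placeholder, not an actual AWS price.

ASSUMED_PRICE_PER_GB_MONTH = 0.05  # USD, illustrative only

def monthly_snapshot_cost(snapshot_sizes_gb):
    """Estimate the cost of keeping these snapshots for one month."""
    return sum(snapshot_sizes_gb) * ASSUMED_PRICE_PER_GB_MONTH

# Ten superseded 8 GB AMI snapshots that were never cleaned up:
print(round(monthly_snapshot_cost([8] * 10), 2))
```

Small per-snapshot amounts add up across accounts and versions, which is why a retention and cleanup policy matters.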

:::

### Slide 23:

![Slide 23](slide_23.png)

::: Notes

Windows AMI creation has additional requirements compared to Linux: Sysprep is required to generalize the instance, and EC2Launch handles the initialization sequence. These steps reset machine-specific identifiers and configuration, which is necessary for creating AMIs that can be launched as clean instances. Skipping or incorrectly configuring Sysprep can produce AMIs that create instances with conflicts, particularly in domain-joined environments.

#### Instructor notes

#### Student notes

Creating images directly from snapshots is not supported with Windows volumes. EC2Launch supports a shutdown when using the Sysprep option. For more information, see "Configure a Windows Instance Using EC2Launch" in the Amazon EC2 User Guide for Windows Instances (https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2launch.html).

:::

### Slide 24:

![Slide 24](slide_24.png)

::: Notes

AMIs are region-specific, which means multi-region architectures require AMI copy operations to be part of the deployment pipeline. Copying AMIs is straightforward, but it requires planning: encrypted AMIs need the destination region to have access to the appropriate KMS keys, and copy operations can fail if dependent resources like AMIs or kernels aren't present in the target region. Build AMI distribution into your deployment process rather than treating it as an afterthought.

#### Instructor notes

#### Student notes

You can copy an AMI within the same AWS Region or to a different Region. You can use the AWS Management Console, the AWS CLI, the SDKs, or the Amazon EC2 API, all of which support the `CopyImage` action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs, including encrypted AMIs and AMIs with encrypted snapshots. A copy might fail if AWS cannot find a corresponding Amazon Kernel Image (AKI) in the target AWS Region. For more information, see "Copy an AMI" in the Amazon EC2 User Guide for Linux Instances (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html).
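As a sketch, these are the request parameters for copying an AMI into another Region; the IDs, Regions, and KMS key alias are placeholders. With boto3, the dict could be passed to `ec2.copy_image(**params)` using a client created in the destination Region.

```python
# Sketch: request parameters for copying an AMI into another Region.
# IDs, Regions, and the KMS key alias are placeholders.

params = {
    "SourceImageId": "ami-0123456789abcdef0",
    "SourceRegion": "us-east-1",
    "Name": "anycompany-base-copy",
    "Encrypted": True,                    # re-encrypt in the target Region
    "KmsKeyId": "alias/anycompany-ami",   # key must exist in that Region
}
```

Because the KMS key must already exist in the destination Region, key distribution belongs in the same pipeline step as the AMI copy.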

:::

### Slide 25:

![Slide 25](slide_25.png)

::: Notes

EC2 Image Builder automates the AMI creation and maintenance pipeline, removing the need for manual processes. The value is consistency and testability: every image goes through the same build, test, and distribution steps. The trade-off is that it introduces another service to configure and maintain. Consider whether the overhead of managing Image Builder pipelines is justified by the scale of your AMI management requirements.

#### Instructor notes

#### Student notes

EC2 Image Builder is a fully managed AWS service that helps you create, maintain, validate, share, and deploy Linux or Windows images for use with Amazon EC2 and on premises.

Image Builder provides the following benefits:

* **Improved IT productivity** : Image Builder simplifies the process to build, maintain, and deploy secure and compliant images without the need to write and maintain automation code. Offloading the automation to Image Builder frees up resources and saves IT time.
* **Simpler to secure** : With Image Builder, you can create images with only the essential components, reducing your exposure to security vulnerabilities. You can also apply AWS security settings to further secure your images to meet internal security criteria.
* **Simple image management for both AWS and on premises** : Use Image Builder with AWS VM Import/Export (VMIE) to create and maintain golden images for Amazon EC2 (AMI) and on-premises virtual machine (VM) formats, including Hyper-V Virtual Hard Disk (VHDX), stream-optimized ESX Virtual Machine Disk (VMDK), and Open Virtualization Format (OVF).
* **Built-in validation support** : Validate your images with AWS tests and your own tests before using them in production. You can set policies that deploy the images to specific AWS Regions only after they pass tests that you specify.
* **Centralized policy enforcement** : Image Builder provides version control for efficient revision management. It integrates with AWS Resource Access Manager (AWS RAM) and AWS Organizations so that you can use automation scripts, recipes, and images across AWS accounts. With Image Builder, information security and IT teams can better enforce policies and compliance on images.

For more information, see EC2 Image Builder (https://aws.amazon.com/image-builder/) and VM Import/Export (https://aws.amazon.com/ec2/vm-import/).

:::

### Slide 26:

![Slide 26](slide_26.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 27:

![Slide 27](slide_27.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 28:

![Slide 28](slide_28.png)

::: Notes

AWS Control Tower provides a structured approach to setting up a multi-account environment, but it imposes a specific landing zone structure that may not match every organization's needs. The pre-packaged controls are useful starting points, but they represent AWS's assumptions about governance — not necessarily your organization's. Consider how much flexibility you need in your account structure before committing to a Control Tower landing zone.

#### Instructor notes

#### Student notes

AWS Control Tower offers a streamlined way to set up and govern a new, secure, multi-account AWS environment. It establishes a landing zone that is based on best-practice blueprints and facilitates governance by using controls that you can choose from a prepackaged list. A control is a high-level rule that provides ongoing governance for your overall AWS environment. The landing zone is a well-architected, multi-account baseline that follows AWS best practices. Controls implement governance rules for security, compliance, and operations. Automate your account provisioning workflow with an account factory. Get dashboard visibility into your organizational units, accounts, and controls.

:::

### Slide 29:

![Slide 29](slide_29.png)

::: Notes

The Control Tower landing zone structure imposes specific account roles: the management account controls billing and provisioning; the log archive account centralizes audit data; the audit account provides read access for security teams. This separation is deliberate — but it also means that moving away from this structure later is difficult. Ask students to consider how well this structure would match Example Corp.'s organizational model and what compromises it might require.

#### Instructor notes

#### Student notes

Your landing zone is a well-architected, multi-account environment for all of your AWS resources. You can use this environment to enforce compliance regulations on all your AWS accounts.

**Structure of an AWS Control Tower landing zone:**

* **Management account/root** : This is the AWS Organizations parent account that contains all other organizational units (OUs) in your landing zone. This is the account that you created specifically for your landing zone. This account is used for billing for everything in your landing zone. It's also used for Account Factory provisioning of accounts, in addition to managing OUs and controls.
* **Core OU** : This OU contains the log archive and audit member accounts. These accounts are often referred to as shared accounts. The log archive account works as a repository for logs of API activities and resource configurations from all accounts in the landing zone. The audit account is a restricted account. It is designed to give your security and compliance teams read and write access to all accounts in your landing zone. From the audit account, you have programmatic access to review accounts by means of a role that is granted to Lambda functions only. The audit account does not allow you to log in to other accounts manually.
* **Custom OU** : The custom OU is created when you launch your landing zone. This and other member OUs contain the member accounts that your users work with to perform their AWS workloads.
* **AWS IAM Identity Center** : This directory houses your IAM Identity Center users. It defines the scope of permissions for each IAM Identity Center user.

:::

### Slide 30:

![Slide 30](slide_30.png)

::: Notes

Control Tower controls operate at three points in the resource lifecycle: preventive controls block non-compliant actions before they happen; detective controls identify non-compliance after the fact; proactive controls scan before provisioning through CloudFormation hooks. Each type has different latency and different failure modes. Preventive controls can block legitimate work if misconfigured; detective controls allow drift to exist temporarily. Ask students: what governance approach would you choose for a high-risk configuration like public S3 buckets?

#### Instructor notes

#### Student notes

A control is a high-level rule that provides ongoing governance for your overall AWS environment. It's expressed in plain language. Through controls, AWS Control Tower implements preventive, detective, and proactive controls that help you govern your resources and monitor compliance across groups of AWS accounts. A control applies to an entire organizational unit (OU), and every AWS account within the OU is affected by the control. Therefore, when users work in any AWS account in your landing zone, they're subject to the controls that govern their account's OU. Use controls to express your policy intentions.

Types of controls:

* **Preventive** : A preventive control ensures that your accounts maintain compliance, because it disallows actions that lead to policy violations. The status of a preventive control is either enforced or not enabled. Preventive controls are supported in all AWS Regions. They are implemented using service control policies (SCPs), which are part of AWS Organizations.
* **Detective** : A detective control detects noncompliance of resources in your accounts, such as policy violations, and provides alerts through the dashboard. The status of a detective control is either clear, in violation, or not enabled. Detective controls apply only in those AWS Regions supported by AWS Control Tower. They are implemented using AWS Config rules and AWS Lambda functions.
* **Proactive** : A proactive control scans your resources before they are provisioned and makes sure that the resources are compliant with that control. Resources that are not compliant will not be provisioned. Proactive controls are implemented by means of AWS CloudFormation hooks, and they apply to resources that would be provisioned by AWS CloudFormation. The status of a proactive control is PASS, FAIL, or SKIP.
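
Because preventive controls are enforced through SCPs, it can help to see what such a policy looks like. The following is a simplified sketch of the kind of SCP behind a preventive control that disallows changes to CloudTrail logging; it is illustrative, not the exact policy that AWS Control Tower attaches:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDisablingCloudTrail",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:StopLogging",
        "cloudtrail:DeleteTrail"
      ],
      "Resource": "*"
    }
  ]
}
```

When an SCP like this is attached to an OU, the denied actions fail in every member account of that OU, regardless of the IAM permissions granted within those accounts.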

:::

### Slide 31:

![Slide 31](slide_31.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 32:

![Slide 32](slide_32.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 33:

![Slide 33](slide_33.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 34:

![Slide 34](slide_34.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 35:

![Slide 35](slide_35.png)

::: Notes


#### Instructor notes

#### Student notes

For more information, see "Retrieve Instance Metadata" in the Amazon EC2 User Guide for Linux Instances (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html).

:::

### Slide 36:

![Slide 36](slide_36.png)

::: Notes

Instance metadata and user data are accessible from within the instance via a link-local address, which means they're reachable by any process running on that instance — including malicious ones. The IMDSv2 protocol adds a session-oriented token requirement to mitigate server-side request forgery attacks that could expose credentials. When retrieving metadata from scripts, ensure that your approach uses IMDSv2 rather than the legacy IMDSv1.

#### Instructor notes

#### Student notes

In PowerShell, you can retrieve user data by using IMDSv2 as follows:

```powershell
# Request an IMDSv2 session token, then present it to retrieve user data
$token = Invoke-RestMethod -Method PUT `
    -Uri http://169.254.169.254/latest/api/token `
    -Headers @{ "X-aws-ec2-metadata-token-ttl-seconds" = "21600" }
Invoke-RestMethod `
    -Uri http://169.254.169.254/latest/user-data `
    -Headers @{ "X-aws-ec2-metadata-token" = $token }
```

For more information, see "Retrieve Instance Metadata" in the Amazon EC2 User Guide for Linux Instances (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html).

:::

### Slide 37:

![Slide 37](slide_37.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 38:

![Slide 38](slide_38.png)

::: Notes


#### Instructor notes

#### Student notes

:::
