### Slide 1:

![Slide 1](slide_1.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 2:

![Slide 2](slide_2.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 3:

![Slide 3](slide_3.png)

::: Notes

This scenario is shared with Module 12: optimize storage costs through data classification and lifecycle management. Module 13 shifts focus from block storage to object storage, where the cost optimization levers are different — storage class selection and lifecycle policies rather than volume type and size choices. Ask students to think about what data in their organizations would be appropriate for object storage versus block storage and what access frequency characteristics would determine which S3 storage class to use.

#### Instructor notes

#### Student notes

As costs for cloud resources climb, you are asked to develop a plan to *optimize data storage* and data lifecycle management. This plan must include classifying data. You also need to establish lifecycle storage rules according to the data classification and plan for backup and recovery scenarios.

:::

### Slide 4:

![Slide 4](slide_4.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 5:

![Slide 5](slide_5.png)

::: Notes
The key distinction between EBS and S3 is the data model and access pattern. EBS provides block-level storage for data that changes frequently and is accessed by a running compute process; S3 provides object storage for data that is primarily read, downloaded, or shared. S3's durability and availability come from automatic multi-AZ replication — you don't need to configure snapshots or replication policies to get this baseline protection. This makes S3 appropriate for any data that can be stored as objects: files, backups, logs, static website content, and data lake assets.

#### Instructor notes

#### Student notes

In Module 12, you were primarily concerned with the storage used by your Amazon Elastic Compute Cloud (Amazon EC2) compute instances and the day-to-day operational issues. These issues relate to ensuring that the instances have the necessary storage performance, capacity, and data-protection safeguards for smooth compute functionality. The data stored, accessed, and processed locally by your EC2 instances is usually data that is undergoing frequent changes. For the rest of this module, we will focus on more long-term, lower-cost storage options, which are provided by Amazon Simple Storage Service (Amazon S3). Amazon S3 provides highly available and durable storage of data that is primarily static. This is not data that is undergoing frequent changes due to the requirements of your applications. Instead, it is data that many users or applications might need to retrieve, download, and process locally. Amazon S3 and Amazon Simple Storage Service Glacier (Amazon S3 Glacier) also provide low-cost storage solutions for storing snapshots and other data backup and recovery solutions.

:::

### Slide 6:

![Slide 6](slide_6.png)

::: Notes

S3 replication enables additional geographic distribution of data beyond the default multi-AZ redundancy within a region. Cross-Region Replication (CRR) is commonly used for disaster recovery (data in a second region survives a regional failure), compliance (data residency requirements for specific geographies), and latency optimization (placing a replica closer to users in another region). Same-Region Replication (SRR) is used for data segregation, such as maintaining separate buckets for production and audit purposes, or consolidating logs from multiple accounts.

#### Instructor notes

#### Student notes

For Amazon Elastic Block Store (Amazon EBS) volumes, you are responsible for setting up snapshots to replicate your Amazon EC2 data. By contrast, Amazon S3 automatically copies your data to other Amazon Web Services (AWS) Availability Zones in an AWS Region without any intervention from you. You can set up replication rules that create additional copies of your data to ensure increased availability and accessibility. You can automate the copying of an S3 bucket's contents to a secondary bucket in the same Region. You can also copy its contents to a different Region, where the data will again be replicated across the Availability Zones in that Region. You can use S3 Cross-Region Replication (CRR) to increase fault tolerance and reduce latency for users in other Regions. You can set up replication rules through the Amazon S3 console, as shown here. After the replication rule is in place, no further action on your part is needed. Every time a new object is placed in the S3 bucket, it is replicated according to the rule. For more information, see "When to Use Cross-Region Replication" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html#crr-scenario.
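
To make the console steps above concrete, the following is a minimal boto3 sketch of the equivalent API call. The bucket names, account ID, and IAM role ARN are placeholders; both buckets must already exist with versioning enabled, and the role must grant Amazon S3 permission to replicate on your behalf.

```python
import boto3

s3 = boto3.client("s3")

# Replicate every new object from a source bucket to a destination
# bucket (in another Region for CRR). Versioning must already be
# enabled on BOTH buckets.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/example-replication-role",
        "Rules": [
            {
                "ID": "crr-all-objects",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::example-destination-bucket"},
            }
        ],
    },
)
```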

:::

### Slide 7:

![Slide 7](slide_7.png)

::: Notes

S3 Block Public Access is a guardrail that prevents inadvertent public exposure of S3 data — one of the most common causes of data breaches in cloud environments. Enabling Block Public Access at the account level prevents any bucket in the account from being made public, regardless of bucket policies or ACLs. This is a recommended baseline security control that should be applied to all accounts unless there is a specific, reviewed use case that requires public buckets. The four individual settings provide granular control for accounts that need to selectively permit public access for specific use cases.

#### Instructor notes

#### Student notes

S3 Block Public Access prevents the application of any settings that allow public access to data in S3 buckets. You can configure S3 Block Public Access settings for an individual S3 bucket or for all the buckets in your account. **Block all public access** : Turning this setting on is the same as turning on all of the following four settings. Each of the settings is independent of the others.

* **Block public access to buckets and objects granted through new access control lists (ACLs)** : Amazon S3 blocks public access permissions applied to newly added buckets or objects. It prevents the creation of new public ACLs for existing buckets and objects. The setting does not modify any existing permissions that allow public access to Amazon S3 resources using ACLs.
* **Block public access to buckets and objects granted through any access control lists (ACLs)** : Amazon S3 ignores all ACLs that grant public access to buckets and objects. This setting overrides any current or future public access settings for current and future objects in the bucket.
* **Block public access to buckets and objects granted through new public bucket or access point policies** : Amazon S3 blocks new bucket and access point policies that grant public access to buckets and objects. This setting does not modify any existing policies that allow public access to Amazon S3 resources.
* **Block public and cross-account access to buckets and objects through any public bucket or access point policies** : Amazon S3 ignores public and cross-account access for buckets or access points with policies that grant public access to buckets and objects.

For more information, see "Blocking Public Access to Your Amazon S3 Storage" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html.
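
A minimal boto3 sketch of applying the control at both levels; the bucket name and account ID are placeholders.

```python
import boto3

# Bucket-level Block Public Access: equivalent to checking all four
# boxes in the console for one bucket.
boto3.client("s3").put_public_access_block(
    Bucket="example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # block new public ACLs
        "IgnorePublicAcls": True,       # ignore existing public ACLs
        "BlockPublicPolicy": True,      # block new public bucket policies
        "RestrictPublicBuckets": True,  # ignore existing public policies
    },
)

# Account-level Block Public Access covers every bucket in the account.
boto3.client("s3control").put_public_access_block(
    AccountId="111122223333",  # placeholder account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```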

:::

### Slide 8:

![Slide 8](slide_8.png)

::: Notes

Access Analyzer for S3 provides continuous monitoring of bucket access configurations and alerts you when a bucket is accessible from outside your organization or the internet. This is particularly valuable for detecting bucket misconfiguration that may have been introduced gradually through policy changes — a bucket that was never intended to be public can become public through cumulative policy edits. The findings include the specific policy or ACL that grants the access, enabling precise remediation rather than broad policy replacement.

#### Instructor notes

#### Student notes

Access Analyzer for S3 alerts you to S3 buckets that allow access to anyone on the internet or other AWS accounts, including AWS accounts outside of your organization. For each public or shared bucket, you receive findings into the source and level of public or shared access. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket ACL, a bucket policy, or an access point policy. Armed with this knowledge, you can take immediate and precise corrective action to restore your bucket access to what you intended. For more information, see "Reviewing Bucket Access Using IAM Access Analyzer for S3" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/user-guide/access-analyzer.html.
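
Findings can also be retrieved programmatically through the IAM Access Analyzer API. A minimal sketch, assuming an analyzer already exists (the analyzer ARN is a placeholder):

```python
import boto3

analyzer = boto3.client("accessanalyzer")

# List active findings for S3 buckets only.
resp = analyzer.list_findings(
    analyzerArn="arn:aws:access-analyzer:us-east-1:111122223333:analyzer/example",
    filter={
        "resourceType": {"eq": ["AWS::S3::Bucket"]},
        "status": {"eq": ["ACTIVE"]},
    },
)

for finding in resp["findings"]:
    # Each finding names the shared resource and whether it is public.
    print(finding.get("resource"), "isPublic:", finding.get("isPublic"))
```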

:::

### Slide 9:

![Slide 9](slide_9.png)

::: Notes

S3 server access logging records detailed information about each request to a bucket, including the requester, operation, object key, and response code. This data is essential for security auditing, billing analysis, and understanding usage patterns. Important: access logs are stored in a separate S3 bucket — logging a bucket to itself is not supported. Log analysis at scale is typically done using Amazon Athena, which can run SQL queries directly against the log files stored in S3 without requiring a database or data loading step.

#### Instructor notes

#### Student notes

Server access logging provides detailed records for the requests that are made to a bucket. S3 access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill. For more information, see "Amazon S3 Server Access Log Format" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html.
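
A minimal boto3 sketch of enabling server access logging to a separate log bucket; the bucket names are placeholders, and the target bucket must already permit S3 log delivery to write to it.

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for one bucket into a separate log bucket.
# Logging a bucket to itself is not supported.
s3.put_bucket_logging(
    Bucket="example-source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-log-bucket",
            "TargetPrefix": "logs/example-source-bucket/",
        }
    },
)
```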

:::

### Slide 10:

![Slide 10](slide_10.png)

::: Notes

S3 Inventory provides scheduled reports on the objects and metadata in a bucket, including replication status and encryption status. For compliance purposes, being able to prove that all objects in a bucket are encrypted and replicated is a valuable audit capability. Inventory reports are delivered to an S3 bucket as CSV, ORC, or Parquet files, making them directly queryable with Athena for large-scale analysis of bucket contents.

#### Instructor notes

#### Student notes

Use Amazon S3 Inventory to automate report generation that details the contents and status of objects in an S3 bucket. You can use S3 inventory to audit and report on the replication and encryption status of your objects for business, compliance, and regulatory needs.
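
A minimal boto3 sketch of configuring a daily inventory report that includes the encryption and replication status fields; the bucket names and configuration ID are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Daily CSV inventory of current object versions, delivered to a
# separate reports bucket for auditing (and querying with Athena).
s3.put_bucket_inventory_configuration(
    Bucket="example-bucket",
    Id="daily-audit",
    InventoryConfiguration={
        "Id": "daily-audit",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Schedule": {"Frequency": "Daily"},
        "OptionalFields": ["Size", "EncryptionStatus", "ReplicationStatus"],
        "Destination": {
            "S3BucketDestination": {
                "Bucket": "arn:aws:s3:::example-inventory-reports",
                "Format": "CSV",
            }
        },
    },
)
```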

:::

### Slide 11:

![Slide 11](slide_11.png)

::: Notes

S3 Event Notifications enable event-driven architectures that respond automatically to changes in S3 data. This is the foundation for serverless data processing pipelines: when an object is uploaded to S3, a Lambda function is triggered to process it immediately. The prefix and suffix filtering allows you to route different types of uploads to different processing workflows — for example, triggering different Lambda functions for .jpg uploads versus .pdf uploads in the same bucket. EventBridge integration provides more sophisticated routing capabilities than direct SNS/SQS/Lambda integration.

#### Instructor notes

#### Student notes

You can use the Amazon S3 Event Notifications feature to receive notifications when certain events happen in your S3 bucket. Currently, Amazon S3 can publish notifications for the following events: new object created events, object removal events, restore object events, Reduced Redundancy Storage (RRS) object lost events, replication events, S3 Lifecycle expiration events, S3 Lifecycle transition events, S3 Intelligent-Tiering automatic archival events, object tagging events, object ACL PUT events.

Amazon S3 can send event notification messages to the following destinations: Amazon Simple Notification Service (Amazon SNS) topics, Amazon Simple Queue Service (Amazon SQS) queues, AWS Lambda function, Amazon EventBridge.

When configuring these notifications, you can also use prefix and suffix filters to opt in to event notifications based on object name. For example, you can choose to receive DELETE notifications for the images/ prefix and the .png suffix in a particular bucket. To use this feature with your application, follow these steps: (1) Create the queue, topic, or Lambda function (which we will call target for brevity), if necessary. (2) Grant Amazon S3 permission to publish to the target or invoke the Lambda function. For Amazon SNS or Amazon SQS, you do this by applying an appropriate policy to the topic or the queue. For Lambda, you must create and supply an IAM role, then associate it with the Lambda function. (3) Arrange for your application to be invoked in response to activity on the target. (4) Set the bucket's Notification Configuration to point to the target.

For more information, see "Amazon S3 Event Notifications" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html.
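
Following the steps above, this is a minimal boto3 sketch of step 4: pointing a bucket's notification configuration at a Lambda function. The bucket name and function ARN are placeholders, and the function's resource policy must already allow Amazon S3 to invoke it (step 2).

```python
import boto3

s3 = boto3.client("s3")

# Invoke a Lambda function for every .jpg uploaded under images/.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "process-jpg-uploads",
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:process-image",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "images/"},
                            {"Name": "suffix", "Value": ".jpg"},
                        ]
                    }
                },
            }
        ]
    },
)
```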

:::

### Slide 12:

![Slide 12](slide_12.png)

::: Notes

S3 Versioning provides protection against two of the most common data loss patterns: accidental overwrites (a new upload doesn't destroy the previous version) and accidental deletes (a delete creates a delete marker, but the object versions remain retrievable). Once enabled, versioning cannot be fully disabled — only suspended — which means existing versions are always preserved. The operational implication is that versioning increases storage costs because all versions accumulate; lifecycle policies should be configured to expire old versions after a retention period.

#### Instructor notes

#### Student notes

S3 Versioning protects from accidental overwrites and deletes with no performance penalty. Versioning includes the following features: generates a new version with every upload; deletes are logical deletes only (objects are not removed from the bucket); allows retrieval of deleted objects or a rollback to previous versions.

An S3 bucket has three states as follows:

* **Default no versioning** : Upload an object to a bucket with the same key. It overwrites the earlier version. Delete an object, and it is permanently deleted.
* **Versioning enabled** : Upload an object with the same key. Amazon S3 keeps the earlier object and creates a new object with a new version ID. Delete an object, and Amazon S3 logically deletes it and adds a marker. However, the earlier version is retrievable by its version ID.
* **Versioning suspended** : Existing object versions are maintained, but the bucket temporarily behaves as if versioning were disabled: new uploads and deletes act as they do in an unversioned bucket. After versioning is enabled on a bucket, it cannot be disabled, only suspended, and all previously created versions remain.
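
A minimal boto3 sketch of enabling versioning and inspecting the versions and delete markers described above; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Turn versioning on. To suspend later, set Status to "Suspended";
# versioning can never be fully disabled once enabled.
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Inspect all versions (and delete markers) for one key.
versions = s3.list_object_versions(Bucket="example-bucket", Prefix="report.csv")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], "latest:", v["IsLatest"])
for m in versions.get("DeleteMarkers", []):
    print("delete marker:", m["Key"], m["VersionId"])
```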

:::

### Slide 13:

![Slide 13](slide_13.png)

::: Notes

S3 Object Lock implements WORM (write-once-read-many) protection for compliance scenarios where data must be preserved unmodified for a defined retention period. The two retention modes — compliance and governance — provide different levels of protection. Compliance mode prevents any user, including the root account, from modifying or deleting the locked object; governance mode allows users with special permissions to override the lock. Compliance mode is appropriate for regulatory requirements (SEC Rule 17a-4, for example) where lock bypass must be impossible.

#### Instructor notes

#### Student notes

With S3 Object Lock, you can store objects using the write-once-read-many (WORM) model. Using S3 Object Lock, you can prevent an object from being deleted or overwritten for a fixed amount of time or indefinitely. With S3 Object Lock, you can meet regulatory requirements that require WORM storage or add an additional layer of protection against object changes and deletion. S3 Object Lock provides two ways to manage object retention: retention periods and legal holds. They are defined as follows:

* **Retention period** : Specifies a fixed period of time during which an object remains locked. During this period, your object will be WORM-protected and can't be overwritten or deleted.
* **Legal hold** : Provides the same protection as a retention period but has no expiration date. Instead, a legal hold remains in place until you explicitly remove it.

**Retention modes** : S3 Object Lock provides two retention modes: governance and compliance. You can apply either retention mode to any object version that is protected by S3 Object Lock. The modes are defined as follows:

* **Compliance mode** : A protected object version cannot be overwritten or deleted by any user, including the root user in your AWS account. When an object is locked in compliance mode, its retention mode cannot be changed, and its retention period cannot be shortened.
* **Governance mode** : Users cannot overwrite or delete an object version or alter its lock settings unless they have special permissions. In governance mode, you can protect objects against deletion by most users. However, you can continue to grant some users permission to alter the retention settings or delete the object if necessary.
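
To make the retention controls above concrete, a minimal boto3 sketch follows. Bucket and key names are placeholders, and the example assumes the us-east-1 Region (other Regions also need a LocationConstraint at bucket creation).

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be turned on when the bucket is created;
# doing so also enables versioning automatically.
s3.create_bucket(
    Bucket="example-worm-bucket",
    ObjectLockEnabledForBucket=True,
)

# Default retention: every new object version is locked for 30 days.
# GOVERNANCE permits privileged override; COMPLIANCE permits none.
s3.put_object_lock_configuration(
    Bucket="example-worm-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)

# A legal hold stays in place until explicitly removed (Status "OFF").
s3.put_object_legal_hold(
    Bucket="example-worm-bucket",
    Key="contracts/2024-q1.pdf",
    LegalHold={"Status": "ON"},
)
```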

:::

### Slide 14:

![Slide 14](slide_14.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 15:

![Slide 15](slide_15.png)

::: Notes

S3's range of storage classes maps to a spectrum of access frequency and retrieval time requirements: S3 Standard for frequently accessed data, through increasingly lower-cost tiers for less frequently accessed data, to the ultra-low-cost Glacier deep archive. Each tier has different cost structures (storage price, retrieval price, minimum storage duration) that affect total cost of ownership. Choosing the wrong storage class — storing rarely accessed data in S3 Standard, or storing frequently accessed data in S3 Glacier — increases costs unnecessarily. S3 Intelligent-Tiering automates this classification when access patterns are unpredictable.

#### Instructor notes

#### Student notes

Amazon S3 offers a range of storage classes designed for different use cases. These include the following:

* *S3 Standard* for general-purpose storage of frequently accessed data.
* S3 Standard-Infrequent Access (S3 Standard-IA) for long-lived, but less frequently accessed data.
* S3 One Zone-Infrequent Access (S3 One Zone-IA) for long-lived, less frequently accessed data that can be stored in a single Availability Zone.
* S3 Glacier Instant Retrieval for archive data that is rarely accessed but requires a restore in milliseconds.
* S3 Glacier Flexible Retrieval for the most flexible retrieval options that balance cost with access times ranging from minutes to hours. Your retrieval options permit you to access all the archives you need, when you need them, for one low storage price. This storage class comes with multiple retrieval options: expedited retrievals (restore in 1--5 minutes); standard retrievals (restore in 3--5 hours); bulk retrievals (restore in 5--12 hours; bulk retrievals are available at no additional charge).
* S3 Glacier Deep Archive for long-term cold storage archive and digital preservation. Your objects can be restored in 12 hours or less.
* *S3 Intelligent-Tiering* is an additional storage class that provides flexibility for data with unknown or changing access patterns. It automates the movement of your objects between storage classes to optimize cost.

Amazon S3 also offers capabilities to manage your data throughout its lifecycle. When an *S3 Lifecycle* policy is set, your data automatically transfers to a different storage class without any changes to your application. For more information, see each of the following resources: "Amazon S3 Storage Classes" at https://aws.amazon.com/s3/storage-classes; "Performance Across the S3 Storage Classes" at https://aws.amazon.com/s3/storage-classes/#Performance_across_the_S3_Storage_Classes.
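
As a concrete illustration, the storage class is chosen per object at upload time, so a single bucket can mix classes. A minimal boto3 sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into a lower-cost class for long-lived,
# infrequently accessed data.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2024-annual.pdf",
    Body=b"...",
    StorageClass="STANDARD_IA",  # or INTELLIGENT_TIERING, ONEZONE_IA, ...
)
```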

:::

### Slide 16:

![Slide 16](slide_16.png)

::: Notes

S3 Intelligent-Tiering removes the need to predict access patterns by automatically moving objects between access tiers based on actual usage. The key design consideration is the per-object monitoring fee: small objects (below 128 KB) are not eligible for automatic tiering and incur the monitoring fee without potential savings, so Intelligent-Tiering is best suited for larger objects with unpredictable access. The archive access tiers in Intelligent-Tiering are opt-in; without enabling them, objects only move between the Frequent and Infrequent Access tiers.

#### Instructor notes

#### Student notes

The S3 Intelligent-Tiering storage class is designed to optimize cost savings by automatically moving data to the most cost-effective access tier when access patterns change.

**Criteria for moving objects** : Objects uploaded or transitioned to S3 Intelligent-Tiering are automatically stored in the Frequent Access tier. S3 Intelligent-Tiering works by monitoring access patterns and moving the objects that have not been accessed in 30 consecutive days to the Infrequent Access tier. After 90 consecutive days of no access, objects are moved to the Archive Instant Access tier. If you activate one or both asynchronous archive access tiers, S3 Intelligent-Tiering automatically moves objects that haven't been accessed for 90 consecutive days to the Archive Access tier. After 180 consecutive days of no access, objects are moved to the Deep Archive Access tier. If the objects are accessed later, S3 Intelligent-Tiering moves the objects back to the Frequent Access tier.

No retrieval fees are incurred when using the S3 Intelligent-Tiering storage class. No additional tiering fees are incurred when objects are moved between access tiers. It is designed to be the ideal storage class for long-lived data with access patterns that are unknown or unpredictable. You can configure Amazon S3 storage classes at the object level. A bucket can contain objects stored in S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can upload objects directly to S3 Intelligent-Tiering or use S3 Lifecycle policies to transfer objects from S3 Standard and S3 Standard-IA to S3 Intelligent-Tiering. You can also archive objects from S3 Intelligent-Tiering to S3 Glacier.
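
A minimal boto3 sketch of opting in to the asynchronous archive tiers for one prefix; the bucket name, configuration ID, and prefix are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Without this opt-in configuration, Intelligent-Tiering only moves
# objects between the Frequent, Infrequent, and Archive Instant
# Access tiers.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-bucket",
    Id="archive-cold-data",
    IntelligentTieringConfiguration={
        "Id": "archive-cold-data",
        "Status": "Enabled",
        "Filter": {"Prefix": "datasets/"},
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```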

:::

### Slide 17:

![Slide 17](slide_17.png)

::: Notes

S3 Express One Zone addresses a specific gap in the S3 storage class lineup: workloads that need object storage semantics with low latency access comparable to local NVMe storage. Standard S3 is optimized for durability and throughput, not for single-digit millisecond first-byte latency. S3 Express One Zone sacrifices cross-AZ redundancy to achieve this latency, making co-location of compute and storage in the same AZ essential. Its incompatibility with other S3 storage classes and features (Intelligent-Tiering, Lifecycle policies) means it is a purpose-specific solution, not a general-purpose S3 replacement.

#### Instructor notes (2 min)

Purpose: convey the new Amazon S3 Express One Zone information.

* Better supports using object storage in low-latency use cases.
* Single Availability Zone.
* Fast connectivity: session-based authentication token.
* Fast object access: single-digit millisecond first-byte latency (P99).
* File-system-likeness: directory buckets.
* Prefixes not needed: performance improvement.
* Compatible with various AWS compute services (EC2, ECS, EKS).

Important: S3 Express One Zone is not compatible with the other Amazon S3 storage classes and does not work with features such as S3 Intelligent-Tiering or S3 Lifecycle policies. It targets very specific use cases that need low first-byte latency.

#### Student notes

Amazon S3 Express One Zone is a new implementation of object storage. Architecturally, S3 Express One Zone is implemented in a single Availability Zone. It uses a session-based authentication token so that clients can connect quickly to the storage service. To achieve the lowest latency, you can co-locate your compute resources in the same Availability Zone. S3 Express One Zone supports EC2 instances, ECS containers, and EKS containers for compute. S3 Express One Zone implements directory buckets, which provide a file-system-like directory structure. Directory buckets provide fast access to a virtually unlimited number of objects per directory and remove the need for prefixes to improve performance. These changes enable consistent single-digit millisecond first-byte P99 latency to improve application efficiency and performance. The access and latency improvements enable use cases that require low latency and can have thousands of clients accessing a large common data pool. For more information, go to "Amazon S3 Express One Zone features": https://aws.amazon.com/s3/storage-classes/express-one-zone/ and https://aws.amazon.com/blogs/aws/new-amazon-s3-express-one-zone-high-performance-storage-class/.
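
A hedged sketch of what directory buckets look like through boto3, assuming a recent SDK version (which handles the session-based authentication automatically). The bucket name embeds the Availability Zone ID and the --x-s3 suffix; all names here are placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Directory bucket names embed the Availability Zone ID and end
# with the --x-s3 suffix.
s3.create_bucket(
    Bucket="example-data--use1-az4--x-s3",
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "use1-az4"},
        "Bucket": {"DataRedundancy": "SingleAvailabilityZone", "Type": "Directory"},
    },
)

# Reads and writes use the regular object API; for lowest latency,
# run the compute in the same Availability Zone.
s3.put_object(
    Bucket="example-data--use1-az4--x-s3",
    Key="training/shard-0001.bin",
    Body=b"...",
)
```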

:::

### Slide 18:

![Slide 18](slide_18.png)

::: Notes

S3 Lifecycle policies automate the movement of objects through storage class tiers based on age, removing the need for manual storage class management. The waterfall model for lifecycle transitions reflects S3's constraints: objects can only transition to lower-cost tiers (e.g., Standard → Standard-IA → Glacier), not back to higher-cost tiers. Objects that become frequently accessed again after being transitioned to lower tiers can be explicitly moved back, but Intelligent-Tiering handles this automatically. Lifecycle policies are the cost-efficiency engine for data with predictable aging patterns.

#### Instructor notes

#### Student notes

To manage your objects so that they are stored cost-effectively throughout their lifecycle, configure their S3 Lifecycle. An *S3 Lifecycle configuration* is a set of rules that define actions that Amazon S3 applies to a set of objects. You can apply the rules at the bucket level or according to an object's tags. Amazon S3 supports a waterfall model for transitioning between storage classes, as shown in the diagram. For more information, see "Transitioning Objects Using Amazon S3 Lifecycle" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html.
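
A minimal boto3 sketch of a rule that follows the waterfall model; the bucket name, prefix, and transition days are placeholders chosen for illustration. Note that this call replaces the bucket's entire lifecycle configuration, so in practice all rules are combined into one call.

```python
import boto3

s3 = boto3.client("s3")

# Waterfall of transitions: Standard -> Standard-IA ->
# Glacier Flexible Retrieval -> Glacier Deep Archive.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```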

:::

### Slide 19:

![Slide 19](slide_19.png)

::: Notes

Lifecycle rules can be scoped using prefixes and tags, allowing different data types within the same bucket to follow different lifecycle paths. For example, raw data might be archived after 30 days while processed results are kept in Standard for 90 days before deletion. The expiration action in lifecycle policies allows automatic deletion of objects after they reach end of life — the asynchronous deletion behavior means that a brief window exists where expired objects still occupy storage, but S3 stops charging for them once the expiration date passes.

#### Instructor notes

#### Student notes

You can use S3 Lifecycle policies to define actions that you want Amazon S3 to take during an object's lifetime. For example, you can transition objects to another storage class, archive them, or delete them after a specified period. You can define an S3 Lifecycle policy for all objects or for a subset of objects by using a shared prefix (object names that begin with a common string) or a tag. To apply a lifecycle rule to all objects with a specific prefix, choose Limit the scope of this rule using one or more filters. In the Prefix box, enter the prefix or tag name. For more information, see "Managing Your Storage Lifecycle" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html. When an object reaches the end of its lifetime, Amazon S3 queues it for removal and removes it asynchronously. There might be a delay between the expiration date and the date when Amazon S3 removes an object. You are not charged for storage time associated with an object that has expired.
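
A minimal boto3 sketch of a tag-scoped rule with an expiration action, as described above; the bucket name and tag are placeholders. As with the previous sketch, the call replaces the bucket's whole lifecycle configuration.

```python
import boto3

s3 = boto3.client("s3")

# Scope the rule by tag instead of prefix, and expire objects at
# end of life; expired objects are removed asynchronously.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-temporary-data",
                "Status": "Enabled",
                "Filter": {"Tag": {"Key": "classification", "Value": "temporary"}},
                "Expiration": {"Days": 90},
                # With versioning on, also clean up old versions:
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            }
        ]
    },
)
```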

:::

### Slide 20:

![Slide 20](slide_20.png)

::: Notes

The three Glacier storage classes represent different points on the cost-versus-access-time spectrum for archival data. The minimum storage duration charges (90 days for Flexible Retrieval, 180 days for Deep Archive) are an important cost consideration: moving data to these classes before their minimum duration triggers a charge for the full minimum period. This makes Glacier appropriate for data with known long-term retention requirements, not for temporary storage or data with uncertain retention needs.

#### Instructor notes

#### Student notes

The S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive storage classes are designed for low-cost data archiving. These storage classes are designed for different access patterns and storage durations. They differ as follows:

* **S3 Glacier Instant Retrieval** : Use this storage class for archiving data that is rarely accessed and requires retrieval in milliseconds. Data stored in the S3 Glacier Instant Retrieval storage class offers cost savings compared to the S3 Standard-IA storage class, with the same latency and throughput performance. S3 Glacier Instant Retrieval has higher data access costs than S3 Standard-IA.
* **S3 Glacier Flexible Retrieval** : Use this storage class for archives where portions of the data might need to be retrieved in minutes. Data stored in the S3 Glacier Flexible Retrieval storage class has a minimum storage duration period of 90 days and can be accessed in as little as 1--5 minutes by using expedited retrieval. The retrieval time is flexible, and you can request free bulk retrievals in up to 5--12 hours. If you have deleted, overwritten, or transitioned an object to a different storage class before the 90-day minimum, you are charged for 90 days.
* **S3 Glacier Deep Archive** : Use this storage class for archiving data that rarely needs to be accessed. Data stored in the S3 Glacier Deep Archive storage class has a minimum storage duration period of 180 days and a default retrieval time of 12 hours. If you have deleted, overwritten, or transitioned an object to a different storage class before the 180-day minimum, you are charged for 180 days.
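
For reference, the API storage class names differ from the marketing names: GLACIER_IR is S3 Glacier Instant Retrieval, GLACIER is S3 Glacier Flexible Retrieval, and DEEP_ARCHIVE is S3 Glacier Deep Archive. A minimal boto3 sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Objects can be written directly to an archive class at upload time.
# Remember the minimum storage durations when choosing (90 days for
# GLACIER, 180 days for DEEP_ARCHIVE).
s3.put_object(Bucket="example-bucket", Key="archive/scan-0001.tif",
              Body=b"...", StorageClass="GLACIER_IR")    # millisecond retrieval
s3.put_object(Bucket="example-bucket", Key="archive/scan-0002.tif",
              Body=b"...", StorageClass="GLACIER")       # minutes to hours
s3.put_object(Bucket="example-bucket", Key="archive/scan-0003.tif",
              Body=b"...", StorageClass="DEEP_ARCHIVE")  # within 12 hours
```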

:::

### Slide 21:

![Slide 21](slide_21.png)

::: Notes

Restoring archived objects requires an explicit restore request and a waiting period — the restore doesn't modify the storage class; it creates a temporary, accessible copy in the original bucket. This asynchronous access model means that applications that need to retrieve archived data must handle the delay between requesting the restore and the data becoming available. For Intelligent-Tiering archive tiers, automatic re-access moves the object back to Frequent Access; for Glacier Flexible Retrieval and Deep Archive, the temporary copy expires after the specified duration and the archive remains.

#### Instructor notes

#### Student notes

Amazon S3 objects in the following storage classes or tiers are archived and are not accessible in real time:

* The S3 Glacier Flexible Retrieval storage class
* The S3 Glacier Deep Archive storage class
* The S3 Intelligent-Tiering Archive Access tier
* The S3 Intelligent-Tiering Deep Archive Access tier

To restore the objects, you must do the following:

* For objects in the S3 Intelligent-Tiering Archive Access and Deep Archive Access tiers, you must initiate the restore request and wait until the object is moved into the Frequent Access tier. If the object is not accessed after 30 consecutive days, it automatically moves into the Infrequent Access tier. It moves into the S3 Intelligent-Tiering Archive Access tier after a minimum of 90 consecutive days of no access, and into the Deep Archive Access tier after a minimum of 180 consecutive days of no access.
* For objects in the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, you must initiate the restore request and wait until a temporary copy of the object is available. Amazon S3 restores a temporary copy of the object only for the specified duration. After that specified time, it deletes the restored object copy.
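
A minimal boto3 sketch of initiating a restore and polling for the temporary copy; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Ask for a temporary copy of an archived object for 7 days.
s3.restore_object(
    Bucket="example-bucket",
    Key="archive/2020-backup.tar",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# The restore is asynchronous: poll the Restore header until the
# ongoing-request flag flips to "false", then GET the object normally.
head = s3.head_object(Bucket="example-bucket", Key="archive/2020-backup.tar")
print(head.get("Restore"))  # e.g. 'ongoing-request="true"'
```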

:::

### Slide 22:

![Slide 22](slide_22.png)

::: Notes

Archive retrieval option selection involves a trade-off between retrieval speed and cost. Expedited retrieval (1-5 minutes) is appropriate for occasional urgent accesses to archived data; Standard (3-5 hours) balances cost and wait time for planned retrievals; Bulk (5-12 hours, free) is appropriate for large-scale data recovery or migration where time is not critical. Provisioned Expedited capacity ensures that urgent retrieval requests are serviced immediately rather than queued — important for disaster recovery scenarios where Glacier data must be accessible under SLA constraints.

#### Instructor notes

#### Student notes

Amazon S3 objects that are stored in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes are not immediately accessible. To access an object in these storage classes, you must restore a temporary copy of it to its S3 bucket for a specified duration (number of days). Restored objects from S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive are stored only for the number of days that you specify. If you want a permanent copy of the object, create a copy of it in your S3 bucket.

**Archive retrieval options** : You can specify one of the following when initiating a job to retrieve an archive, based on your access time and cost requirements:

* **Expedited** : With expedited retrievals, you can quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (more than 250 MB), data accessed using expedited retrievals is typically made available within 1--5 minutes. Two types of expedited retrievals are provided: On-Demand and Provisioned. On-Demand requests are similar to Amazon EC2 On-Demand Instances and are available most of the time. Provisioned requests are guaranteed to be available when you need them.
* **Standard** : With standard retrievals, you can access any of your archives within several hours. Standard retrievals typically complete within 3--5 hours. This is the default option for retrieval requests that do not specify the retrieval option. Standard retrievals initiated by using the S3 Batch Operations restore operation typically start within minutes and finish within 3--5 hours for objects stored in the S3 Glacier Flexible Retrieval storage class or the S3 Intelligent-Tiering Archive Access tier.
* **Bulk** : Bulk retrievals are the lowest-cost retrieval option in Amazon S3 Glacier. You can retrieve large amounts of data, even petabytes, inexpensively in a day. Bulk retrievals typically complete within 5--12 hours.

For more information about archive retrieval options, see "Archive Retrieval Options" in the Amazon Simple Storage Service User Guide at https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects-retrieval-options.html.
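
The retrieval option maps to the Tier field of the same restore_object call shown on the previous slide; a minimal sketch with placeholder names:

```python
import boto3

s3 = boto3.client("s3")

# Tier selects the speed/cost trade-off described above:
# "Expedited" (1--5 minutes; S3 Glacier Flexible Retrieval and the
# Intelligent-Tiering Archive Access tier), "Standard" (3--5 hours,
# the default), or "Bulk" (5--12 hours, lowest cost).
s3.restore_object(
    Bucket="example-bucket",
    Key="archive/2020-backup.tar",
    RestoreRequest={
        "Days": 2,
        "GlacierJobParameters": {"Tier": "Bulk"},
    },
)
```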

:::

### Slide 23:

![Slide 23](slide_23.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 24:

![Slide 24](slide_24.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 25:

![Slide 25](slide_25.png)

::: Notes


#### Instructor notes 

#### Student notes

:::

### Slide 26:

![Slide 26](slide_26.png)

::: Notes


#### Instructor notes

#### Student notes

:::
