### Slide 1:

![Slide 1](slide_1.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 2:

![Slide 2](slide_2.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 3:

![Slide 3](slide_3.png)

::: Notes

This scenario frames the module in a realistic context: a security incident has already occurred, and the organization is responding by building a more mature detection and auditing posture. The key insight is that the incident triggers a policy and procedure review — technical controls alone are not sufficient without the processes to support them. Ask students what detection gap likely existed before the incident: was the environment unmonitored, or was monitoring in place but not acted upon?

#### Instructor notes

#### Student notes

One of the departments in Example Corp. had a security incident. Now, leadership wants to make sure that the correct policies and procedures are in place to address any future incidents. Your manager wants to set up best practices to audit the company's cloud environment. The manager is asking you for procedures to maintain a strong identity and access foundation. Requirements include the following:

* Implementing detection mechanisms that will identify configuration changes and security vulnerabilities
* Automating remediation

:::

### Slide 4:

![Slide 4](slide_4.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 5:

![Slide 5](slide_5.png)

::: Notes

IAM Access Analyzer helps you answer a question that's difficult to answer by manual policy review: which of your resources is accessible from outside your account or organization? The logic-based Zelkova engine can evaluate complex policy combinations that would take significant human effort to analyze. The 24-hour re-analysis cycle means that Access Analyzer provides near-real-time detection of new external access — though it won't catch a configuration change in the minutes immediately after it's made.

#### Instructor notes

#### Student notes

AWS Identity and Access Management Access Analyzer informs you which resources in your account you are sharing with external principals. It does this by using logic-based reasoning to analyze resource-based policies in your Amazon Web Services (AWS) environment.

When you enable IAM Access Analyzer, you create an analyzer for your account. Your account is the zone of trust for the analyzer. The analyzer monitors all the supported resources within your zone of trust. Any access to resources by principals that are within your zone of trust is considered trusted. When enabled, IAM Access Analyzer analyzes the policies applied to all supported resources in your account. After the first analysis, IAM Access Analyzer analyzes these policies once every 24 hours. Supported resources include the following:

* Amazon Simple Storage Service (Amazon S3) buckets
* AWS Key Management Service (AWS KMS) keys
* IAM roles
* AWS Lambda functions and layers
* Amazon Simple Queue Service (Amazon SQS) queues
* AWS Secrets Manager secrets

If IAM Access Analyzer identifies a policy that grants access to an external principal that is not within your zone of trust, it generates a finding. Each finding includes details so that you can take appropriate action. Details include information about the resource, the external entity that has access to it, and the permissions granted.

IAM Access Analyzer is built on *Zelkova*, which translates AWS Identity and Access Management (IAM) policies into equivalent logical statements and runs a suite of general-purpose and specialized logical solvers. IAM Access Analyzer applies Zelkova repeatedly to a policy with increasingly specific queries to characterize classes of behaviors that the policy allows, based on the content of the policy.
For more information, see "Using AWS Identity and Access Management Access Analyzer" in the AWS Identity and Access Management User Guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html).
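The zone-of-trust check can be illustrated with a simplified sketch. This is plain Python, not the Zelkova logic engine, and the account IDs and helper function are hypothetical:

```python
# Toy model of an Access Analyzer-style check: report any principal in a
# resource-based policy whose account is outside the zone of trust.
# Real analysis handles conditions, wildcards, and far more policy forms.

def external_principals(policy: dict, zone_of_trust: set) -> list:
    """Return principals granted access from outside the zone of trust."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        for arn in principals:
            if arn == "*":
                findings.append(arn)  # public access is always external
                continue
            account_id = arn.split(":")[4]  # arn:aws:iam::ACCOUNT:...
            if account_id not in zone_of_trust:
                findings.append(arn)
    return findings

# Fictitious bucket policy granting cross-account read access.
bucket_policy = {
    "Statement": [
        {"Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-bucket/*"}
    ]
}

# With account 111122223333 as the zone of trust, the 444455556666
# principal is external and would surface as a finding.
print(external_principals(bucket_policy, {"111122223333"}))
```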

:::

### Slide 6:

![Slide 6](slide_6.png)

::: Notes

The zone of trust is the fundamental concept in Access Analyzer — it defines the boundary between 'trusted' access (principals within your account or organization) and 'untrusted' access (principals outside that boundary). Choosing an organization as the zone of trust provides broader coverage than a single account, reducing the number of cross-account findings from intentional inter-account access within the organization. For multi-account environments, creating analyzers at the organization level gives operations teams a centralized view of external access.

#### Instructor notes

#### Student notes

If you're using AWS resources in multiple Regions, create an analyzer in each Region where you use AWS services. Findings that are generated by an analyzer in a Region are separate from findings from analyzers in other Regions.

**Zone of trust** : The zone of trust for the analyzer determines the set of resources that are analyzed. Access to resources from within the zone of trust is trusted. For example, if you choose an organization, access to any resources within the organization is considered trusted for any principal within the same organization. You can create only one analyzer with an account as the zone of trust.

**Tags** : You can add up to 50 tags to the analyzer for your tagging strategy.

**Getting started with IAM Access Analyzer** : The setup process for IAM Access Analyzer involves the following steps:

* Create an analyzer: You can set the scope for the analyzer to an organization or an AWS account. This is your zone of trust. The analyzer scans all the supported resources within your zone of trust.
* Review active findings: When IAM Access Analyzer finds a policy that allows access to a resource from outside of your zone of trust, it generates an active finding. Findings include details about the access so that you can take action.
* Take action: If the access is intended, you can archive the finding so that you can focus on reviewing active findings. If the access is not intended, you can resolve the finding by modifying the policy to remove access to the resource.

:::

### Slide 7:

![Slide 7](slide_7.png)

::: Notes

Access Analyzer findings include rich detail about the resource, the external principal, and the access granted. The 'intended access' field on each finding requires a human decision: if access is intentional (such as cross-account access for a partner), archive the finding; if it's unintentional, remediate it by modifying the policy. This triage workflow means Access Analyzer is not fully automated — it requires operator judgment to distinguish legitimate access from a security risk, and findings should be reviewed regularly rather than just at setup time.

#### Instructor notes

#### Student notes

The Findings page displays the following details about the shared resource and policy statement that generated the finding:

* **Finding ID** : The unique ID assigned to the finding.
* **Resource** : The type and partial name of the resource that has a policy applied to it. The policy grants access to an external entity not within your zone of trust.
* **Resource owner account** : The account in the organization that owns the resource reported in the finding. This column is displayed only if you are using an organization as the zone of trust.
* **External principal** : The principal, not within your zone of trust, that the analyzed policy grants access to.
* **Condition** : The condition from the policy statement that grants the access. For example, if the Condition field includes Source VPC, the resource is shared with a principal that has access to the virtual private cloud (VPC) listed. Conditions can be global or service-specific. Global condition keys have the `aws:` prefix.
* **Shared through** : The setting indicates how the access that generated the finding is granted.
* **Access level** : The level of access granted to the external entity by the actions in the resource-based policy. View the details of the finding for more information.
* **Intended access** : When you get a finding for access to a resource that is intentional, you can archive the finding. When you archive a finding, it is cleared from the Active findings list, so you can focus on the findings you need to resolve.
* **Not intended** : To resolve findings generated from access that you did not intend to allow, modify the policy statement to remove the permissions that allow access to the identified resource.

For more information, see "Findings for Public and Cross-Account Access" in the AWS Identity and Access Management User Guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-findings.html).

:::

### Slide 8:

![Slide 8](slide_8.png)

::: Notes

Permissions boundaries solve a practical organizational problem: how do you let developers manage their own IAM roles without risking privilege escalation? Without permissions boundaries, allowing developers to create roles effectively gives them the ability to create a role with administrator access — a privilege escalation vector. Permissions boundaries create a ceiling on the effective permissions of any role created by the delegated administrator, preventing escalation while enabling self-service IAM management.

#### Instructor notes 

#### Student notes

Developers can create the roles that they need for their projects, but you set the maximum permissions of the roles with a permissions boundary. A developer can create a role, but they must attach a permissions boundary to it (in addition to the permissions policy they attach). This allows for the safe delegation of permissions while preventing privilege escalation or broad permissions. It also allows multiple teams in the same account to perform permissions management. Before permissions boundaries, many of the actions listed here required developers or other personas to request roles and policies for these use cases. Some companies wouldn't allow developers to create roles because of the potential for privilege escalation, so ticketing systems or other processes were put in place, which slowed down the process. Other companies went the other direction: they gave developers these permissions and accepted the risk.

:::

### Slide 9:

![Slide 9](slide_9.png)

::: Notes

The three-party model for permissions boundaries — initial administrator, delegated administrator, and created principals — mirrors real organizational structures where a central team sets guardrails and project teams work within them. The critical constraint to emphasize is that a delegated administrator cannot create a principal with broader permissions than their own boundary allows: you cannot grant what you don't have. This property, combined with the required attachment of the permissions boundary at creation time, prevents privilege escalation even in complex delegation chains.

#### Instructor notes

#### Student notes

A permissions boundary involves three entities:

* The initial administrator
* The delegated administrator
* The objects and resources that the delegated administrator creates

The initial administrator creates the delegated administrator. This administrator attaches a policy that allows the delegated administrator to create users and roles but requires the permissions boundary to be attached. The delegated administrator then creates users and roles and specifies the permissions boundary when creating them; otherwise, the creation fails. The result is that the delegated administrator is allowed to create users or roles, but the effective permissions of those principals are restricted by the permissions boundary. Permissions boundaries do not actually grant permissions; they restrict the permissions of the users and roles. Permissions of roles attached to resources, such as Lambda functions, are also limited by the permissions boundary.
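The restriction can be modeled as a set intersection. This is a minimal sketch with illustrative action names; real IAM evaluation also considers explicit denies, wildcards, conditions, SCPs, and resource-based policies:

```python
# Toy model of how a permissions boundary caps effective permissions:
# an action is effective only if BOTH the identity-based policy and the
# permissions boundary allow it. The action names are illustrative.

def effective_permissions(identity_policy: set, boundary: set) -> set:
    """Effective permissions are the intersection of policy and boundary."""
    return identity_policy & boundary

identity_policy = {"s3:GetObject", "s3:PutObject", "iam:CreateRole"}
boundary = {"s3:GetObject", "s3:PutObject"}  # ceiling set by the initial admin

# iam:CreateRole is dropped: the boundary does not allow it, so the
# delegated principal cannot use it even though its policy grants it.
print(sorted(effective_permissions(identity_policy, boundary)))
```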

:::

### Slide 10:

![Slide 10](slide_10.png)

::: Notes

A common misconception about permissions boundaries is that they grant permissions — they don't. They restrict them. The effective permissions of a principal are the intersection of what the identity-based policy allows and what the permissions boundary permits. This means that even if a policy allows an action, the boundary must also allow it for the request to succeed. The note about Lambda function roles is important: attaching a permissions boundary to the Lambda execution role limits what code running in that function can do, regardless of what the execution role policy says.

#### Instructor notes

#### Student notes

A permissions boundary does not grant permissions. Instead, a permissions boundary restricts the permissions of the users and roles attached to it. A permissions boundary also limits the permissions of roles attached to resources, such as Lambda functions.

:::

### Slide 11:

![Slide 11](slide_11.png)

::: Notes


#### Instructor notes

#### Student notes

This screenshot shows the IAM roles landing page, where you can select identity-based policies with a permissions boundary for your role.

:::

### Slide 12:

![Slide 12](slide_12.png)

::: Notes

IAM policy evaluation applies explicit denies first, then evaluates all applicable policies in combination. The key insight from this example is that the intersection of all applicable policies determines the effective permissions — an action must be allowed by every relevant policy type (SCP, permissions boundary, identity-based policy, resource-based policy) and must not be explicitly denied by any of them. Ask students to trace through the logic: if the identity-based policy allows `s3:GetObject` but the permissions boundary doesn't mention it, what happens? The request fails, because a missing allowance in the boundary is equivalent to a deny.

#### Instructor notes

#### Student notes

What happens if a user with the following policies issues an `s3:GetObject` request? Will the request be satisfied, or will it fail?

Start with any explicit deny in any of the policies, and then move on to the other policies. All the explicit denies from all the policies involved with an API call are evaluated first. Policies can include service control policies (SCPs), permissions boundaries, permissions policies, and resource-based policies.

The API request from this example would fail at the permissions boundary. Even though the identity-based policy allows the user to issue `GetObject`, the permissions boundary does not mention it.

For more information, see "Policy Evaluation Logic" in the AWS Identity and Access Management User Guide (https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html).
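The evaluation order described above can be sketched as follows. This is a simplified model with illustrative action names; real IAM evaluation also handles wildcards, conditions, and the cross-account rules for resource-based policies:

```python
# Simplified IAM evaluation: an explicit deny in ANY policy wins;
# otherwise the action must be allowed by every applicable layer
# (SCP, permissions boundary, identity-based policy). A layer that
# does not mention the action is an implicit deny.

def is_allowed(action: str, layers: dict) -> bool:
    """layers maps a layer name to {'allow': set, 'deny': set}."""
    # 1. Explicit denies are evaluated first, across all layers.
    if any(action in layer["deny"] for layer in layers.values()):
        return False
    # 2. Every layer must allow the action for the request to succeed.
    return all(action in layer["allow"] for layer in layers.values())

layers = {
    "scp":             {"allow": {"s3:GetObject", "s3:PutObject"}, "deny": set()},
    "boundary":        {"allow": {"s3:PutObject"},                 "deny": set()},
    "identity_policy": {"allow": {"s3:GetObject"},                 "deny": set()},
}

print(is_allowed("s3:GetObject", layers))
# False: the boundary never mentions s3:GetObject (implicit deny)
```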

:::

### Slide 13:

![Slide 13](slide_13.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 14:

![Slide 14](slide_14.png)

::: Notes

CloudTrail, AWS Config, GuardDuty, and Inspector form complementary layers of detection: CloudTrail records who did what, Config records what changed, GuardDuty detects behavioral threats, and Inspector assesses known vulnerabilities. Each service answers a different question and covers a different type of risk. Using all four together provides defense-in-depth in the detection layer — no single service covers all threats, and gaps in any one layer are filled by the others.

#### Instructor notes

#### Student notes

A foundational practice is to establish a set of detection mechanisms at the account level. This base set of mechanisms is aimed at recording and detecting a wide range of actions on all resources in your account. Using these mechanisms, you can build out a comprehensive detective capability with options that include automated remediation and AWS Partner integrations to add functionality.

* **AWS CloudTrail** : Provides event history of your AWS account activity. Activity can include actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.
* **AWS Config** : Monitors and records your AWS resource configurations. You can automate the evaluation and remediation against desired configurations.
* **Amazon GuardDuty** : A threat-detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads.
* **Amazon Inspector** : Can perform configuration assessments against your instances for known common vulnerabilities and exposures (CVEs), assess against security benchmarks, and fully automate the notification of defects.

:::

### Slide 15:

![Slide 15](slide_15.png)

::: Notes

CloudTrail creates an audit log of every AWS API call, which is foundational for security accountability: if something changes in your environment, CloudTrail tells you who changed it, when, and from where. Because the AWS Management Console and AWS CLI both use the underlying API, CloudTrail captures actions taken through any interface. The most important operational practice is ensuring CloudTrail is enabled in all regions and that the log files are sent to a protected S3 bucket that cannot be modified or deleted by ordinary account users.

#### Instructor notes

#### Student notes

AWS CloudTrail is an AWS service that generates logs of AWS API calls. Because the AWS API underlies both the AWS Command Line Interface (AWS CLI) and the AWS Management Console, CloudTrail can record the interactions between your account's users and resources.

:::

### Slide 16:

![Slide 16](slide_16.png)

::: Notes

CloudTrail logs answer questions that are critical for both security investigations and operational troubleshooting. The three examples here — who terminated an instance, who changed a security group, and what was denied — represent categories of questions that arise regularly in production environments. Access denials are particularly interesting: a pattern of denied API calls from a principal or IP that shouldn't be attempting those actions is an indicator of unauthorized activity that warrants investigation.

#### Instructor notes

#### Student notes

Using CloudTrail, you can store API activity logs in an Amazon S3 bucket. You can analyze those logs later to answer compelling questions such as the following:

* Why was a long-running instance terminated, and who terminated it? (Organizational traceability and accountability)
* Who changed a security group configuration? (Accountability and security auditing)
* What activities were denied because of inadequate permissions? (Potential internal or external attack against the network)
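The third question can be answered by scanning log records for access denials. A minimal sketch follows; the sample record is abbreviated and the identifiers are fictitious, but `errorCode` is the field a CloudTrail record carries when a call is denied:

```python
import json

# CloudTrail delivers log files as JSON with a top-level "Records" list.
# A denied API call carries an "errorCode" such as "AccessDenied".
# The sample below is hand-written and abbreviated for illustration.
sample_log = json.loads("""
{"Records": [
  {"eventName": "TerminateInstances", "errorCode": "AccessDenied",
   "userIdentity": {"arn": "arn:aws:iam::111122223333:user/intern"},
   "sourceIPAddress": "198.51.100.7"},
  {"eventName": "DescribeInstances",
   "userIdentity": {"arn": "arn:aws:iam::111122223333:user/admin"},
   "sourceIPAddress": "203.0.113.9"}
]}
""")

# Keep only the records where the call was denied.
denied = [r for r in sample_log["Records"] if "errorCode" in r]
for record in denied:
    print(record["userIdentity"]["arn"],
          record["eventName"], record["errorCode"])
```

A pattern of such denials from one principal or IP address is the kind of signal the third bullet describes.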

:::

### Slide 17:

![Slide 17](slide_17.png)

::: Notes

Configuring a CloudTrail trail involves deciding which events to capture (management events, data events, or both), where to store the logs, and how to protect them. Data events such as S3 object-level access can generate very high volumes of log data and incur significant cost — they should be enabled selectively for the buckets that matter most for security or compliance purposes. Log file integrity validation is a critical security control: it lets you verify that log files have not been tampered with after delivery.

#### Instructor notes

#### Student notes

You can configure the following settings when you create or update a trail from the CloudTrail console or the AWS CLI. Both methods follow the same steps. For more information, see "Creating a trail" in the AWS CloudTrail User Guide (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html).

:::

### Slide 18:

![Slide 18](slide_18.png)

::: Notes

CloudTrail log entries are JSON-formatted records that capture the full context of each API call: the principal, the source IP, the user agent, the target resource, and the outcome. This structure makes CloudTrail logs queryable with tools like Amazon Athena, which is covered later in this module. The user agent field is particularly useful for identifying whether an action was taken through the console, the CLI, or a specific SDK — which can help distinguish between human actions and automated processes.

#### Instructor notes

#### Student notes

The text in this example shows the head of a typical CloudTrail log, which is a login request. The request fully identifies the user, the time of the event, the source of the request, the user agent string, and the event that occurred (ConsoleLogin).
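A sketch of pulling those identifying fields out of a log record. The record below is a hand-written, abbreviated example with fictitious values, not output copied from the slide:

```python
import json

# Abbreviated, hand-written example of a ConsoleLogin record head;
# the ARN, time, and IP address are fictitious.
record = json.loads("""
{"eventVersion": "1.08",
 "userIdentity": {"type": "IAMUser",
                  "arn": "arn:aws:iam::111122223333:user/alice",
                  "userName": "alice"},
 "eventTime": "2023-04-01T12:34:56Z",
 "eventSource": "signin.amazonaws.com",
 "eventName": "ConsoleLogin",
 "sourceIPAddress": "203.0.113.9",
 "userAgent": "Mozilla/5.0"}
""")

# The fields called out in the notes: who, when, from where, how, what.
summary = {
    "who":   record["userIdentity"]["arn"],
    "when":  record["eventTime"],
    "where": record["sourceIPAddress"],
    "how":   record["userAgent"],
    "what":  record["eventName"],
}
print(summary["what"])   # prints ConsoleLogin
```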

:::

### Slide 19:

![Slide 19](slide_19.png)

::: Notes

The successful login entry shows the outcome field in the CloudTrail log structure. The combination of slides 18 and 19 illustrates a complete CloudTrail event — from the request fields identifying the source and actor to the response fields showing the outcome. When investigating a security incident, you would look at both the event that succeeded and any preceding failed attempts, because repeated authentication failures followed by a success can indicate a brute-force attack.

#### Instructor notes

#### Student notes

The code example here is a continuation of the previous event and displays a successful outcome. That is, the user was able to log in successfully.

:::

### Slide 20:

![Slide 20](slide_20.png)

::: Notes

Amazon Athena enables SQL-based analysis of CloudTrail logs stored in S3 without requiring you to move the data to a database or build a data pipeline. This schema-on-read approach means you can query existing log archives immediately, which is particularly valuable for incident investigations where you need to answer specific questions quickly. The combination of CloudTrail for collection and Athena for analysis forms a cost-effective security investigation platform that scales with log volume.

#### Instructor notes

#### Student notes

Data investigation and research is a key phase in designing effective and efficient processing. For data stored in Amazon S3, Amazon Athena is an interactive query service that helps you analyze data using standard Structured Query Language (SQL). Athena is serverless, so you have no infrastructure to set up or manage, and you can start analyzing data immediately. To get started, sign in to the Athena console, define your schema, and start querying. You can also access data from more than one Region in a single query.

Athena uses *Presto*, a distributed SQL engine, to run queries. It also uses *Apache Hive* to create, drop, and alter tables and partitions. You can write Hive-compliant data definition language (DDL) statements and American National Standards Institute (ANSI) SQL statements in the Athena query editor. You can also use complex joins, window functions, and complex data types in Athena.

Athena uses an approach known as schema-on-read, which you can use to project your schema onto your data at the time you run a query. Schema-on-read reduces the need for any data loading or extract, transform, and load (ETL).

For more information, see "Analyzing Data in S3 Using Amazon Athena" on the AWS Big Data Blog (https://aws.amazon.com/blogs/big-data/analyzing-data-in-s3-using-amazon-athena/).
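An example of the kind of query you might run against CloudTrail logs in Athena, shown here as a Python string. The table name `cloudtrail_logs` and column names are assumptions; they depend on the CREATE TABLE statement you used to register the logs with Athena:

```python
# Hypothetical Athena query over a CloudTrail table: find the most
# recent denied API calls. Table and column names are illustrative and
# would come from your own CREATE TABLE definition.
query = """
SELECT useridentity.arn,
       eventname,
       sourceipaddress,
       eventtime
FROM cloudtrail_logs
WHERE errorcode = 'AccessDenied'
ORDER BY eventtime DESC
LIMIT 50;
"""
print(query.strip().splitlines()[0])
```

You would run a query like this in the Athena query editor; no data is loaded or transformed beforehand, because the schema is projected onto the S3 objects at query time.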

:::

### Slide 21:

![Slide 21](slide_21.png)

::: Notes

Athena uses a table definition to impose structure on CloudTrail log files that are stored as raw JSON in S3. The table definition doesn't move or transform the data — it's a metadata layer that tells Athena how to read it. Supporting multiple log formats (CloudTrail, VPC Flow Logs, CloudFront access logs) from the same Athena service means you can join different log types in a single query, enabling cross-service correlation that would be difficult with separate analysis tools.

#### Instructor notes

#### Student notes

In Athena, tables and databases are containers for the metadata definitions. The metadata defines a schema for the underlying source data. For each Amazon S3 dataset that you select to query, you must create a table in Athena. The metadata in the table specifies where the data is located in Amazon S3. It also specifies the structure of the data, such as column names and data types. The process of creating a table is what registers the dataset with Athena. After the table is created (automatically from the AWS Management Console or manually), you can then query the table. Athena contains a Query Editor to write and run queries.

Athena supports creating tables and querying data from files in comma-separated values (CSV), tab-separated values (TSV), custom-delimited, and JSON formats. Files in Hadoop-related formats are also supported, such as the following:

* Optimized Row Columnar (ORC)
* Apache Avro and Parquet
* Logstash logs
* CloudTrail logs
* Apache web server logs

Properly configure Athena according to the format being queried so that Athena knows which format is used and how to parse the data. For more information, see "Supported SerDes and Data Formats" in the Amazon Athena User Guide (https://docs.aws.amazon.com/athena/latest/ug/supported-serdes.html).

:::

### Slide 22:

![Slide 22](slide_22.png)

::: Notes

AWS Config provides continuous configuration monitoring: it records the state of your resources and detects when that state changes. Unlike CloudTrail, which records API calls, Config tracks resource state — what the configuration of a resource actually is at any point in time. This distinction matters for audit and compliance: CloudTrail tells you what changed and who changed it, while Config tells you whether the current configuration meets your requirements. The two services complement each other for a complete audit trail.

#### Instructor notes

#### Student notes

AWS Config is a continuous monitoring and assessment service that provides you with an inventory of your AWS resources and records changes to the configuration of those resources. You can view the current and historic configurations of a resource and use this information to troubleshoot outages and conduct analyses of security attacks. You can view the configuration at any time and use that information to reconfigure your resources and bring them back into a steady state during an outage.

AWS resources are entities that you create and manage by using the AWS Management Console, the AWS CLI, the AWS SDKs, or AWS Partner tools. Examples of AWS resources include the following:

* Amazon EC2 instances
* Security groups
* Amazon Redshift clusters
* Amazon Virtual Private Cloud (Amazon VPC) components
* Amazon Elastic Block Store (Amazon EBS) volumes

AWS Config refers to each resource by its unique identifier, such as the resource ID or an Amazon Resource Name (ARN). AWS Config also integrates with AWS Systems Manager to provide continuous monitoring and governance of software on your EC2 instances, and it monitors on-premises systems, such as your virtual machines (VMs). AWS Config also includes multi-account, multi-Region data aggregation to provide centralized auditing and governance. This feature reduces the time and overhead needed to gather an enterprise-wide view of your compliance status.

For more information, see "Multi-Account Multi-Region Data Aggregation" (https://docs.aws.amazon.com/config/latest/developerguide/aggregate-data.html) and "Supported Resource Types" (http://docs.aws.amazon.com/config/latest/developerguide/resource-config-reference.html) in the AWS Config Developer Guide.

:::

### Slide 23:

![Slide 23](slide_23.png)

::: Notes

AWS Config rules translate your compliance requirements into automated evaluations that run against every relevant resource in your account. Managed rules provide a library of common checks (EBS encryption, VPC membership, SSH access restrictions) that require minimal configuration; custom rules backed by Lambda let you enforce policies that are specific to your organization. The real power is that Config rules run continuously — so a resource that was compliant yesterday but had its configuration changed will be detected and flagged automatically.

#### Instructor notes

#### Student notes

Using AWS Config rules, you can run continuous assessment checks on your resources to verify that they comply with your own security policies, industry best practices, and compliance standards.

**Managed rules** : AWS provides several managed, prebuilt rules that require minimal to no configuration. For example, AWS Config provides a rule to ensure that encryption is turned on for all Amazon EBS volumes in your account. Managed rules can answer questions such as the following:

* Are all attached EBS volumes encrypted?
* Do all EC2 instances belong to a VPC?
* Are all resources tagged as expected?
* Do all security groups in use deny unregulated, incoming Secure Shell (SSH) traffic?
* Is CloudTrail enabled in your AWS account?
* Does the password policy for IAM users meet specified requirements?

**Custom rules** : You can also write a custom AWS Config rule to codify your own corporate security policies. Custom rules are associated with a Lambda function that you create and maintain. When you invoke a custom rule, it runs the associated Lambda function. AWS Config alerts you in real time when a resource is misconfigured or when a resource violates a particular security policy.

For more information, see "AWS Config Managed Rules" in the AWS Config Developer Guide (http://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html).
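A custom rule's Lambda function receives the configuration item for the changed resource and decides a compliance verdict. A minimal sketch follows; the event is abbreviated, the resource ID is fictitious, and the handler omits the `put_evaluations` call back to AWS Config that a real rule makes through the AWS SDK:

```python
import json

# Sketch of the evaluation logic inside a custom AWS Config rule's
# Lambda function. A real handler would also report the result back to
# AWS Config via the SDK's put_evaluations call; that is omitted here
# so the sketch stays self-contained.

def evaluate_compliance(configuration_item: dict) -> str:
    """Flag EBS volumes that are not encrypted."""
    if configuration_item.get("resourceType") != "AWS::EC2::Volume":
        return "NOT_APPLICABLE"
    encrypted = configuration_item.get("configuration", {}).get("encrypted")
    return "COMPLIANT" if encrypted else "NON_COMPLIANT"

def lambda_handler(event, context):
    # AWS Config delivers the configuration item as a JSON string
    # inside the invoking event.
    invoking_event = json.loads(event["invokingEvent"])
    return evaluate_compliance(invoking_event["configurationItem"])

# Local trial with a hand-written event (IDs are fictitious).
sample_event = {"invokingEvent": json.dumps({"configurationItem": {
    "resourceType": "AWS::EC2::Volume",
    "resourceId": "vol-0abc1234",
    "configuration": {"encrypted": False},
}})}
print(lambda_handler(sample_event, None))   # prints NON_COMPLIANT
```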

:::

### Slide 24:

![Slide 24](slide_24.png)

::: Notes


#### Instructor notes

#### Student notes

The Resource inventory user interface contains the results from the AWS Config rules that you have set up. You can view results by resource type, tag, or compliance status. Expanding one of the entries displays the reason that the specific resource type is compliant or noncompliant. Selecting the resource link provides detailed information for the resource, such as relationships, any past configuration changes, the ARN, and CloudTrail events.

:::

### Slide 25:

![Slide 25](slide_25.png)

::: Notes

Amazon GuardDuty shifts threat detection from reactive log analysis to continuous, ML-driven monitoring. Rather than requiring you to define rules for every possible threat pattern, GuardDuty uses managed threat intelligence feeds and behavioral models that AWS maintains and updates. The finding delivery to EventBridge makes GuardDuty's detections actionable in automated workflows — a finding about a compromised instance can automatically trigger isolation or remediation without human intervention.

#### Instructor notes

#### Student notes

Amazon GuardDuty is an intelligent threat detection service that provides you with a way to continuously monitor and protect your AWS accounts and workloads. GuardDuty identifies suspected unauthorized users through integrated threat intelligence feeds and uses machine learning to detect anomalies in account and workload activity. GuardDuty monitors for activity such as unusual API calls or unauthorized deployments that indicate a customer's accounts might have been compromised. It also monitors for direct threats, such as compromised instances or reconnaissance by unauthorized users or third parties.

When GuardDuty detects a potential security risk, the service delivers a detailed security alert to the GuardDuty console and Amazon EventBridge. The delivery of this information makes alerts actionable and capable of integrating into existing event management and workflow systems.

For more information about Amazon GuardDuty, see "Continuous Monitoring and Threat Detection" (https://aws.amazon.com/security/continuous-monitoring-threat-detection/).

:::

### Slide 26:

![Slide 26](slide_26.png)

::: Notes

GuardDuty analyzes CloudTrail events, VPC Flow Logs, and DNS logs from the moment it's enabled, without requiring you to configure collection pipelines or write detection rules. AWS manages the underlying threat intelligence and detection logic, which means GuardDuty stays current with emerging threats even without operator involvement. The key operational consideration is how you respond to findings: integrating GuardDuty with EventBridge and defining response workflows before an incident occurs is far more effective than deciding what to do after a high-severity finding appears.

#### Instructor notes

#### Student notes

You can turn on GuardDuty in the AWS Management Console with a few steps. When enabled, the service immediately starts analyzing events from CloudTrail, Amazon VPC Flow Logs, and Domain Name System (DNS) logs. AWS Security creates, maintains, and updates the detections, rule sets, and threat intelligence, so that you do not have to write rules or detection logic. When GuardDuty detects a potential threat, it delivers a detailed security finding to the GuardDuty console and EventBridge. This makes alerts actionable and easy to integrate into existing event management or workflow systems. The findings include the category, the resource affected, and metadata associated with the resource. Metadata might include tags, a severity level, an explanation of the finding, and a suggested remediation path. GuardDuty consumes feeds from various sources, such as the following:

* AWS (CloudTrail events, Amazon VPC Flow Logs, and DNS logs)
* Third-party security providers
* Open-source feeds
* Customer-provided threat intelligence

:::

### Slide 27:

![Slide 27](slide_27.png)

::: Notes

GuardDuty findings include structured metadata about the threat: what happened, what resources were involved, and the severity. The severity-based prioritization (high, medium, low) helps operators focus response effort on the most critical issues first. The shared JSON format between GuardDuty, Macie, and Inspector enables consolidated security workflows — you can build a single event-handling pipeline that processes findings from all three services. Ask students: what organizational process change is needed to make the difference between high and medium severity findings operationally meaningful?

#### Instructor notes

#### Student notes

When GuardDuty detects a threat, the service delivers a detailed security finding to the GuardDuty console and EventBridge. The finding details include the following:

* Information about what happened
* Which AWS resources were involved in the suspicious activity
* When and where the activity took place
* Who the actor was

GuardDuty findings come in a common JSON format that is also used by Amazon Macie and Amazon Inspector. By using the JSON format, customers and Partners can consume security findings from all three services and incorporate them into broader event management, workflow, or security solutions. You can view and manage the findings through the Findings page in the web console. You can also use the AWS CLI or API operations (`ListFindings`).

Severity levels indicate a security issue that can result in compromised information confidentiality, integrity, and availability within your AWS infrastructure. GuardDuty findings have an assigned severity level of high, medium, or low. A high severity level means that you treat the security issue as a priority and take immediate remediation steps. A medium level calls for an investigation of the implicated resource at your earliest convenience. A low severity level indicates that there is no immediate recommended action; however, it is worth noting this information as something to address in the future.

With GuardDuty, you can set up automatic archiving when creating a findings filter. This is useful in the following situations:

* A unique use case in your environment generates many similar findings.
* You have reviewed a certain class of findings and don't want to be alerted again.

For more information, see "Remediating Security Issues Discovered by GuardDuty" in the Amazon GuardDuty User Guide (https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_remediate.html).
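The severity triage described above can be sketched as a small helper. This is an illustrative sketch, not production code: the numeric bands follow the severity ranges published in the GuardDuty documentation (low 1.0–3.9, medium 4.0–6.9, high 7.0–8.9), and the sample finding fragment is a made-up placeholder.

```python
# Sketch: map a GuardDuty finding's numeric Severity field to the
# high/medium/low label and response urgency described in the notes.
# Bands assumed from the GuardDuty documentation; verify against the
# current GuardDuty User Guide before relying on them.

def triage(finding: dict) -> str:
    """Return 'high', 'medium', or 'low' for a GuardDuty finding dict."""
    severity = float(finding.get("Severity", 0.0))
    if severity >= 7.0:
        return "high"      # treat as a priority; take immediate remediation steps
    if severity >= 4.0:
        return "medium"    # investigate at your earliest convenience
    return "low"           # no immediate action; note for the future

# Hypothetical finding fragment (fields trimmed; only Severity is used here)
sample = {"Type": "Recon:EC2/PortProbeUnprotectedPort", "Severity": 5.0}
print(triage(sample))  # medium
```

A function like this could sit at the front of an EventBridge-driven pipeline, routing high-severity findings to an on-call alert and archiving the rest for review.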

:::

### Slide 28:

![Slide 28](slide_28.png)

::: Notes

Automated response to GuardDuty findings — such as automatically modifying firewall rules when a known-malicious IP is detected — dramatically reduces the time between detection and remediation. However, automated network isolation or access changes carry operational risk: an incorrect finding or a misconfigured Lambda function can inadvertently disrupt legitimate services. Automated responses should be designed with fallback mechanisms and should initially run in 'alert only' mode until confidence in the finding quality and remediation logic is established.

#### Instructor notes

#### Student notes

With GuardDuty, EventBridge, and Lambda, you have the flexibility to set up automated preventative actions based on a security finding. For example, you can create a Lambda function to modify your firewall or network access control list (network ACL) rules based on security findings. If a GuardDuty finding indicates that one of your EC2 instances is being probed by a known malicious IP, you can address it through an EventBridge rule. The rule launches a Lambda function to automatically modify your firewall or network ACL rules and restrict access on that port.
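A minimal sketch of that Lambda function is shown below, under stated assumptions: the field path to the remote IP follows the GuardDuty finding format for network-connection findings, and the network ACL ID and rule number are hypothetical placeholders. The actual EC2 call is shown in a comment rather than executed.

```python
# Hedged sketch of the GuardDuty -> EventBridge -> Lambda remediation flow.
# The finding field path and the placeholder NACL ID (acl-0123example) are
# assumptions; verify them against real findings in your environment.

def build_deny_entry(finding_detail: dict, network_acl_id: str,
                     rule_number: int = 100) -> dict:
    """Build kwargs for ec2.create_network_acl_entry() that deny all
    inbound traffic from the remote IP reported in a GuardDuty finding."""
    remote_ip = (finding_detail["service"]["action"]
                 ["networkConnectionAction"]["remoteIpDetails"]["ipAddressV4"])
    return {
        "NetworkAclId": network_acl_id,
        "RuleNumber": rule_number,
        "Protocol": "-1",              # all protocols
        "RuleAction": "deny",
        "Egress": False,               # inbound rule
        "CidrBlock": f"{remote_ip}/32",
    }

def lambda_handler(event, context):
    # EventBridge delivers the GuardDuty finding under event["detail"].
    entry = build_deny_entry(event["detail"], network_acl_id="acl-0123example")
    # A real function would now apply the rule:
    #   boto3.client("ec2").create_network_acl_entry(**entry)
    return entry
```

Separating the pure `build_deny_entry` logic from the AWS call keeps the remediation decision testable without credentials, which matters when you want to validate the logic in 'alert only' mode first.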

:::

### Slide 29:

![Slide 29](slide_29.png)

::: Notes

Amazon Inspector automates vulnerability assessment against known CVEs and security benchmarks, providing a systematic alternative to manual security reviews. The agent-based approach means Inspector sees what's actually installed on the instance via the package manager, not just what the AMI was originally built from — any packages added after launch are assessed. The agentless Network Reachability rules package provides a complementary view: it identifies which ports and services are accessible from outside the VPC without requiring agent installation on every instance.

#### Instructor notes

#### Student notes

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings ranked by level of severity. You can review these findings directly or as part of detailed assessment reports that are available from the Amazon Inspector console or using API operations. Because Amazon Inspector is an agent-based service, you can deploy the Amazon Inspector agent on the EC2 instances running the applications that you want to assess. Amazon Inspector also offers agentless network assessments with the Network Reachability rules package. Network Reachability identifies ports and services on your EC2 instances that are accessible from outside your VPC.

:::

### Slide 30:

![Slide 30](slide_30.png)

::: Notes

Inspector's scope is limited by its installation method detection: software installed via APT, YUM, or MSI is assessed, but software installed from source or binary copies is invisible to it. This is a significant coverage gap in environments that rely on manual or non-standard installation methods. The two report types — findings report (failures only) and full report (passes and failures) — serve different audiences: the findings report is action-oriented for remediation teams; the full report provides compliance evidence that all rules were evaluated.

#### Instructor notes

#### Student notes

Amazon Inspector finds applications by querying the package manager or software installation system on the operating system where the agent is installed. This means that software that was installed through the package manager is assessed for vulnerabilities.

Note: Limitations and restrictions on the installation methods recognized by Amazon Inspector include the following: Amazon Inspector assesses software installed through APT, YUM, or Microsoft Installer. Amazon Inspector does not assess software installed through `make config`/`make install` or binary files copied directly to the system using automation software, such as Puppet or Ansible.

**Reports**: An assessment report is a document that details what is tested in the assessment run and the results of the assessment. The results of your assessment are formatted into standard reports. You can generate the reports to share results within your team for remediation actions, to enrich compliance audit data, or to store for future reference. You can generate an Amazon Inspector assessment report for an assessment run when it has been successfully completed. You can select from two types of report for your assessment: a findings report or a full report. The findings report contains a summary of the following:

* The assessment
* Instances targeted
* Rules packages tested
* Rules that generated findings
* Detailed information about each of these rules, along with the list of instances that failed the check

The full report contains all the information in the findings report. The report also provides a list of rules that passed on all instances in the assessment target.

:::

### Slide 31:

![Slide 31](slide_31.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 32:

![Slide 32](slide_32.png)

::: Notes

Detection controls are only as valuable as your response to what they detect. Incident response preparation — runbooks, team training, access provisioning, and simulation exercises — determines how quickly you can contain an incident and restore operations. A critical cloud-specific consideration is that responders need the right IAM permissions to take response actions before an incident occurs, not after — discovering that your incident response team can't access the affected resources during an active incident is a preparation failure. Simulation exercises help surface these gaps safely.

#### Instructor notes

#### Student notes

Even with extremely mature preventive and detective controls, your company should still implement mechanisms to respond to and mitigate the potential impact of security incidents. Your preparation strongly affects your team's ability to operate effectively during an incident, to isolate and contain issues, and to restore operations to a known good state.

Educate your security operations and incident response staff about cloud technologies and how your organization intends to use them. Prepare your incident response team to detect and respond to incidents in the cloud, enable detective capabilities, and ensure that appropriate access is given to the necessary tools and cloud services. Additionally, prepare the necessary runbooks, both manual and automated, to ensure reliable and consistent responses. Work with other teams to establish expected baseline operations, and use that knowledge to identify deviations from those normal operations. An Automation runbook defines the actions that Systems Manager performs on your managed instances and other AWS resources when an automation runs. Automation is a capability of AWS Systems Manager.

Simulate both expected and unexpected security events in your cloud environment to understand the effectiveness of your preparation. Iterate on the outcome of your simulations to improve the scale of your response posture, reduce time to value, and further reduce risk. The focus of this section is on automating remediation. For more information, see "Incident Response" in the Security Pillar of the AWS Well-Architected Framework (https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/incident-response.html).

:::

### Slide 33:

![Slide 33](slide_33.png)

::: Notes

Playbooks translate your incident response plan into structured, executable steps that can be followed consistently under pressure. AWS Config's integration with Systems Manager Automation, EventBridge, and Lambda enables automated playbook execution — when Config detects a compliance violation, it can automatically trigger the remediation steps without requiring manual intervention. Starting with likely scenarios means building playbooks for the events you actually see in your environment, not hypothetical threats.

#### Instructor notes

#### Student notes

Create incident response plans as playbooks, starting with the most likely scenarios for your workload and company. These might be events that are currently generated. When AWS Config detects a change in compliance, it can use Automation (predefined and custom) to remediate. If you need to use a Lambda function to remediate, AWS Config can initiate an Amazon EventBridge event, and the EventBridge event can launch a Lambda function. Use EventBridge events to initiate remediation in the form of Automation, Lambda, and several other targets.
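The Config-to-EventBridge routing described above can be sketched as an event pattern. The `source` and `detail-type` values follow the events that AWS Config publishes for compliance changes; treat the exact field names as assumptions to confirm against a sample event in your account.

```python
import json

# Sketch of an EventBridge event pattern for the flow in the notes: AWS
# Config reports a resource as NON_COMPLIANT, and the rule routes the
# event to a remediation target (an Automation runbook or Lambda function).
pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}

# A real rule would be created with something like:
#   boto3.client("events").put_rule(Name="config-noncompliant",
#                                   EventPattern=json.dumps(pattern))
print(json.dumps(pattern, indent=2))
```

Filtering on `NON_COMPLIANT` at the rule level keeps the Lambda target from being invoked for every evaluation, including the ones that pass.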

:::

### Slide 34:

![Slide 34](slide_34.png)

::: Notes

AWS Config's automated remediation closes the loop between detection and correction: a noncompliant resource is detected, and the remediation action is applied automatically using Systems Manager Automation documents. This eliminates the manual workflow of reviewing Config findings and then separately taking corrective action. The tradeoff is risk: automated remediation that modifies production resources must be carefully designed and tested, because an incorrectly configured remediation action can cause unintended changes at scale.

#### Instructor notes

#### Student notes

Use AWS Config to remediate noncompliant resources that are evaluated by AWS Config rules. AWS Config applies remediation by using Automation documents. These documents define the actions to be performed on noncompliant AWS resources evaluated by AWS Config rules. For more information, see "Remediating Noncompliant Resources with AWS Config Rules" in the AWS Config Developer Guide (https://docs.aws.amazon.com/config/latest/developerguide/remediation.html).

:::

### Slide 35:

![Slide 35](slide_35.png)

::: Notes

AWS Config provides a library of managed Automation documents for common remediation actions, such as enabling S3 encryption or restricting public access. For organization-specific requirements, custom Automation documents backed by Systems Manager enable arbitrary remediation logic. The choice between manual and automatic remediation is a risk-versus-speed decision: automatic remediation responds instantly and at scale, but requires higher confidence in the rule and remediation logic; manual remediation gives operators control but introduces latency.

#### Instructor notes

#### Student notes

AWS Config provides a set of managed Automation documents with remediation actions. You can also create and associate custom Automation documents with AWS Config rules.

**Predefined or custom Automation documents**: To apply remediation to noncompliant resources, you can choose the remediation action from a prepopulated list, or create your own custom remediation actions using Systems Manager documents. AWS Config provides a recommended list of remediation actions in the AWS Config console.

**Manual or automatic remediation**: In the AWS Management Console, you can choose to manually or automatically remediate noncompliant resources by associating remediation actions with AWS Config rules. With all remediation actions, you can choose manual or automatic remediation.
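Associating a remediation action with a rule can also be done through the AWS Config API. The sketch below builds the request for `PutRemediationConfigurations`; the rule and document names reuse the example from the troubleshooting scenario later in this module, and the retry values are illustrative, not recommendations.

```python
# Hedged sketch: attach an automatic remediation action to an AWS Config
# rule. Parameter names follow the AWS Config API reference; confirm any
# required Parameters for your chosen Automation document before use.

def remediation_config(rule_name: str, document_name: str,
                       automatic: bool = True) -> dict:
    return {
        "ConfigRuleName": rule_name,
        "TargetType": "SSM_DOCUMENT",     # remediation runs as an Automation document
        "TargetId": document_name,
        "Automatic": automatic,           # False = operator triggers remediation manually
        "MaximumAutomaticAttempts": 3,    # illustrative retry settings
        "RetryAttemptSeconds": 60,
    }

config = remediation_config("s3-bucket-server-side-encryption-enabled",
                            "AWS-EnableS3BucketEncryption")
# Applied with:
#   boto3.client("config").put_remediation_configurations(
#       RemediationConfigurations=[config])
print(config["TargetId"])
```

Flipping `automatic=False` gives the manual-remediation behavior described above: AWS Config surfaces the action in the console, but an operator must choose to run it.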

:::

### Slide 36:

![Slide 36](slide_36.png)

::: Notes

AWS Lambda provides the execution environment for custom remediation logic — when managed Automation documents don't cover your specific remediation requirement, Lambda functions can implement any corrective action that's callable via an AWS API. Lambda's stateless, event-driven model maps well to remediation use cases: each invocation handles a single compliance event independently. The key security consideration is the Lambda execution role: it must have exactly the permissions needed for the remediation action, following least-privilege principles.

#### Instructor notes

#### Student notes

With Lambda, you can run code without provisioning or managing servers. You can run code for virtually any type of application or backend service, all with no administration. You can set up your code to be automatically invoked from other AWS services or call it directly from any web or mobile app.

The code that you run on Lambda is called a Lambda function. Functions are modular. Instead of having one function called file processor that does compression, thumbnailing, and indexing, consider having three different functions, where each function does one task. After you create your Lambda function, it is always ready to run as soon as it is initiated.

Lambda functions are stateless, with no affinity to the underlying infrastructure. This means that Lambda can rapidly launch as many copies of the function as needed to scale to the rate of incoming events. After you upload your code to Lambda, you can associate your function with specific AWS resources (for example, a particular Amazon SNS notification). Then, when the resource changes, Lambda runs your function and manages the compute resources as needed to keep up with incoming requests. If you need to store secrets to access external services, you can use AWS Key Management Service (AWS KMS) to store and retrieve the secrets within your Lambda function.

:::

### Slide 37:

![Slide 37](slide_37.png)

::: Notes

Lambda accesses other AWS services through the IAM execution role it assumes at runtime, which means the security of automated remediation depends on designing that role with appropriate scope. Running Lambda within a VPC allows it to access resources in private subnets, such as RDS databases or internal APIs, but introduces additional networking complexity and potential latency. The compliance certifications listed are relevant for organizations operating in regulated industries where the remediation infrastructure itself must meet compliance requirements.

#### Instructor notes

#### Student notes

Lambda allows your code to securely access other AWS services through its built-in AWS SDK and integration with IAM. You grant permissions to your Lambda function to access other resources using an IAM role. Lambda assumes the role while running your Lambda function, so you always retain full and secure control of exactly which AWS resources it can use. Lambda runs your code within a VPC by default. You can also configure Lambda to access resources behind your own VPC. You can use custom security groups and network ACLs to provide your Lambda functions access to your resources within a VPC. Lambda is compliant with the following:

* System and Organization Controls (SOC)
* Health Insurance Portability and Accountability Act (HIPAA)
* Payment Card Industry (PCI)
* International Organization for Standardization (ISO)

:::

### Slide 38:

![Slide 38](slide_38.png)

::: Notes

Event-driven remediation creates a closed-loop security system where detection automatically triggers correction — in this example, disabling CloudTrail triggers its own re-enablement and an audit notification. This pattern is powerful because it removes the human latency from the response loop, but it requires careful design: the Lambda function must not be disableable by the same actor who disabled CloudTrail, otherwise the remediation itself can be circumvented. Ask students how they would protect the Lambda function and EventBridge rule from the same unauthorized actor.

#### Instructor notes

#### Student notes

With an event-driven response system, a detective mechanism invokes a responsive mechanism to automatically remediate the event. You can use event-driven response capabilities to reduce the time-to-value between detective mechanisms and responsive mechanisms. To create this event-driven architecture, you can use Lambda.

For example, assume that you have an AWS account with the CloudTrail service enabled. If CloudTrail is ever disabled, the response procedure is to enable the service again and investigate the user who disabled the CloudTrail logging. You can use EventBridge to monitor for the specific `cloudtrail:StopLogging` event and run the function if it occurs. When EventBridge invokes this Lambda function, the function collects the following details of the specific event:

* Information about the principal who disabled CloudTrail
* When it was disabled
* The specific resource that was affected
* Other relevant information

This processed information could then be sent as an Amazon SNS notification. You can use this information to essentially perform a log dive and then generate a notification or alert with only the specific values that a response analyst would require. Another Lambda function can also be invoked to automatically restart the logging.
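The collection step above can be sketched as a small handler. This is a hedged sketch: the field paths (`userIdentity`, `eventTime`, `requestParameters.name`) follow the standard CloudTrail record format as delivered under EventBridge's `detail` key, but verify them against a real `StopLogging` event in your account before depending on them.

```python
# Sketch of the detail-collection step for a StopLogging response function.
# Field paths are assumptions based on the CloudTrail record format.

def summarize_stop_logging(event: dict) -> dict:
    """Extract who disabled CloudTrail, when, and which trail was affected."""
    detail = event["detail"]  # EventBridge wraps the CloudTrail record in "detail"
    return {
        "principal": detail["userIdentity"].get("arn", "unknown"),
        "when": detail["eventTime"],
        "trail": detail["requestParameters"]["name"],
    }

def lambda_handler(event, context):
    summary = summarize_stop_logging(event)
    # A real function would now publish `summary` via sns.publish(...) and/or
    # re-enable logging with cloudtrail.start_logging(Name=summary["trail"]).
    return summary
```

Note the protection concern raised in the notes: the IAM policies guarding this function and its EventBridge rule should not be modifiable by the same principals who can call `StopLogging`.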

:::

### Slide 39:

![Slide 39](slide_39.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 40:

![Slide 40](slide_40.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 41:

![Slide 41](slide_41.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 42:

![Slide 42](slide_42.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 43:

![Slide 43](slide_43.png)

::: Notes

This troubleshooting scenario illustrates a common real-world situation: automated remediation ran but failed, and the details page doesn't explain why. The answer requires knowing where remediation execution details live — in the Systems Manager Automation console, not in the Config console. This is a pattern worth remembering: AWS services often delegate execution to another service, and troubleshooting requires following the execution chain to find where the failure actually occurred.

#### Instructor notes

#### Student notes

You have configured the AWS Config rule `s3-bucket-server-side-encryption-enabled` to detect buckets with encryption disabled. You configured the AWS Config rule with automatic remediation using the Automation document `AWS-EnableS3BucketEncryption`. Your bucket, mytestbucket-2020090210am, is marked as noncompliant, and the remediation action has generated an error. The details page does not include all information on the cause of the error. Where do you find more information? Because Automation manages remediation, you will find more details on the processed Automation documents in the Automation console.

:::

### Slide 44:

![Slide 44](slide_44.png)

::: Notes


#### Instructor notes

#### Student notes

Because the remediation is an Automation document, you can find more information about enacted documents in the Automation console. You will find a list of processed automations. Find the Automation document in the list, and open it for more details. In this example, the Execution detail steps have an AccessDenied error. The following are the most likely causes for the error:

* S3 encryption permissions are missing
* The `iam:PassRole` policy is missing

:::

### Slide 45:

![Slide 45](slide_45.png)

::: Notes


#### Instructor notes

#### Student notes

You can use the AWS CLI to find detailed information about AWS Config remediation errors. For more information, see "How Can I Troubleshoot Failed Remediation Actions in AWS Config?" in the AWS Knowledge Center (https://aws.amazon.com/premiumsupport/knowledge-center/config-remediation-executions/).
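The same status check can be made through the AWS Config API. The sketch below shows the request shape for `DescribeRemediationExecutionStatus`; the call itself is commented out because it requires AWS credentials, and the response field names shown in comments are assumptions to confirm against the API reference.

```python
import json

# Hedged sketch: query remediation execution status for the rule from the
# troubleshooting scenario. The API call is shown but not executed here.

RULE_NAME = "s3-bucket-server-side-encryption-enabled"

request = {"ConfigRuleName": RULE_NAME}
# status = boto3.client("config").describe_remediation_execution_status(**request)
# for execution in status["RemediationExecutionStatuses"]:
#     print(execution["State"])          # e.g. FAILED
#     print(execution.get("StepDetails"))  # per-step errors, such as AccessDenied
print(json.dumps(request))
```

The per-step details are where an `AccessDenied` from a missing `iam:PassRole` permission would surface.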

:::

### Slide 46:

![Slide 46](slide_46.png)

::: Notes


#### Instructor notes

#### Student notes

This example shows the output of the error.

:::

### Slide 47:

![Slide 47](slide_47.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 48:

![Slide 48](slide_48.png)

::: Notes


#### Instructor notes

#### Student notes

:::

### Slide 49:

![Slide 49](slide_49.png)

::: Notes


#### Instructor notes

#### Student notes

This example shows the output of the error.

:::
