Introduction to AWS
Amazon Web Services (AWS)
- Amazon: One of the Big Tech companies and a member of FAANG
- Web: Accessible over standard web protocols via the internet
- Services: Continuous offerings that customers can utilize for a specified period, typically following a pay-as-you-go model (pay only for what you use)
- Example: An Internet Service Provider (ISP) delivers internet connectivity as a service. You pay a monthly fee for access at a specified speed. You can start or stop the service contract, paying only for the duration of usage.
- Other examples of services include Netflix subscriptions, utilities like water or heating, and electric vehicle charging stations.
- Service vs. Product: A service is consumed over time (e.g., HelloFresh delivers meals regularly for as long as you subscribe), whereas a product is a one-time purchase you own outright (e.g., a lunchbox from a supermarket).
AWS Cloud Use Cases
- AWS enables the development and hosting of sophisticated, scalable software applications across any industry
- Applications can optionally have a global reach
- Common use cases include:
- Hosting websites or web applications
- Cloud-based data storage and backup
- Big data analytics and processing
- Hosting gaming servers
- …and many more
AWS Cloud Pricing Model
- Primarily a Pay-as-You-Go model
- Discounts may be available when reserving capacity or committing to long-term usage
- This approach addresses the high costs associated with traditional IT infrastructure
- AWS pricing is based on three fundamental components:
- Compute: Pay only for the compute time used
- Storage: Pay for the data stored within AWS
- Network:
- Data transferred into AWS is free
- Data transferred out of AWS incurs charges
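A back-of-the-envelope sketch of how these three components combine into a monthly bill (the unit prices below are invented for illustration only, not real AWS rates):

```python
# Toy illustration (made-up unit prices, not real AWS rates):
# a monthly pay-as-you-go bill combines the three pricing components above.
compute_hours, price_per_hour = 200, 0.0116      # compute: pay per hour used
stored_gb, price_per_gb_month = 50, 0.023        # storage: pay per GB stored
egress_gb, price_per_egress_gb = 10, 0.09        # network: data OUT is charged
ingress_gb = 100                                 # network: data IN is free

bill = (compute_hours * price_per_hour
        + stored_gb * price_per_gb_month
        + egress_gb * price_per_egress_gb)       # ingress adds nothing
print(round(bill, 2))
```

Note how the 100 GB transferred into AWS contributes nothing to the total, while the 10 GB transferred out does.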
AWS Responsibility Model and Usage Policy
AWS Shared Responsibility Model for Security
- Official AWS reference: AWS Shared Responsibility Model
- AWS employs this model to clearly define security responsibilities between the provider and the customer:
- Security OF the cloud: Managed by AWS
- Security IN the cloud: Managed by the customer
- The concept resembles the division of responsibilities in the cloud infrastructure stack (e.g., IaaS, PaaS), where responsibilities are shared between vendor and customer depending on the service model. Key distinctions for the Shared Responsibility Model:
- Applied across the entire cloud environment
- Focused exclusively on security
- Practical tip: Keep this model visible (e.g., printed out nearby) as a reference while learning AWS security responsibilities
AWS Acceptable Use Policy (AUP)
- Official AWS reference: AWS Acceptable Use Policy
- While largely straightforward, AWS clearly outlines the rules:
- No illegal, harmful, or offensive use or content
- No security violations
- No network abuse
- No e-mail or other messaging abuse
Introduction to AWS Accounts
AWS Account – Key Concepts
- AWS account: A container for identities and AWS resources
- Note: An AWS account is not the same as a human user within the account.
- Identity: A user, application, or entity capable of logging in to an AWS account
- Exception: IAM groups (explained later)
- AWS resource: Any software, hardware, or data that operates in or is stored within the AWS cloud and is associated with an AWS account
- Examples: A virtual server (EC2 instance), an S3 bucket containing files or images
- According to AWS: a resource is “an entity that you can work with.”
- Resources are created inside AWS services
- Example: S3 is a service; an S3 bucket is a resource created within S3
- For simple systems, a single AWS account may suffice. However, complex systems typically span multiple accounts, with tools like AWS Organizations helping manage them
- Best practice: AWS accounts should be disposable. Avoid placing all business operations in a single account.

- When setting up an AWS account, the following information is required:
- Account Name
- Example: `mywebapp-PROD`
- Unique Email Address
- Used to create the root user
- Must be unique for each account; sharing is not allowed
- Gmail trick: Use `+` to create unique addresses that redirect to the same inbox
- Example: `user@gmail.com` and `user+awsaccount@gmail.com` are considered unique by AWS, but both deliver to `user@gmail.com`
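The trick works because Gmail ignores everything between `+` and `@` when routing mail, while AWS compares email addresses as plain strings. A minimal sketch of that routing rule (`gmail_inbox` is a hypothetical helper, not a Google or AWS API):

```python
def gmail_inbox(address: str) -> str:
    """Return the inbox a Gmail address actually delivers to.

    Gmail ignores everything from '+' to '@' in the local part, so
    user+awsaccount@gmail.com delivers to user@gmail.com.
    """
    local, _, domain = address.partition("@")
    base_local = local.split("+", 1)[0]
    return f"{base_local}@{domain}"

a = "user@gmail.com"
b = "user+awsaccount@gmail.com"
print(a != b)                            # True: distinct strings for AWS
print(gmail_inbox(a) == gmail_inbox(b))  # True: same inbox for Gmail
```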
- Credit Card
- Required as the account’s payment method
- Can be shared across multiple accounts
- AWS operates on a pay-as-you-go/pay-as-you-consume model
- Charges are applied as services are used
- Free tier: Certain services include limited free usage each month, which is ideal for minimizing costs while learning AWS
AWS Account Root User
- Root user: The default identity of an AWS account
- Possesses full access and control over the account, with no restrictions
- The first and initially only identity in an account
- The root user can be loosely considered synonymous with the AWS account itself
- Handle with extreme caution! Compromise of root user credentials jeopardizes the entire account
- Recommended practice: Use the root user only for initial setup, emergency tasks, and account closure. Create a separate administrative identity (e.g., `iamadmin`) for routine administrative activities
IAM – Identity and Access Management
- IAM: AWS service for creating additional account identities that can be permission-restricted
- IAM identities include users, groups, and roles
- By default, IAM identities have no permissions (unlike the root user, which always has full access)
- Permissions can be granted fully or partially to access specific services and resources within the account
AWS Account Boundaries
- Account boundary: All resources within an account are isolated from external access by default
- External access must be explicitly granted
- This isolation helps contain potential risks, such as administrative errors or security breaches
- Single-account usage is risky. Multiple accounts limit potential damage
- Recommended practice: Use separate accounts for different environments (DEV, TEST, PROD)
- Consider separate accounts for different teams, products, or clients
Free and Paid AWS Accounts
- Historically, AWS accounts did not distinguish between free or paid accounts; some services offered a free usage tier for 12 months
- Since 2025, AWS offers Free accounts for new customers: AWS Free Tier
- No billing for the first 6 months
- Up to $200 in AWS credits for usage
- Once 6 months or credit limit is reached, workloads stop until upgrading to a paid plan
- Not all AWS services and features are available in Free accounts

- The eligibility requirements for obtaining a Free AWS account are very strict:
- Each customer is allowed to use a Free account only once
- Customer credentials—including name, credit card, billing address, email, and other personal information—cannot be shared with any other account, even if other accounts are closed or inactive
- Note: The Gmail `+` trick does not work for Free accounts; it only applies to Paid accounts
- If AWS determines that a customer is attempting to use a second Free account, they may deny eligibility for additional Free accounts and may suspend any existing Free accounts associated with that customer
- Recommendation: Consider using a Paid AWS account from the beginning
- A credit card is still required for Free accounts, so starting with a Paid account simplifies setup
- Paid accounts still allow you to take advantage of the $200 in AWS credits
- Workloads in Paid accounts will not be stopped unexpectedly
- Paid accounts provide access to all AWS services and features
- There is no time pressure, unlike Free accounts, which have a 6-month usage window
- Free accounts may be suitable for students who can dedicate significant time over six months to learn AWS
- Be prepared to spend some money while learning AWS
- Running resources incurs costs because they operate on AWS hardware
- Learning to manage budgets and clean up unused resources early is essential for cloud engineers
- Develop the habit of checking billing for each service (e.g., search “AWS service X billing”)
- Experiencing a small billing surprise during learning is preferable to encountering unexpected costs in a real production environment
Demo: Creating an AWS Account
- A typical AWS account setup will include Multi-Factor Authentication (MFA), a budget alarm, and an `iamadmin` identity (all explained later)
- To sign up for an AWS account, visit: AWS Account Signup
- Enter your personal credentials and required information
- Paid accounts are recommended, although eligible users may choose a Free account (note that some features or services may be limited in Free accounts)
- When prompted to select a support plan, choose Basic Support – Free
- Enable IAM Access to Billing Information under the “Account” drop-down menu

- For the purposes of this course, it is recommended to use the Northern Virginia region (`us-east-1`) to ensure access to all services and resources
- Not all AWS regions provide the latest services and features
- Alternatively, you may select the region geographically closest to you for optimal performance
MFA (Multi-Factor Authentication)
Why Multi-Factor Authentication (MFA) is Needed
- Web-based authentication typically relies on usernames and passwords
- If these credentials are compromised, anyone could impersonate the account owner
- Authentication factors are distinct pieces of evidence used to verify identity:
- Knowledge: Something you know (e.g., username and password)
- Possession: Something you have (e.g., bank card, MFA device/app, U2F key)
- Inherent: Something you are (e.g., fingerprint, facial recognition)
- Location: Where you are (e.g., geographic coordinates, network/IP address)
- Security versus convenience trade-off:
- Using multiple factors increases security and makes impersonation more difficult
- However, additional factors require more time and effort during authentication
- Types of authentication based on the number of factors:
- Single-Factor Authentication (SFA/1FA): Uses only one factor
- Two-Factor Authentication (2FA): Uses two factors
- Multi-Factor Authentication (MFA): Uses more than one factor

MFA in AWS
- MFA can be activated for any user, such as the account root user or an `iamadmin` user
- AWS provides a secret key and additional setup information, typically via a QR code
- This information is entered into an MFA device or app, such as Google Authenticator
- Passkeys and password managers (e.g., 1Password, Proton Pass) can also serve as a second factor
- Once configured, the MFA code in the device or app refreshes periodically
- During authentication, both the user credentials (username and password) and the current MFA code are required, providing enhanced security
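Virtual MFA apps generally implement TOTP (RFC 6238): the shared secret from the QR code is combined with the current 30-second time window to derive the rotating code. A minimal sketch assuming the common defaults (HMAC-SHA1, 6 digits, 30-second period):

```python
import base64
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(base32_secret: str, period: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the current time.
    base32_secret is the key AWS shows alongside the QR code."""
    secret = base64.b32decode(base32_secret, casefold=True)
    return hotp(secret, int(time.time()) // period)
```

This is why the code "refreshes" every 30 seconds: the counter changes with each time window, so the HMAC output changes too.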
Demo: Securing an AWS Account with MFA
- Navigate to the account drop-down menu → Security Credentials → Assign MFA
- Select “Authenticator App” as the MFA device and follow the setup steps
- Any authenticator app can be used, including Google Authenticator, Authy, or 1Password
Budget Creation
AWS Free Tier
- Detailed information about the AWS Free Tier is available at: AWS Free Tier
- Some services offer free trials, others provide a monthly free-tier allowance, and certain services offer free usage indefinitely
- As discussed in Free vs Paid Accounts, eligible users can utilize a Free account. However, it imposes restrictions on accessible services and features and is valid for only six months
- AWS provides granular tools to track resource consumption in real time, such as AWS Cost Explorer
- Navigate to the drop-down menu → Billing and Cost Management → Bills
- View a summary of past bills and current-month billing
- Under Billing Preferences, it is recommended to check all available options for convenience

Creating a Cost Budget
- AWS Budgets are highly effective for monitoring expenditures and sending alerts when spending approaches defined thresholds
- Navigate to the drop-down menu → Billing Dashboard → Budgets → Create a Budget
- A “Zero Spend Budget” is particularly useful for remaining entirely within the Free Tier
- In this course, demos and labs are designed to stay within the AWS Free Tier to avoid costs.
- If unexpected charges occur, setting a budget will trigger alerts, reminding you to delete any resources that may be incurring costs
AWS IAM Fundamentals
Identity and Access Management (IAM) Service
- AWS IAM is a core AWS service responsible for managing identities
- It has three primary functions:
- Manages identities – IAM serves as an Identity Provider (IDP)
- Authenticates identities – Verifies that an entity is who it claims to be
- Authorizes identities – Grants or denies access to resources based on policies
- IAM is a free service
- IAM is a public and global service; its data is secured and available across all AWS regions
- IAM provides full control permissions, but these are limited to local identities within the account
- IAM cannot directly manage identities in external accounts
- Each AWS account has its own IAM instance, separate from other accounts, and the account fully trusts its own IAM instance
- IAM can perform nearly all account tasks, except for billing management and account closure, which are restricted to the account root user
- IAM also handles:
- Multi-Factor Authentication (MFA)
- Identity Federation, allowing external identities (e.g., web identities like Facebook or Google, or corporate Active Directory accounts) to access AWS resources indirectly
Root User of the Account
- The account root user is the default identity of an AWS account
- Linked to the account’s email address
- The account fully trusts the root user, granting full, unrestricted access
- The root user and the AWS account can be considered loosely equivalent
- The root user is not an IAM identity and is outside IAM control
- Best practice: Use the root user only for exceptional tasks that cannot be performed by other users
- Root User Privileges outlines these exceptional tasks
- Important: The root user of an AWS account (effectively the account owner, and not an IAM identity) should not be used for routine tasks, as it cannot be restricted.
- The root user is still required for certain critical tasks; this may be tested on exams
- Key privileges of the root user (memorize these):
- Change account settings (account name, email address, root user password, root user access keys)
- Close or delete the AWS account
- Change or cancel the AWS Support Plan
- Register as a seller in the Reserved Instance Marketplace
- Example: If you purchase a three-year reserved instance but only use it for two years, the unused capacity can be sold in this marketplace; root user access is required to register as a seller
- Additional privileges (not essential to memorize):
- Principle of least privilege: Grant identities only the permissions necessary for their tasks and restrict all other access

IAM Identities and Policies
- IAM allows the creation of additional identities within an account, called IAM identities:
- IAM users: Individual humans or applications that require long-term access to an account
- Each user represents a distinct entity
- Long-term credentials (username and password and/or access keys) are used
- IAM groups: Collections of IAM users, such as development, finance, or HR teams
- IAM roles: Used by AWS services or to grant temporary access to external entities
- Short-term credentials are used when the number of entities is uncertain
- IAM users: Individual humans or applications that require long-term access to an account
- IAM policies are documents attached to IAM identities that allow or deny access to AWS services and resources
- Policies are written in JSON
- Permissions granted by a policy are fully trusted by the account, similar to how the account trusts IAM
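For illustration, a hypothetical identity policy allowing read-only access to a single S3 bucket (the bucket name is invented; `Version`, `Effect`, `Action`, and `Resource` are the standard IAM policy elements):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnExampleBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

An identity with only this policy attached could list and read objects in `example-bucket` but do nothing else in the account.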

Demo: Adding an IAM Admin User to an AWS Account
- IAM users sign in via a sign-in URL: `https://<account-id>.signin.aws.amazon.com/console` (the account ID is a numeric string)
- An account alias can be set for a more user-friendly URL (must be globally unique):
- Example: `https://<account-alias>.signin.aws.amazon.com/console`
- In this demo, a new IAM user with full administrative permissions is created, named `iamadmin`
- This allows the root user to be used only for exceptional tasks
- The root user cannot be restricted, deleted, or recreated, so normal administrative tasks should not be performed with it
- Creating the IAM user: Navigate to IAM → Users → Add Users
- Enter a username (unique within the account)
- Grant permissions during creation, e.g., assign the `AdministratorAccess` policy to `iamadmin`
- This policy provides full account access, except for a few privileges reserved for the root user, such as closing the account
- When signing in with `iamadmin`, the username is displayed in the top-right corner of the console

- Ensure that the `iamadmin` user is also secured with MFA
Logging into AWS Accounts
Three Methods for Accessing AWS Accounts, Services, and Resources
- AWS Management Console (Web UI)
- AWS CLI (Command Line Interface)
- AWS SDK (Software Development Kit)
AWS Management Console (Console UI)

- Web-based user interface accessible through a browser
- Requires login with a defined identity in the account: root user, IAM user, or IAM role
- Authentication is protected by password and optionally MFA
- Interacting with the Console UI is often referred to as “ClickOps”, since operations are performed primarily through clicks
- In contrast, Infrastructure as Code (IaC) and DevOps workflows rely on scripts, code, and automation to manage AWS services and resources
AWS CLI (Command Line Interface)

- Enables interaction with AWS services via commands executed in a shell or terminal using public APIs
- Requires installation of the AWS CLI on your system
- Installation instructions: AWS CLI Getting Started
- Open-source: AWS CLI on GitHub
- Requires IAM Access Keys
AWS SDK (Software Development Kit)
- Provides language-specific code libraries that can be integrated into application code or scripts
- Enables programmatic access and management of AWS services
- Requires IAM Access Keys
- Example: AWS CLI is built on the AWS SDK for Python
AWS CloudShell
- A terminal/CLI environment hosted within an AWS region and accessible directly from the Console UI
- Includes AWS CLI, preloaded credentials, and the selected region
- Provides a small storage repository for the account to store files if needed
- Not available in all regions
- Supported regions: AWS CloudShell Documentation

IAM Access Keys
Long-Term and Short-Term Credentials
- Credentials are pieces of information recognized by AWS and its identities that enable authentication, meaning access to an AWS account.
- Long-term credentials are persistent and do not rotate automatically or on a schedule.
- Examples include a username and password or IAM access keys.
- The owner of long-term credentials is responsible for changing them manually, such as updating a password.
- Short-term credentials are temporary and expire after a limited period.
- Identities using short-term credentials must regularly request new ones to maintain continued access.
- Long-term credentials are persistent and do not rotate automatically or on a schedule.
- The account root user and IAM users rely on long-term credentials, while IAM roles use short-term credentials.
- Credentials consist of both a public and a private component.
- For example, a username is public, while a password is private. MFA serves as an additional private authentication factor.
IAM Access Keys
- Access to AWS through the AWS CLI and AWS APIs is typically performed using IAM access keys.
- IAM access keys are long-term credentials provided by AWS.
- Each access key pair contains two components, both required for authentication:
- Access Key ID (public), for example: `AKIAIOSFODNN7EXAMPLE`
- Secret Access Key (private), which is longer and more complex (for example: `wJalrXUtnFEMI…`)
- Once generated, the secret access key is never displayed again by AWS.

- Access keys can be created, deleted, deactivated, and reactivated.
- Newly created access keys are active by default.
- Access keys cannot be modified.
- Instead, they can be rotated by deleting and recreating them, which results in new keys.
- An IAM user may have:
- Zero or one username and password pair
- Some IAM users are intended solely for CLI or API access and therefore do not require a password.
- Zero, one, or two access key pairs
- Having two access keys supports safe key rotation.
- Although root users can generate access keys, doing so is strongly discouraged.
- The root user should not be used for routine tasks, making CLI or API usage unnecessary in most cases.
Demo: Creating Access Keys and Configuring AWS CLI v2
- Create access keys by navigating to: Drop-down menu → Security Credentials → Create Access Key
- Download and securely store the access keys locally, such as by using the CSV file option.
- Install AWS CLI v2: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
- After installation, entering the `aws` command in your terminal should display usage instructions, confirming a successful setup.
- Run `aws --version` to verify that version 2 or later is installed.
- AWS CLI v2 supports profiles for managing multiple configurations.
- `aws configure` sets up the default profile.
- Providing a profile name creates a named profile.
- Example: `aws configure --profile iamadmin-general`
- Configures a named profile for the `iamadmin` user in the general AWS account.
- Requires entering the access keys, a default region (such as `us-east-1`), and a default output format (None).
- Use `aws s3 ls` to list all S3 buckets in the account.
- This is a commonly used command to confirm successful CLI access to an AWS account.
- When using a named profile, it must be explicitly specified: `aws s3 ls --profile iamadmin-general`
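Behind the scenes, `aws configure --profile` writes two plain-text files in `~/.aws/`. A sketch of what they might contain after the steps above (the key values are AWS's documentation placeholders, not real credentials):

```ini
# ~/.aws/credentials  (placeholder values, never commit real keys)
[iamadmin-general]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[profile iamadmin-general]
region = us-east-1
output = json
```

Note the asymmetry: the credentials file uses the bare profile name as the section header, while the config file prefixes it with `profile `.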
- Once the AWS CLI profile is configured correctly, you may safely delete any downloaded CSV files containing access keys.
AWS Fundamentals
Public and Private AWS Services
AWS Networking Ecosystem
- There are three distinct networking zones in AWS:
- AWS Private Zones
These consist of customer-managed private networks, known as Virtual Private Clouds (VPCs).
- VPCs are isolated and private by default
- This isolation applies even between different VPCs.
- External connectivity to a VPC can be explicitly configured:
- Access from on-premises environments using VPN or AWS Direct Connect (DX)
- Connectivity to the AWS public zone through components such as an Internet Gateway (IGW)
- Access to the public internet through an IGW
- Similar to a laptop connected to a VPN, a VPC only communicates externally when such connectivity is intentionally configured.
- AWS private services require a VPC to deploy resources
- For example, Amazon EC2 launches virtual servers (instances) within a VPC.
- VPCs are isolated and private by default
- AWS Public Zone
This is an AWS-managed network that is connected to the public internet.
- It operates between VPCs and the public internet.
- AWS public services run in this zone, such as:
- AWS Identity and Access Management (IAM)
- Amazon Route 53
- Amazon S3
- These services expose public endpoints that are reachable from the internet.
- AWS public services do not require a VPC.
- When a VPC communicates with a public AWS service (for example, Amazon S3), traffic may pass through an IGW while never actually traversing the public internet.
- Public Internet Zone
This represents the traditional global internet, the publicly accessible network of networks.
- AWS Private Zones
- The distinction between AWS private services and AWS public services applies only to networking.
- It is not related to authentication or authorization.
- AWS public services do not provide unrestricted access.
- They simply expose public endpoints.
- Any entity accessing those endpoints must still present valid credentials or meet authorization requirements.
- By default, only the account root user can access public services. Other identities must be explicitly granted permissions.
- AWS Networking Zones

High Availability (HA), Fault-Tolerant (FT) Systems & Disaster Recovery (DR)
High Availability (HA)
- A highly available (HA) system is designed to ensure a specified level of operational performance, usually measured as uptime, over an extended period.
- HA focuses on maximizing system uptime.
- When a component fails, it can be replaced or repaired quickly, often through automation.
- HA systems incur additional costs compared to standard systems due to required redundancy and automation.
- HA focuses on maximizing system uptime.
- Examples include:
- Maintaining a spare physical server to replace a failed primary server.
- Automatic failovers, where a standby or replica server replaces a failed instance.
- Important: HA does not prevent failures or outages, nor guarantee uninterrupted user experience.
- HA systems may experience downtime, but it is shorter than in non-HA systems.
- Users may experience brief disruptions while the system recovers.
- In essence, HA is about rapid and automated recovery from failures.
- Real-life analogy: carrying a spare tire during a desert trip. Changing a flat is a disruption, but it is much faster and safer than waiting for external assistance.
HA Summary:

Fault Tolerance (FT)
- A fault-tolerant (FT) system continues to operate properly even if some of its components fail.
- The system must stop using faulty components and reroute traffic automatically, ensuring uninterrupted operation.
- FT systems are more robust and costly than HA, as they must tolerate failures without downtime.
- Example: a hospital heart monitoring system with redundant servers. Any downtime could endanger lives.
- Real-life analogy: an airplane with redundant engines and electronics must continue flying even if one fails. Repairs cannot occur mid-flight.
FT Summary:

Disaster Recovery (DR)
- Disaster Recovery (DR) is a set of policies, tools, and procedures designed to restore or maintain critical IT systems and infrastructure following a natural or human-induced disaster.
- Essentially, DR addresses what to do if HA and FT fail.
- DR planning involves two phases:
- Pre-disaster planning: preparing for potential system disruptions.
- Post-disaster recovery: restoring access to systems, data, and infrastructure.
- Modern DR processes are highly automated to minimize human error.
- Business continuity (BC) focuses on maintaining operations during a disaster, while DR focuses on restoring IT functionality afterward.
- Effective DR plans require:
- A prioritized plan to protect critical assets for recovery.
- Investment in infrastructure resilience:
- Additional hardware, servers, virtual machines, or instances.
- Backups, stored offsite to prevent loss during a disaster.
- Investment in knowledge:
- Comprehensive documentation of critical procedures, credentials, and resources.
- Staff training, including periodic DR drills or dry runs.
- Real-life analogy: a plane’s ejection system (parachutes) prioritizes human life over replaceable assets.
DR Summary:
Global AWS Infrastructure
AWS Global Network

- AWS infrastructure is distributed globally and organized into collections of interconnected infrastructure, linked with high-speed global networking.
- Ref: AWS Global Infrastructure
- The AWS global network is continuously expanding and evolving.
- Three primary types of AWS infrastructure groupings:
- Regions
- Availability Zones (AZs)
- Edge Locations / Points of Presence (PoPs)
AWS Infrastructure Groupings
AWS Region
- A region is a geographical area with complete AWS infrastructure, including compute, storage, databases, AI, analytics, and more.
- Example: Asia Pacific (Sydney) region, or the `ap-southeast-2` region
- “Asia Pacific (Sydney)” = region name ↔ `ap-southeast-2` = region code
- Regions are not equivalent to countries, states, or continents; they are defined by AWS for infrastructure deployment.
- Regions are interconnected at high speeds, supporting designs that can withstand global disruptions.
- If one region experiences an outage, services can continue in another region.
- Non-global services require specifying a region; global services (e.g., IAM) do not.
Benefits of Regions:
- Geographic separation → Isolated Fault Domain
- Outages in one region do not affect others.
- Geopolitical separation → Compliance & Governance
- Data remains subject to the laws of the country where the region is located.
- Data is not transferred between regions unless explicitly configured.
- Location control → Performance
- Deploying resources closer to users reduces latency and improves performance.
Factors to consider when selecting regions:
- Compliance with data regulations (e.g., EU data residency requirements).
- Latency and performance for end users.
- Service availability (not all services are available in all regions).
- Reference: AWS Regional Services
- Pricing differences across regions, though most services have comparable costs globally.
AWS Availability Zone (AZ)
- AZs are subdivisions within a region, typically 3–6 per region.
- Example: `ap-southeast-2a`, `ap-southeast-2b`, `ap-southeast-2c` in the `ap-southeast-2` region.
- AZs provide isolation within a region, including compute, storage, networking, power, and facilities.
- Services can fail in one AZ but continue operating in others if designed for AZ resilience.
- Important: AZs are not single data centers (DCs).
- Each AZ may contain one or more DCs, each with redundant power, networking, and connectivity, geographically separated to improve resilience.
- High-speed, low-latency connectivity exists between AZs in a region.
- Some services (e.g., VPC) are deployed across multiple AZs for resilience.
AWS Edge Location / Point of Presence (PoP)
- Edge locations are local distribution points for faster and more efficient data transfer.
- Much smaller than regions.
- Useful for applications like Netflix to store content near customers, reducing latency.
- Typically consist of a few racks in third-party data centers.
- Mostly used for storage (e.g., CloudFront caching) with occasional compute resources.
- Supports edge computing scenarios.
- Colloquially, edge locations and PoPs are treated as synonyms, though technically they may differ.
Resilience of an AWS Service
- Global resilience
- Service data is replicated across multiple regions, allowing continued operation even if one region fails.
- Examples: IAM, Route 53.
- Regional resilience
- Service operates within a single region, with data replicated across all AZs in that region.
- Can tolerate individual AZ outages but will fail if the entire region is down.
- Examples: VPC, S3
- S3 buckets must be globally unique, but the data resides in a region, making S3 regionally resilient.
- AZ resilience
- Service operates within a single AZ and is more prone to failure.
- Some services are designed to be highly available within a single AZ.
- Examples: EC2, RDS
Introduction to Amazon S3 (Simple Storage Service)
Amazon S3 – Key Concepts
- Amazon S3 is AWS’s default storage service.
- It provides object storage (not file storage or block storage).
- Stores objects (data) within buckets (containers of objects).
- Ideal for hosting large datasets such as movies, audio, images, text, and unstructured data.
- Cost-effective
- Accessible via AWS Console, CLI, API, or HTTP(S)
- Public service, supports unlimited storage and multiple users.
- Many AWS services use S3 as the default data input/output platform.
- S3 is globally accessible, but data is regionally based.
- Bucket names must be globally unique.
- Data is stored in a specific region and replicated across that region’s AZs.
- As an object store:
- Not a file store – you cannot browse S3 like a traditional filesystem. Use Amazon EFS or Amazon FSx for file storage.
- Not a block store – you cannot mount an S3 bucket as a drive (e.g., `K:\` or `/images`). Use Amazon EBS for block storage.
S3 Objects

- Objects are roughly analogous to files but are technically different.
- Components of an object:
- Key – identifies the object within a bucket (e.g., `koala.jpg`), similar to a filename.
- Value – the data or contents of the object, from 0 B up to 5 TB.
- 5 TB is the maximum size for an S3 object.
- Other components: Version ID, metadata, access control list (ACL), and subresources.
- Objects cannot exist outside a bucket.
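The key/value model above can be sketched in plain Python — a toy, in-memory stand-in, not the AWS API; the `Bucket` class and names here are invented purely for illustration:

```python
# Illustrative sketch (not the AWS API): modeling S3's object semantics
# with a plain dict. Keys map directly to values in a flat namespace,
# and objects cannot exist outside a bucket.

MAX_OBJECT_SIZE = 5 * 1024**4  # 5 TB, the S3 per-object limit

class Bucket:
    def __init__(self, name):
        self.name = name
        self.objects = {}  # key -> bytes; flat, no directories

    def put_object(self, key, value: bytes):
        if len(value) > MAX_OBJECT_SIZE:
            raise ValueError("S3 objects may not exceed 5 TB")
        self.objects[key] = value

    def get_object(self, key) -> bytes:
        return self.objects[key]

bucket = Bucket("koaladata")
bucket.put_object("koala.jpg", b"\xff\xd8...")  # value can be 0 B up to 5 TB
print(bucket.get_object("koala.jpg"))
```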
S3 Buckets

- S3 bucket = container for S3 objects.
- Created in a specific region, ensuring data sovereignty.
- Can store an unlimited number of objects, making S3 highly scalable.
- Bucket names must be globally unique across all regions and accounts.
- Example: `koaladata`
- ARN example: `arn:aws:s3:::koalacampagin13333337`
- Bucket naming rules:
- 3–63 characters, all lowercase, no underscores
- Must start with a lowercase letter or number
- Cannot be formatted like an IP address (e.g., `1.2.3.4`)
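The naming rules above can be checked with a short stdlib helper. This is a sketch covering only the rules listed here, not AWS's full validator:

```python
import re
import ipaddress

def is_valid_bucket_name(name: str) -> bool:
    """Check the core S3 bucket naming rules: 3-63 characters,
    lowercase letters/digits (plus hyphens and dots), starting and
    ending with a letter or digit, no underscores."""
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name):
        return False
    # Must not be formatted like an IP address (e.g., 1.2.3.4).
    try:
        ipaddress.ip_address(name)
        return False
    except ValueError:
        return True

print(is_valid_bucket_name("koaladata"))   # True
print(is_valid_bucket_name("1.2.3.4"))     # False (IP-formatted)
print(is_valid_bucket_name("My_Bucket"))   # False (uppercase, underscore)
```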
- S3 uses a flat structure – there are no true folders or directories.
- AWS Console may display objects with prefixes as folders, but all objects are stored at the same level.
- Example: A bucket contains objects `flashcards.html`, `index.html`, `notes.html`, and `/images/badges.jpg`. The Console shows `/images` as a folder, but it is actually a prefix, not a directory.
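The "folder" illusion can be sketched by splitting flat keys on a delimiter, roughly the way the S3 API's `Delimiter` option rolls keys up into common prefixes. The helper below is illustrative (leading slashes are omitted from the keys for simplicity):

```python
# Illustrative sketch: deriving a console-style "folder" view from
# S3's flat key namespace by splitting keys on a delimiter.

keys = ["flashcards.html", "index.html", "notes.html", "images/badges.jpg"]

def common_prefixes(keys, delimiter="/"):
    """Group flat keys: keys containing the delimiter are rolled up
    under their prefix, similar to ListObjectsV2 with a Delimiter."""
    top_level, prefixes = [], set()
    for key in keys:
        head, sep, _ = key.partition(delimiter)
        if sep:
            prefixes.add(head + delimiter)  # looks like a folder, is a prefix
        else:
            top_level.append(key)
    return top_level, sorted(prefixes)

print(common_prefixes(keys))
```

All four objects live at the same level in the bucket; only the display groups them.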
- Bucket limits per account:
- Default limit: 10,000 buckets per AWS account.
- Requests beyond 10,000 require AWS support approval.
- This limit affects architecture decisions (e.g., one bucket per user may not scale).
- Buckets are private by default.
- AWS includes a public access block by default, preventing accidental public exposure.

- Disabling the block does not make the bucket public automatically; explicit configuration is still required.
Introduction to Amazon VPC (Virtual Private Cloud)
Amazon VPC – Key Concepts
- Amazon VPC allows you to create and manage private virtual networks inside an AWS account.
- A Virtual Private Cloud (VPC) is a virtual private network within AWS.
- VPC CIDR defines the IP address range for the VPC (e.g., `172.31.0.0/16`).
- Most AWS services and resources, particularly private services, run inside VPCs (e.g., EC2 instances).
- VPCs exist within a single AWS account and a single region, making them regionally resilient.
- VPCs can create subnets, which are smaller network segments in different Availability Zones (AZs) of the region.
- If one AZ fails, the VPC can continue to operate in the other AZs.
- Subnet CIDR is a portion of the VPC CIDR and cannot be modified once configured.
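How a VPC CIDR carves into subnet CIDRs can be shown with the stdlib `ipaddress` module, using the Default VPC's `172.31.0.0/16` range and the `/20` subnet size mentioned later:

```python
import ipaddress

# A subnet CIDR is a slice of the VPC CIDR: a /16 VPC range splits
# into /20 subnets, one of which can be placed in each AZ.
vpc_cidr = ipaddress.ip_network("172.31.0.0/16")
subnets = list(vpc_cidr.subnets(new_prefix=20))

print(len(subnets))      # a /16 yields 16 possible /20 subnets
for subnet in subnets[:3]:
    print(subnet)        # 172.31.0.0/20, 172.31.16.0/20, 172.31.32.0/20
```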
- Types of VPCs:
- Default VPC (0–1 per region)
- Preconfigured by AWS with a static setup.
- Custom VPCs (0+ per region)
- Fully configurable and private by default.
- Custom VPCs are private and isolated by default.
- They cannot communicate with any external networks, including other VPCs, unless explicitly configured.
- Communication with external entities requires additional configuration:
- Other VPCs
- On-premises networks in hybrid setups
- Other cloud platforms in multi-cloud deployments
- Public internet
- The Default VPC does not follow this strict privacy rule.
Default VPC
- Automatically created by AWS with a standard, pre-configured setup.
- Predictable and useful for quick testing.
- Less flexible, so not ideal for production environments.
- Default VPC CIDR is always `172.31.0.0/16`.
- Each region can have 0–1 Default VPC.
- Can be deleted and recreated.
- Creating a VPC manually always creates a Custom VPC, not a Default VPC.
- Some AWS services assume the Default VPC exists, so it is usually recommended to keep it.
- A /20 subnet is automatically deployed in each AZ in the region.
- Example: `us-east-1` (N. Virginia) has 6 AZs, so the Default VPC has 1 subnet per AZ.
- Preconfigured with:
- Internet Gateway (IGW)
- Security Groups (SG)
- Network ACLs (NACL)
- By default, resources in the Default VPC are assigned public IPv4 addresses, making them accessible from the public internet.
- Unlike Custom VPCs, the Default VPC is not private or isolated by default.

Amazon EC2 101: Basics of Elastic Compute Cloud
Amazon EC2 – Key Concepts
- Amazon EC2 is AWS’s default compute service.
- It is an IaaS (Infrastructure-as-a-Service) offering, where the operating system (OS) is the primary unit of consumption.
- Customers provision instances, which run on physical EC2 hosts.
- Instances are also called virtual machines (VMs) or virtual servers (VSs).
- Customers manage instances, while AWS manages the physical hosts (exception: dedicated hosts managed by customers).
- Instances run inside VPC subnets, making EC2 private by default and resilient within an AZ.
EC2 Instances
- An EC2 instance is essentially a virtual machine (VM).
- Customers provision the OS and can configure the runtime environment (RTE), databases, and applications inside the instance.
- Instance size and capabilities are defined at launch, though some features can be adjusted after launch.
- Billing model: On-Demand, charged per second for the resources used.
- Networking: Instances are deployed in VPC subnets. Public access must be explicitly configured.
- Storage options:
- Local block storage on the host (Instance Store)
- External block storage via Amazon EBS (Elastic Block Store)
EC2 Instance States
- The state indicates the condition of an instance. Core states include:
- Running (Active) – billed for CPU, memory, networking, and storage.
- Stopped (Inactive) – billed only for storage; can be restarted.
- Terminated (Deleted) – permanently deleted and cannot be restarted; no further charges apply.
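The lifecycle above can be sketched as a small state machine — an illustrative model, not the AWS API; the state and action names are simplified:

```python
# Illustrative sketch: core EC2 instance lifecycle and the billing
# implications of each state, as described above.

TRANSITIONS = {
    ("stopped", "start"): "running",
    ("running", "stop"): "stopped",
    ("running", "terminate"): "terminated",
    ("stopped", "terminate"): "terminated",
}

BILLED_FOR = {
    "running": {"compute", "storage"},  # CPU, memory, networking, storage
    "stopped": {"storage"},             # only storage while stopped
    "terminated": set(),                # no further charges
}

def apply(state, action):
    if state == "terminated":
        raise ValueError("terminated instances cannot be restarted")
    return TRANSITIONS[(state, action)]

state = apply("stopped", "start")   # -> "running"
state = apply(state, "stop")        # -> "stopped"
print(state, BILLED_FOR[state])
```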

Connecting to EC2 Instances via SSH
- SSH (Secure Shell) is a secure protocol for connecting to Linux instances.
- Uses port 22 and authentication via SSH key pairs (private + public key).
- Private key is downloaded once and stored locally (e.g., `A4L.pem`).
- Public key is stored on the EC2 instance by AWS.
- Once connected, a terminal is available for managing the instance.
- Private key permissions must allow only the owner to read it: `chmod 400 A4L.pem` (Mac/Linux).
- The SSH client refuses to use the key if its permissions are too open.
SSH Connection

Connecting to Older Windows Instances via RDP
- Windows OS versions prior to Win10 do not natively support SSH.
- EC2 instances with older Windows require RDP (Remote Desktop Protocol) on port 3389.
- The EC2 key pair is used to decrypt the local administrator password, which is then used to log in via RDP.
Amazon Machine Image (AMI)
- AMI is an image of an EC2 instance.
- Contains a disk image, kernel, and VM configuration.
- Can be used to launch new EC2 instances.
- Can be created from an existing instance as a snapshot, capturing the OS and installed software.
- Components of an AMI:
- Permissions: define which accounts can use the AMI.
- Public AMI: anyone can launch instances.
- Private AMI: owner-only by default; specific AWS accounts can be allowed.
- Root/Boot Volume: the storage drive that boots the OS (`C:` in Windows, `/` in Linux).
- Block Device Mapping: defines how storage volumes are presented to the OS.
AMI Overview

Introduction to Amazon CloudWatch (CW)
Amazon CloudWatch – Components and Architecture
CloudWatch Components

- CloudWatch collects and manages operational data, providing monitoring and operational management.
- Operational data includes service performance, metrics, logs, and more.
- It is a core support service used by most AWS services.
- CloudWatch is a public AWS service.
Main Components of CloudWatch
- CloudWatch Metrics
- Provides the core metrics service.
- Examples: CPU utilization of an EC2 instance, disk usage of an on-premises server.
- Can collect metrics from AWS services, custom applications, or on-premises systems.
- Some metrics are gathered natively by AWS.
- A CloudWatch Agent is required to collect:
- Non-native AWS metrics (e.g., internal processes in EC2 instances)
- Metrics from outside AWS
- Metrics should be organized and separated to avoid confusion.
- CloudWatch Logs
- Collects logs from AWS services, applications, or on-premises infrastructure.
- Some logs are generated natively; others require the CloudWatch Agent.
- CloudWatch Alarms
- Trigger notifications (via Amazon SNS) or events based on monitored metrics.
- Example: Send an SMS when an EC2 instance’s CPU usage exceeds 90%.
- Billing alarms are also created in CloudWatch, sending notifications when costs exceed a threshold.
- CloudWatch Events (now Amazon EventBridge)
- Integrates with AWS services and scheduled events.
- Generates events that can trigger actions, such as sending notifications.
- Events are generated based on:
- Conditions (e.g., EC2 instance creation or termination)
- Schedules (e.g., specific times or recurring schedules)
Amazon CloudWatch – Key Concepts

- Datapoint = combination of a timestamp and a value.
- Example: CPU usage = 98.3% at 08:45:45 on 2019-12-03.
- Metric = a time-ordered sequence of datapoints.
- Examples: CPU usage, network I/O, disk I/O.
- Metrics are not necessarily tied to a single instance; for example, CPU usage may represent all EC2 instances by default unless filtered.

- Namespace = container for related monitoring data.
- Organizes metrics to avoid clutter.
- Can use any valid naming convention.
- Example: `AWS/` contains all AWS metrics, and `AWS/EC2` contains all EC2-related metrics.
- Dimensions = criteria to separate datapoints of the same metric into different perspectives.
- Example: Within `AWS/EC2`, dimensions may separate metrics for Instance A and Instance B.
- Dimensions are flexible and powerful for filtering and analysis.
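Datapoints, metrics, namespaces, and dimensions fit together as shown in this illustrative sketch — plain dicts with made-up instance IDs and values, not the CloudWatch API:

```python
# Illustrative sketch: how CloudWatch organizes monitoring data.
# Each datapoint carries a namespace, metric name, dimensions,
# a timestamp, and a value.

datapoints = [
    {"namespace": "AWS/EC2", "metric": "CPUUtilization",
     "dimensions": {"InstanceId": "i-instanceA"},
     "timestamp": "2019-12-03T08:45:45Z", "value": 98.3},
    {"namespace": "AWS/EC2", "metric": "CPUUtilization",
     "dimensions": {"InstanceId": "i-instanceB"},
     "timestamp": "2019-12-03T08:45:45Z", "value": 12.1},
]

def query(datapoints, namespace, metric, **dimensions):
    """Filter datapoints by namespace, metric name, and dimensions."""
    return [
        dp["value"] for dp in datapoints
        if dp["namespace"] == namespace
        and dp["metric"] == metric
        and all(dp["dimensions"].get(k) == v for k, v in dimensions.items())
    ]

# Without a dimension, values for all instances come back together;
# an InstanceId dimension narrows the view to one instance.
print(query(datapoints, "AWS/EC2", "CPUUtilization"))
print(query(datapoints, "AWS/EC2", "CPUUtilization", InstanceId="i-instanceA"))
```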

- Alarms perform actions when a metric reaches a specified threshold.
- Example: Send a notification when costs exceed a budget.
- States:
- `INSUFFICIENT_DATA` – initial state
- `OK` – metric is within the threshold
- `ALARM` – threshold exceeded; alarm triggers actions
- Notifications are sent using Amazon SNS when the state is `ALARM`.
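The three alarm states can be sketched as a simple evaluation function — an illustrative simplification (real CloudWatch alarms evaluate periods and datapoint counts, not just the latest value), reusing the 90% CPU threshold from the example above:

```python
# Illustrative sketch: deriving an alarm's state from a metric's
# datapoint values against a threshold.

def alarm_state(values, threshold=90.0):
    """Return the alarm state for a sequence of datapoint values."""
    if not values:
        return "INSUFFICIENT_DATA"  # initial state, nothing observed yet
    if values[-1] > threshold:
        return "ALARM"              # threshold exceeded -> trigger actions
    return "OK"                     # metric is within the threshold

print(alarm_state([]))              # INSUFFICIENT_DATA
print(alarm_state([42.0, 55.5]))    # OK
print(alarm_state([42.0, 95.0]))    # ALARM -> e.g., notify via SNS
```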
Introduction to AWS CloudFormation (CFN)
IaC Basics and AWS CloudFormation
- IaC (Infrastructure as Code)
- Enables creation, updating, and deletion of infrastructure using code or templates.
- Code and templates are consistent and repeatable, which:
- Reduces human errors
- Speeds up provisioning and deletion compared to manual methods
- AWS CloudFormation (CFN)
- AWS’s official IaC service.
- Templates are written in YAML or JSON to interact with AWS infrastructure.
- External IaC tools like Terraform or AWS CDK are popular; they translate into CFN templates to manage AWS resources.
CFN Templates – Components and Examples
- By default, templates uploaded via the console are stored in an S3 bucket whose name starts with `cf-templates-`.
- Do not confuse CFN (CloudFormation) with CF (CloudFront).
- Resources – AWS resources to create, update, or delete.
- Examples: VPCs, S3 buckets, EC2 instances.
- Mandatory component – a template without resources does nothing.
- Resources in templates are logical resources, not physical.
- AWSTemplateFormatVersion – Version of the template.
- Description – Optional text explaining the template.
- Must appear after `AWSTemplateFormatVersion` if included.
- Metadata – Defines how the template appears in the AWS console.
- Parameters – Fields prompting users to provide required input values.
- Mappings – Key-value pairs used for lookups within the template.
- Conditions – Define criteria for resource creation (e.g., create a resource only if in a PROD environment).
- Outputs – Messages or values returned when the template is applied (e.g., “EC2 instance created”).
- Intrinsic Functions – Built-in functions used in templates:
- `!Ref` – Reference a parameter or another resource.
- `!GetAtt` – Retrieve a specific attribute from a resource.
- Note: `LatestAmiId` is not an intrinsic function; it is a common parameter name whose value is resolved from an SSM public parameter to fetch the most recent AMI in a region.
- Template Examples (YAML & JSON): CFN Template Examples
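The components above fit together as in this minimal sketch; the resource name `Instance` and the instance type are placeholders chosen for illustration:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example - one EC2 instance from a parameterized AMI
Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref LatestAmiId
      InstanceType: t2.micro
Outputs:
  InstancePublicIp:
    Value: !GetAtt Instance.PublicIp
```

`Resources` is the only mandatory section; everything else here demonstrates the optional components described above.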
CFN Stacks

- A CFN template contains logical resources and other components.
- CFN Stack – The active representation of all resources defined in a template.
- Created from a template.
- Can be executed in an AWS account to create, update, or delete infrastructure.
- Each logical resource in the stack corresponds to a physical resource in AWS.
Syncing Logical and Physical Resources
- Physical Resource – Exists in AWS infrastructure and is visible in the console.
- Example: Running EC2 instance with ID `i-1234567890abcdef0`.
- Logical Resource – Defined in CFN templates and stacks.
- Does not exist outside of CFN templates/stacks.
- Includes a type (e.g., `AWS::EC2::Instance`) and properties (e.g., `ImageId`, `KeyName`).
- Can be synced to the corresponding physical resource.
- CloudFormation’s Role
- Keeps logical and physical resources synchronized.
- Automates infrastructure management.
- Allows approvals before committing changes.
- Enables quick deployment of one-off resources.
- Deleting a stack removes both logical and physical resources, ensuring automatic cleanup.
- Widely used in labs and demos for SAA-C03 preparation.

Introduction to AWS Lambda
AWS Lambda – Key Concepts
- Function-as-a-Service (FaaS) – designed for short-lived, focused code execution.
- A Lambda function is the unit of code executed by AWS Lambda.
- “A Lambda” is commonly used to refer to “a Lambda function.”
- Each function must specify a Runtime Environment (RTE) (e.g., Python 3.8) before execution.
- Memory is directly configured, while vCPU is indirectly determined based on memory.
- When triggered, the function executes in the selected RTE.
- Billing is based only on the compute consumed during execution.
- Ideal for serverless and event-driven architectures (EDA).
- Cost-effective: the first million invocations are free under AWS Free Tier, and subsequent invocations are inexpensive.
AWS Lambda – Architecture

- A Lambda function consists of code, configuration, and runtime package.
- Includes programming language, deployment package (downloaded and executed at runtime), and allocated resources.
- Colloquially, “Lambda” may refer only to the code, but the function is more than just the code.
- Supported Runtimes: Python, Ruby, Java, Node.js, and more.
- Lambda Layers allow custom runtimes (e.g., Rust via community support).
- Invocation Behavior:
- Each invocation creates a new RTE. Code is loaded, executed, and terminated.
- Subsequent invocations usually start with a fresh RTE, although some configurations can reuse RTE components.
- Functions are stateless; no data persists between invocations.
- Docker Considerations:
- Traditional Docker is considered an anti-pattern for Lambda.
- Lambda supports Docker images, but these are specialized for Lambda, not standard containerized environments.
- Resource Allocation:
- Memory: 128MB–10240MB (1MB increments).
- vCPU: 1 vCPU per 1769MB of memory (scales linearly with memory).
- Temporary disk: 512MB mounted at `/tmp`, scalable up to 10240MB; data is ephemeral and should not be relied on between invocations.
- Timeout: Maximum 900 seconds (15 minutes). Functions requiring longer execution should use services like AWS Step Functions.
- Execution Role: IAM role that governs permissions and security for the function.
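The memory-to-vCPU relationship above can be expressed as a one-line calculation; this helper is illustrative, not an AWS API:

```python
# Illustrative sketch of Lambda's resource model: memory is configured
# directly (128 MB - 10,240 MB) and vCPU scales linearly with it at
# roughly 1 vCPU per 1,769 MB.

def lambda_vcpus(memory_mb: int) -> float:
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be 128-10240 MB")
    return memory_mb / 1769

print(round(lambda_vcpus(1769), 2))   # 1.0 -> one full vCPU
print(round(lambda_vcpus(10240), 2))  # ~5.79 vCPUs at maximum memory
```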
AWS Lambda – Common Use Cases
- Serverless Applications: e.g., S3 + API Gateway + Lambda
- File Processing: e.g., watermarking images uploaded to S3
- Database Triggers: e.g., DynamoDB Streams invoking Lambda on data changes
- Scheduled Tasks: Using EventBridge or CloudWatch Events to run periodic functions
- Real-time Data Processing: e.g., Kinesis Data Streams triggering Lambda functions
Demo: Create and Execute a Lambda Function
- Deploy CloudFormation Stack to spin up two EC2 instances:
- Create Execution Role in IAM or during Lambda creation.
- Example JSON for EC2 start/stop permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Start*",
        "ec2:Stop*"
      ],
      "Resource": "*"
    }
  ]
}
```

- Create Lambda Function:
- Provide a name and select runtime (e.g., Python 3.9).
- Assign the execution role created in step 2.
- Add Code to the function:
Stop EC2 Instances Example:

```python
import boto3
import os

region = 'us-east-1'
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    instances = os.environ['EC2_INSTANCES'].split(",")
    ec2.stop_instances(InstanceIds=instances)
    print('Stopped instances: ' + str(instances))
```
- Set Environment Variables:
- Include `EC2_INSTANCES` with instance IDs (comma-separated).
- Include
- Test the Function:
- Click “Test” in the console; verify EC2 instances stop.
- After the function executes successfully, output will be displayed in the console and the EC2 instances will be stopped. Confirm this by checking the EC2 console.
- Create another function using the same approach to start the EC2 instances. Run or test the function and verify in the EC2 console that the instances have started.
Start EC2 Instances Example:

```python
import boto3
import os

region = 'us-east-1'
ec2 = boto3.client('ec2', region_name=region)

def lambda_handler(event, context):
    instances = os.environ['EC2_INSTANCES'].split(",")
    ec2.start_instances(InstanceIds=instances)
    print('Started instances: ' + str(instances))
```
- Clean-up: delete the created functions, then delete the CloudFormation stack.
Introduction to Amazon Route 53 (R53)
Amazon Route 53 (R53) – Core Concepts
- DNSaaS (DNS as a Service): an AWS-managed DNS offering
- A global service
- Uses a single database that is replicated and accessible across all regions
- Designed to be globally resilient
- No region selection is required in the AWS console
- Two primary features:
- R53 Registered Domains
- Route 53 can function as a domain name registrar
- R53 Hosted Zones
- Route 53 can also act as a DNS hosting provider
- Note that in addition to domain registration and renewal costs, there are charges for maintaining hosted zones
R53 Registered Domains

- Route 53 maintains relationships with major TLD registries (such as `.com`, `.io`, `.net`)
- For example, PIR (Public Interest Registry) manages the `.org` TLD
- Process for registering a new domain (for example, `animals4life.org`):
- Route 53 checks whether the domain name is available
- If available, the customer agrees to the terms and purchases the domain through Route 53
- Route 53 creates a ZoneFile, which stores the domain’s DNS data
- Route 53 assigns AWS-managed name servers for the DNS zone
- Always four name servers
- A hosted zone is created
- The ZoneFile is stored across the four assigned name servers
- Entries are created in both Registered Domains and Hosted Zones referencing these servers
- Route 53 communicates with the TLD registry (for example, PIR for `.org`)
- The TLD’s NS records are updated to point to the Route 53 name servers
- These four servers become authoritative for the domain
- Registering a domain is not required to complete CLF-C02 or SAA-C03 coursework. You can simply observe the demonstrations. However, for projects such as the Cloud Resume Challenge, owning a domain is recommended and often necessary.
- Transfer lock (enabled by default)
- Prevents the domain from being transferred out of Route 53
- If a hosted zone is deleted and recreated, the name server records in Registered Domains must be updated to reference the new servers
- Failure to do so will cause DNS resolution issues
R53 Hosted Zones
- Route 53 stores DNS zones across four AWS-managed name servers
- These servers hold DNS records (RRSETs)
- Network visibility options:
- Public Hosted Zones
- ZoneFile is publicly accessible
- Part of the global DNS system
- Reachable from the public internet
- Private Hosted Zones
- ZoneFile is private
- Associated with specific VPCs
- Accessible only within those VPCs
- Commonly used for internal or sensitive DNS records
- Costs:
- Monthly fee for each hosted zone
- Small charge per DNS query
- Query costs can become significant for high-traffic environments and should be monitored