AWS Solutions Architect Associate Exam Study Notes

I took the exam for the AWS Solutions Architect Associate certification on December 6, 2021.

Studying

I started studying using A Cloud Guru. I already had a membership due to their acquisition of Linux Academy. Just by coincidence I ran across /r/AWSCertifications and saw that people were having success with Adrian Cantrill’s class. I heard it was way more in depth, and since I was thinking I might want to do the Professional cert later, the extra depth would come in handy. It is much better than the ACG material. I don’t think I would have been able to pass using ACG alone. In addition, Adrian’s labs are head and shoulders above any of the other offerings in both quality and quantity. Most sections of material have multiple labs, sometimes four or five. You start off creating two AWS accounts with MFA, budgets, and alarms set up so you’re not worried about spending a bunch of money or getting hacked. That’s a great first impression to make.

Besides the classes I also read a lot of AWS documentation, including the FAQs for all of the services. If there was anything I had a question about, whether or not it was covered in one of the lectures, I read up on it in the documentation. AWS has amazing documentation. I work in software so I know how hard it is to write good technical documentation, and AWS’ is impressive.

My approach to practice tests is the more the better, so I did a bunch of them:

The Digital Cloud (Neal Davis) and Tutorials Dojo exams are roughly the same in terms of problem complexity, but Tutorials Dojo has a much broader set of questions. Sets 1 through 6 were all unique questions. The final test was a sampling of questions from the others. Each Digital Cloud exam I took had more repeat questions – from a third to a half of the questions were repeats. The Pearson practice test was really bad. It had old questions and obviously wrong answers. My only complaint about the TD exams was that they were almost too broad. For example, questions about CodeDeploy. There was a question about de-duplicating messages from a standard SQS queue and one of the answers was “Replace Amazon SQS and instead, use Amazon Simple Workflow service.” What the heck is that? (After reading about SWF I was amazed that was an answer). My only complaint about Adrian Cantrill is that he can be unprofessional in his interactions on his Slack. His lectures are outstanding. When I’m ready to tackle the SA Pro cert I will be going straight to his class. He has a gift for explaining concepts and his excitement for the material shines through.

Taking the exam

I finally had enough of studying and scheduled the exam. I took the Pearson online version since I’ve had experience with PSI taking exams for some Kubernetes certs. I definitely prefer Pearson.

It was harder than I expected. There was stuff that I didn’t know. For example what’s the difference between Compute Savings Plans and EC2 Instance Savings Plans? What? Also details about encryption at rest using SNS and SQS. There were a lot of multi-regional questions. DynamoDB a few times, API Gateway, a number of Aurora questions, RDS, S3, EC2, ELB, CloudFront. Make sure you know all of that stuff backwards and forwards.

Whether or not I’ll actually go for the Professional cert – ask me in six months when I start getting itchy for a new challenge 🙂

Notes

If there is anything that is glaringly incorrect please let me know in the comments. However, I’m not planning on keeping it up to date as AWS adds more functionality and the exam changes.

API Gateway

  • Managed API endpoints
  • Public service
    • Integrated with CloudFront and WAF
    • Also have Regional Endpoints not tied to CloudFront
  • APIs are versioned
  • Can send events to Lambda
  • Can proxy AWS services like DynamoDB
  • Publish REST APIs to AWS Marketplace and monetize
  • Generate documentation and SDKs
  • Automatically scales to handle the amount of traffic
  • Can be used directly for serverless architecture or for evolving to one
    • For example
      • First put API gateway in front of your monolith running in EC2 or on-prem
      • Next replace the monolith with microservices + RDS
      • Finally replace micro services with Lambda + DynamoDB
  • Secure front door for external communication from the internet
    • Everything is HTTPS
      • Integrates with ACM
      • Can use custom domain
    • Handles authentication + authorization
      • Signed requests
        • Signed using API’s SDK
      • Cognito
      • Lambda authorizers
      • IAM
    • Automatic DDoS protection
      • Layer 4 and 7
    • Optional WAF
    • Optionally provide client certificate to backend
    • API calls and all control plane activity logged to CloudTrail
  • Charged for 
    • # of API calls
    • Data transfer
    • Caching
    • Websockets: per message + connection time
  • API types
    • All support authz
    • HTTP APIs
      • APIs that proxy to Lambda, DynamoDB, SNS or EC2 applications
      • No management functionality
      • Large scale workloads
      • Latency sensitive workloads
    • REST APIs
      • Add on management functionality like
        • Usage plans
          • Throttling / quotas
        • API Keys
        • Caching
        • Publishing / monetizing
    • Websocket APIs
      • Persistent bidirectional communication
      • API Gateway manages the connection to the client and calls the backend on events
        • Messages, connection
      • Backend doesn’t need to have any Websockets-specific logic
      • Backends can be Lambda, Kinesis, custom, etc
      • Callback URL is generated for each new client
        • Backend can use this to send messages to the client, disconnect it, etc
  • Components of an API
    • Resource
      • Typed object
      • Can have an associated data model
      • Can have relationships to other resources
    • Method
      • HTTP method like GET, POST, etc
    • Route
      • Combination of method and URL path
    • Stage
      • Environment like Development or Production
      • Has variables, like environment variables
      • Each stage has its own invoke URL
    • Resource policy
      • Defines who can use the API and from where
  • Swagger / OpenAPI
    • Can be used to define APIs or document them
  • Private APIs called from a VPC
    • Create VPC Interface endpoint
      • Service: com.amazonaws.region.execute-api
    • Each endpoint can access multiple APIs
    • Can also use Direct Connect to access private APIs
    • To restrict and grant access to your VPC create a Resource Policy with aws:SourceVpc and aws:SourceVpce conditions
  • Private integrations calling a VPC
    • Create Network Load Balancer with target group pointing to VPC backend
      • For example, an ASG
    • Create a VPC link using the API Gateway
      • This is an interface endpoint in the API Gateway’s VPC
      • With the other end being
        • NLB in VPC for REST APIs (uses PrivateLink)
        • ENI in VPC for HTTP APIs (uses VPC-to-VPC NAT)
          • Allows you to connect to any target in VPC
  • Throttling
    • Per API key per method per stage
    • Per API key
    • Per method
    • Per region per account
  • Caching
    • Optionally defined per stage in GB
    • You define cache keys and TTLs
    • Can invalidate using API
  • Request flow
    • Authorize request (optional)
      • Use Cognito, IAM or a custom Lambda authorizer (see the sketch at the end of this section)
      • Throttle requests from a user using API keys
    • Validate request
      • Use Method Requests 
      • Can require certain headers or query strings
      • Can validate the request body using JSON schema (request model)
    • Proxy request (optional)
      • Proxy methods capture HTTP paths to send to the backend
      • Proxy integrations forward request to backend
        • HTTP proxy
        • Lambda proxy
      • Service proxies connect directly to AWS services
        • DynamoDB, Lambda, SNS
    • Transform request
      • Only done if not HTTP/Lambda proxy
      • Use Velocity templates to map request to different format
        • Or use passthrough
    • Error handling
      • Handle errors before reaching backend
      • Can customize error response using simple templates
    • Transform response
      • Only done if not HTTP/Lambda proxy
      • Optionally rewrite status code/message
      • Use velocity templates to map response to different format
    • Final response 
      • API gateway only returns 200 OK by default.  
        • Change this if desired based on status code from previous step
      • Specify a response model for use with Swagger, etc
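
To make the Lambda authorizer step concrete, here is a minimal sketch of a REQUEST-type authorizer for a REST API. The x-api-token header and its expected value are placeholder assumptions; a real authorizer would validate a JWT or look the token up somewhere.

    # Sketch of a REQUEST-type Lambda authorizer (the header/token are hypothetical)
    def handler(event, context):
        token = (event.get("headers") or {}).get("x-api-token")
        effect = "Allow" if token == "expected-token" else "Deny"
        # API Gateway evaluates the returned IAM policy to allow or deny the call
        return {
            "principalId": "example-user",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],  # the method being invoked
                }],
            },
        }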

App Runner

  • Designed for tech-unsavvy customers
  • Provide either
    • Container image
    • Source code repository + build/start commands
  • AWS will run it in their infrastructure
    • Scales automatically
    • No access to physical box (a la Fargate)

AppSync

  • Manage and synchronize mobile app data in real time
  • Allows data to be used on mobile when offline
  • Uses GraphQL to limit the amount of data transferred from AWS to mobile
  • Sources
    • DynamoDB
    • Lambda
    • Elasticsearch
  • Create resolver templates to map data

Autoscaling

  • For EC2
  • Free!
  • Vital for High Availability architectures
    • Spread the ASG across multiple AZs
    • Use ELBs
  • Most important settings
    • Min 
    • Max 
    • Desired – how many instances do you want right now?
      • Min <= Desired <= Max
      • ASG keeps number at desired by launching or terminating instances
  • Linked to a VPC
    • Tries to balance # of instances in each AZ
  • Scaling policies
    • Manual scaling
      • Manually adjust the desired capacity
      • Min/Max not used
    • Dynamic scaling
      • Measure, then act
      • Simple
        • Choose scaling metric and threshold values for CloudWatch alarms
        • Example
          • CPU above 50%: add 2
          • CPU below 50%: remove 2
      • Stepped
        • Allows you to act in a more extreme way on metric values
        • Define ranges of metric values with adjustment values
        • Example
          • Adjustment type: PercentChangeInCapacity
          • Scale out policy
            • 0 ≤ x < 10: no change
            • 10 ≤ x < 20: +10%
            • 20 ≤ x: +30%
          • Scale in policy
            • -10 ≤ x < 0: no change
            • -20 ≤ x < -10: -10%
            • x < -20: -30%
        • Instance warm-up
          • For step policy only
          • Set number of seconds that it takes for a newly launched instance to warm up
          • While it is warming up the newly launched instance is not counted towards the metrics of the ASG
      • Target tracking
        • Strongly recommended by AWS
        • Desired aggregate CPU = 40%
        • EC2 tries to maintain this metric value by launching/terminating instances
        • Metrics
          • SQS queue length (ApproximateNumberOfMessages)
            • But it doesn’t change proportionally to the size of the ASG
            • Better target tracking metric is Backlog Per Instance
              • Acceptable backlog per instance is 
                • Longest acceptable latency / average processing time
              • Current backlog per instance is 
                • Current SQS queue length / current size of ASG
              • Need to publish a custom CloudWatch metric to do this (see the sketch at the end of this section)
          • SQS oldest message (ApproximateAgeOfOldestMessage)
            • Useful when the application has time-sensitive messages and you need to ensure that messages are processed within a specific time period
      • If multiple dynamic scaling policies are used concurrently, autoscaler will choose the option that provides the greatest capacity
    • Scheduled scaling
      • Use if you have a predictable workload
    • Predictive scaling
      • Uses AI to determine when you’ll need to scale based on historical data
      • Every 24 hours it forecasts for the next 48 hours
      • You can override the forecast minimum/maximum capacity using a Scheduled Action
  • Cool down period
    • How long to wait after a scaling action completes
      • Default is 300 seconds (5 min)
    • Avoids thrashing
    • Can create cool down periods that apply to a specific scaling policy
    • Alternatively use a target tracking or step policy
  • Tips
    • Scale out aggressively
    • Scale back conservatively
    • Use Reserved instances for Min count instances
    • Cloudwatch is the tool to alert Autoscaling that you need more or fewer instances
  • Launch Templates
    • Versioned
      • Each version is immutable
    • Contents
      • AMI
      • EC2 instance type
      • Storage
      • Key Pair
      • Security groups
      • Optional network information
      • Optional user data
      • Optional IAM role
    • Provides newer features like Placement Groups, Capacity Reservations, etc.
    • Superset of Launch Configurations
    • Can be used for ASG or to launch one or more EC2 instances
  • Launch Configurations
    • The old-school approach – use Launch Templates
    • Immutable, to change you must create new LC 
    • For ASG only
  • Launch templates define What.  ASG defines When and Where.
    • When: Scaling policy
    • Where: VPC and AZs
  • Bake AMIs to shorten provisioning time
    • Start instance with an existing AMI
    • Install software
    • Create image from instance
  • Load Balancer integration
    • You can attach one or more Target Groups to your ASG to include instances behind an ELB
      • See rules on Attaching EC2 instances below
    • The ELBs must be in the same region
    • Once you do this any EC2 instance existing or added by the ASG will be automatically registered with the LB
  • Health checks
    • Enable terminating and replacing instances that fail health check
    • Types
      • EC2: based on state
        • Unhealthy: Any status other than Running
      • ELB: http health checks done by LB
      • Custom: Use API or CLI to tell Autoscaling an instance is Unhealthy
    • Health check grace period
      • Amount of time to wait after launching an instance before running health checks 
      • Default 300 sec (5 min) from console, zero from CLI/SDK
  • Poor man’s HA
    • Use a max=min=1 ASG aka Steady State Group with multiple AZs
    • If instance fails Autoscaling will recreate it
    • Instance will be re-provisioned in another AZ if AZ fails
  • Scaling processes
    • Can be set to SUSPEND to suspend the process or RESUME for the process to work normally
    • Use Standby if you want to temporarily stop sending traffic to an instance (see below)
    • Processes
      • Launch: launch instances per scaling policy
      • Terminate: terminate instances per scaling policy
      • AddToLoadBalancer: add to LB on launch
      • AlarmNotification: accept notifications from CloudWatch
      • AZRebalance: try to keep number of instances balanced across AZs
      • HealthCheck: run health checks
      • ReplaceUnhealthy: terminate unhealthy instances and replace
      • ScheduledActions: run scheduled actions (which?)
  • Standby
    • You can put an instance that is in the InService state into the Standby state
    • Instances that are on standby are still part of the Auto Scaling group, but they do not actively handle load balancer traffic
    • While in StandBy you can update or troubleshoot the instance
      • Including stop/start/reboot
    • Move the instance back to InService when finished troubleshooting
  • Detach
    • You can remove (detach) an instance from an Auto Scaling group
    • After the instance is detached, you can manage it independently from the rest of the ASG
    • You have the option of decrementing the desired capacity for the Auto Scaling group 
      • If you choose not to decrement the capacity, ASG launches new instances to replace the ones that you detach
  • Instance refresh
    • A way of replacing instances in an automated way
    • For example if the AMI or user-data of the instance needs to change
    • Steps
      • Create a new launch template with the changes
      • Configure
        • Minimum healthy percentage
          • 100 = replace one instance at a time
          • Zero = replace all instances at the same time
        • Instance warmup
          • Amount of time from when an instance comes into service to when it can receive traffic
        • Checkpoints
      • Start refresh
      • EC2 starts a rolling replacement
        • Take a set of instances out of service and terminate them
        • Launch set of instances with desired configuration
        • Wait for health checks and warmup
      • After a certain percentage is replaced a checkpoint is reached
        • Temporarily stop replacing
        • Send notification
        • Wait for amount of time
  • Controlling which instances terminate during scale in or instance refresh
    • By default it will
      • First try to keep the AZs balanced
      • Then within the AZ to terminate from
        • Terminate from the oldest launch template
        • Then terminate instance that is closest to the next billing hour
          • To try to maximize the usage for the billed hour
    • There are other pre-defined policies that you can choose from
    • Or you can create a lambda to implement a custom policy
  • Capacity rebalancing
    • Autoscaler will try to replace a spot instance that is going to be terminated
  • Warm pools
    • A standby pool of instances that are already booted
    • Instances from this pool are grabbed during scale-out
    • Useful for instances that take a long time to boot
  • Maximum instance lifetime
    • Old instances are automatically replaced
  • Lifecycle hooks
    • Can run on launch or on terminate
      • EC2_INSTANCE_LAUNCHING
        • When hook is active, instance state moves from Pending to Pending:Wait
        • During hook wait state do custom initialization, etc
        • After CompleteLifecycleAction, instance state moves to InService
      • EC2_INSTANCE_TERMINATING
        • When hook is active, instance state moves from Terminating to Terminating:Wait
        • During hook wait state troubleshoot instance, send logs to CloudWatch, etc
        • After CompleteLifecycleAction, instance state moves to Terminated
    • Autoscaler waits until it gets a CompleteLifecycleAction API call to continue the launch/termination
      • Or until one hour (default) passes, whichever is shorter
      • Then either CONTINUE or ABANDON (launch only)
        • If ABANDON, the instance will be terminated and replaced
    • Two ways to handle hook
      • Use user-data/cloud-init/systemd/cron on the box
      • Eventbridge/SNS to Lambda/etc
  • Attaching EC2 instances
    • You can add a running instance to an ASG if the following conditions are met:
      • The instance is in a running state.
      • The AMI used to launch the instance still exists.
      • The instance is not part of another ASG
      • The instance is launched into one of the Availability Zones defined in your Auto Scaling group. 
    • The desired capacity of the ASG increases by the number of instances being attached
      • If the number of instances being attached plus the desired capacity exceeds the maximum size of the group, the request fails. 
  • Temporarily removing instances from Autoscaler
    • Put instance into Standby state
    • Install software or do whatever 
    • Put instance back into InService
    • Autoscaler may try to rebalance across AZs when instance is removed
      • You can temporarily disable rebalancing
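
Here is a minimal sketch of publishing the Backlog Per Instance custom metric described under target tracking above. The queue URL, ASG name, and metric namespace are placeholder assumptions; in practice this would run on a schedule (for example, a Lambda triggered by EventBridge).

    # Publish SQS backlog per in-service instance as a custom CloudWatch metric
    import boto3

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/work-queue"  # hypothetical
    ASG_NAME = "worker-asg"  # hypothetical

    sqs = boto3.client("sqs")
    asg = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    def publish_backlog_per_instance():
        # Current queue depth
        attrs = sqs.get_queue_attributes(
            QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
        )
        backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

        # Current number of in-service instances in the ASG
        groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
        in_service = [
            i for i in groups["AutoScalingGroups"][0]["Instances"]
            if i["LifecycleState"] == "InService"
        ]
        count = max(len(in_service), 1)  # avoid division by zero

        # A target tracking policy with a customized metric can track this value
        cloudwatch.put_metric_data(
            Namespace="Custom/Workers",
            MetricData=[{
                "MetricName": "BacklogPerInstance",
                "Value": backlog / count,
                "Unit": "Count",
            }],
        )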

Backup

  • Backs up EBS, EC2 instances, EFS, FSx Lustre, FSx Windows, RDS, DynamoDB, Storage Gateway volumes
  • Single pane of glass for backups of AWS and on-prem data
    • On-prem using Storage Gateway
  • Automated backup scheduling
  • Backup retention management
  • Backup monitoring and alerting
  • Use AWS Organizations to back up across accounts
  • Define lifecycle policies to move backups to cheaper storage over time
  • Ensures encryption
  • Audit Manager reports on compliance to policies
  • Backup Vault Lock
    • Write once, read many for backups
    • Protects backups against deletion

Batch

  • Batch processing system
  • Plans, schedules and executes batch processing workloads
  • Runs on top of ECS
    • EC2, EC2 Spot, Fargate, Fargate Spot
    • EC2 is required for jobs that require GPU, dedicated hosts or EFS
  • Batch chooses where to run the jobs, provisioning capacity as needed
    • When capacity is no longer needed the capacity is removed
  • Multi-node parallel jobs
    • Jobs that span multiple EC2 instances
    • Supports MPI, TensorFlow, Caffe2, Apache MXNet
    • Can be run on a Cluster Placement Group

Budgets

  • Easily plan and set expectations around cloud costs
  • Track ongoing expenditures
  • Set up alerts for when accounts are close to going over budget
    • Alert on current spend or projected spend
  • Budget types
    • Get two free each month
    • Reservation Budgets (for RIs)
    • Usage budgets
      • How much are we using
    • Cost budgets
      • How much are we spending
  • Use Cost Explorer to create fine-grained (tag based) budgets

Certificate Manager

  • aka ACM
  • It’s a free CA
  • Creates certificates
  • Integrates with Elastic Load Balancers, Cloudfront and API Gateway
    • Cannot use certs outside of those services
    • Automatic renewal of certs

Cloud Computing

  • Attributes
    • On-Demand Self-Service
    • Broad Network Access
    • Resource Pooling
    • Rapid Elasticity
    • Measured Service
  • Delivery Models
    • Public cloud 
      • AWS, Google Cloud, Azure
    • Multi-cloud
      • Working across two or more public clouds
    • Private cloud
      • AWS Outposts, Azure Stack, Google Anthos
      • On-prem but with all of the attributes of cloud
        • Not just on-prem legacy VMs plus AWS
    • Hybrid cloud
      • Public and private cloud together
      • Using the same tools to manage public and private 
        • For example, AWS + Outposts
  • Service Models
    • Terms and Concepts
      • Infrastructure stack
        • Application
        • Data
        • Runtime
        • Container
        • Operating System
        • Virtualization
        • Servers
        • Infrastructure
        • Facilities
      • Parts managed by you
      • Parts managed by vendor
    • Models
      • On-Premises
        • Everything managed by you
      • Data Center hosting
        • Facilities provided by vendor
      • IAAS
        • Facilities through Virtualization provided by vendor
        • You consume the Operating System
      • PAAS
        • Facilities through Container
        • You consume the runtime
      • SAAS
        • You consume the application
        • Everything else provided by vendor

CloudFormation

  • Infrastructure as Code
  • Code is called Templates
    • YAML or JSON
  • Sections
    • AWSTemplateFormatVersion
      • Optional, but must come first if present
    • Description
    • Metadata
      • Controls the layout of template data in the AWS Console user interface
    • Parameters
      • Single-valued
      • Prompts the user to enter information in the AWS Console when template is applied
    • Mappings
      • Map of parameters keyed by eg. region
      • Used for creating lookup tables
    • Conditions
      • Booleans set based on parameters and environment
      • Allows resources to be conditionally defined
    • Resources
      • Only mandatory section
      • Called “logical resources”
      • Has a type and zero or more properties
    • Outputs
      • Output variables displayed after template is applied
  • Stack
    • An instance of a template, with physical resources created from the logical resources (see the boto3 sketch at the end of this section)
    • Stacks can be 
      • Created: new template is applied
      • Updated: template is changed and applied
      • Deleted: stack is deleted and its resources are removed
    • A stack is created/updated transactionally – either all of the resources in the stack are created/updated or the entire change is rolled back
  • Can use parameter store
  • Aim for immutable architecture
  • Use Creation Policy attribute when you want to wait for resource to be fully created before moving on
  • A StackSet allows you to create stacks across accounts and regions
  • CloudFormation::Init
    • cfn-init helper script installed on EC2 OS
    • Simple configuration management system
    • Can work on stack updates as well as stack creates
      • Better than user data that runs just once
    • Desired state
      • Packages
      • Groups
      • Users
      • Files
      • Runs commands
      • Manages services
    • In CFN template
      Ec2Instance:
        Type: AWS::EC2::Instance
        CreationPolicy:
          ResourceSignal:
            Count: 1
            Timeout: PT5M
        Metadata:
          AWS::CloudFormation::Init:
            wordpress_install:
              ...
        Properties:
          UserData:
            Fn::Base64: !Sub |
              #!/bin/bash -xe
              /opt/aws/bin/cfn-init -v --stack ${AWS::StackId} --resource Ec2Instance --configsets wordpress_install ...
              /opt/aws/bin/cfn-signal -e $? --resource Ec2Instance ...
    • Creation policy
      • Waits for a signal from resource with success/failure
        • Call to /opt/aws/bin/cfn-signal helper app
      • Doesn’t let resource creation succeed until all inits complete or timeout
      • The only CloudFormation resources that support creation policies are 
        • AWS::AutoScaling::AutoScalingGroup
        • AWS::EC2::Instance
        • AWS::CloudFormation::WaitCondition.
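
From the SDK side, Parameters are passed explicitly instead of being prompted for in the console, and Outputs appear on the stack description. A minimal boto3 sketch, assuming a hypothetical template.yaml with an InstanceType parameter:

    # Create a stack from a local template and print its Outputs
    import boto3

    cfn = boto3.client("cloudformation")

    with open("template.yaml") as f:  # hypothetical template file
        body = f.read()

    cfn.create_stack(
        StackName="demo-stack",
        TemplateBody=body,
        Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t3.small"}],
    )

    # Stack creation is transactional; wait for success or rollback
    cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")

    stack = cfn.describe_stacks(StackName="demo-stack")["Stacks"][0]
    for out in stack.get("Outputs", []):
        print(out["OutputKey"], "=", out["OutputValue"])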

CloudFront

  • CDN
    • Lives at edge
    • Speeds up access to content for users on the edge
    • Also works for on-prem origins
  • 20 GB max object size
  • Origins
    • Source location of your content
    • Any HTTP/S with public IPv4 address
    • Types
      • S3
        • Except S3 static website hosting
        • Supports Origin Access Identity (OAI)
        • The viewer protocol is also used when connecting to the origin (no separate origin protocol setting)
      • Custom
        • Any other HTTP server 
          • S3 with static website hosting enabled
          • ELB
          • Web servers running on EC2
          • AWS Elemental MediaPackage / MediaStore
        • Port
        • Custom headers
          • Can use to verify request is from Cloudfront
        • Minimum origin SSL protocol
        • Origin protocol policy
          • HTTP, HTTPS, match viewer protocol
    • Origin Groups
      • For resilience
      • Primary and secondary origins, secondary for failover
    • Origin Failover
      • Failover happens on a per-request basis
      • Only for GET, HEAD and OPTIONS
      • When
        • Specific status codes returned
        • Cannot connect to origin
        • Timeout
      • After trying
        • By default 3 times with 10 second timeout = 30 seconds
      • To the next origin in the Origin Group
  • Distribution
    • Unit of configuration
    • Has a URL like http://d111111abcdef8.cloudfront.net
      • Can use a CNAME or alias for your real domain name
    • Uses one or more origins
    • Contains one or more Behaviors
      • Each Behavior has
        • Path match pattern
        • Origin / origin group
        • Viewer protocol policy
          • Eg. Redirect to HTTPS
        • Allowed HTTP methods
        • Cached HTTP methods
        • Restricted Viewer Access
          • Private Content
          • Use signed URLs or signed cookies
          • Trusted Signers
        • Cache controls
        • Compression
        • TTL
          • Default: 24hrs
          • Minimum / Maximum TTL
            • Limits Cache-Control/Expires headers on objects
              • Set in S3 as object metadata
        • URL query parameters forwarding
        • Cookie forwarding
        • Lambda@Edge association
        • Precedence
          • Priority order compared to other Behaviors
      • Default Behavior has path *
        • Has a Precedence of 0 (lowest)
    • Price class
      • Use all edge locations
      • Use US, Canada, Europe, Asia, Middle East and Africa
      • Use only US, Canada and Europe
    • SSL certificate 
      • See HTTPS below
    • Security policy
      • TLS version
    • HTTP version
    • Can use a WAF ACL
    • Default root (url) object
      • Object at origin to use if requesting /  (eg. index.html)
    • You can block individual countries using geo-restrictions
  • Edge Location
    • Local infrastructure that hosts a cache
    • Mostly (~90%) cache storage
  • Regional Edge Cache
    • Sits in between Origin and Edge Location
    • Serves multiple Edge Locations
    • Edge Location first checks Regional Edge Cache
    • Currently not used for S3 origins
  • HTTPS
    • Cert for default cloudfront.net domain name has CN=*.cloudfront.net
    • Integrates with Certificate Manager for custom domain names
      • ACM Certificate must be created or imported in us-east-1
        • Same for any global AWS service
    • Certificate must be valid cert signed by browser-trusted CA (not self)
    • Choose SNI (Server Name Indication) or use dedicated static IP addresses 
      • SNI should work in most cases
      • Only need dedicated static IPs to support ancient browsers
        • IE6, IE7 on XP, Android 2.3, iOS Safari 3
        • SNI was standardized in 2003
        • $600 per month per distribution for static IP
    • Can add HTTPS for S3 static websites
    • Can require HTTPS by browsers or redirect to HTTPS
      • Viewer Protocol
    • Can optionally require HTTPS from edge to origin (though not S3 origins)
      • If ELB, can use an ACM cert
      • If custom origin must use a valid cert signed by browser-trusted CA (not self)
  • Field level encryption
    • You can specify a set of (up to 10) POST form fields
    • CloudFront encrypts those fields at the edge
    • Using a public key that you store with the distribution
    • The fields stay encrypted all the way to the application
    • The application can decrypt using the private key
  • Custom error responses
    • You can specify an object to use for one or more HTTP status codes
    • Client errors
      • 400, 403, 404, 405, 414, 416
    • Server errors
      • 500, 501, 502, 503, 504
      • If Cloudfront doesn’t get a response from the origin within a timeout it converts that into a 504 (Gateway timeout) status
    • For some 503 errors a custom error page will not be returned
      • Capacity Exceeded / Limit Exceeded
    • If an object is expired but not yet evicted and the origin starts returning 5xx, Cloudfront will continue to return the object
    • It is recommended to use a different origin (eg. S3) for your error pages
      • Otherwise a 5xx could get turned into a 404 because the error page can’t be found
    • You can translate status codes
      • For example, always return 200 status code with custom error pages for 5xx errors
    • Caching errors
      • By default errors will be cached for 10 seconds
      • You can configure it per status code
      • You can also set it per object with cache-control headers
  • Invalidations
    • Performed on a distribution
    • Should be thought of as a way to fix errors, not as an application update mechanism
      • Use versioned filenames instead
    • Submit a path to invalidate
      • Can be single object or use wildcards
      • Use the console or API
      • First 1,000 invalidation paths a month are free per account
  • Private Content
    • “Restrict viewer access” setting
    • Uses trusted key group
      • Group of public keys associated with distribution
    • Signed URLs (see the sketch at the end of this section)
      • For individual files (eg. Installation downloads)
      • Or for clients that don’t support cookies
    • Signed cookies
      • Access to multiple restricted files
      • Or if you don’t want to change URLs
  • Origin Access Identity (OAI)
    • OAI: Token created by Cloudfront
    • OAI only used when accessing bucket through Cloudfront
      • Typically one OAI per Cloudfront distribution used by many buckets
    • To change a bucket so it is only accessible via Cloudfront
      • Set bucket policy with only one ALLOW for OAI
    • Typically used to ensure no direct access to buckets when using private CF distributions (signed URLs/cookies)
  • DDoS protection using AWS Shield
  • Lambda@Edge
    • Lightweight lambdas running on Cloudfront servers
    • Adjust data between viewer and origin
      • Like an interceptor
    • Node.js and Python only currently
    • Layers are not supported
    • Runs in AWS public space (not in VPC)
    • Different limits vs. regular Lambda
      • Function duration
        • 5 seconds: Viewer Request / Response
        • 30 seconds: Origin Request / Response
      • Maximum memory usage
        • 128 MB
      • Max size of code + libraries
        • 1 MB: Viewer Request / Response
        • 50 MB: Origin Request / Response
    • Where Lambda@Edge can run
      • After Viewer Request
      • Before Origin Request
      • After Origin Response
      • Before Viewer Response
    • Use cases
      • A/B testing – Viewer Request
      • Migration between S3 origins – Origin Request
      • Different objects based on device – Origin Request
      • Authentication at edge – Viewer Request
        • See below
    • Authentication at edge using Lambda@Edge
      • User tries to access /private/*
      • If the user is not authenticated they are redirected to Cognito to login
      • Cognito redirects with a JWT in the URL
      • The web browser extracts the JWT from the URL and makes a request to /private/* with an Authorization header with the JWT
      • Lambda@Edge decodes the JWT, verifies the user and the signature on the JWT using the Cognito public key
      • If everything looks good the request is allowed to pass
        • The Authorization header is stripped
      • An Origin Access Identity (for S3) or custom origin headers (for ELB) can be used to reject requests that bypass CloudFront
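
For the signed URL flavor of private content, here is a minimal sketch using botocore’s CloudFrontSigner. The key pair ID, private key file, and distribution domain are placeholder assumptions; the matching public key must already be in a trusted key group on the distribution.

    # Generate a CloudFront signed URL with a canned policy (fixed expiry)
    import datetime
    import rsa  # pip install rsa
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        with open("private_key.pem", "rb") as f:  # hypothetical key file
            key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, key, "SHA-1")  # CloudFront expects SHA-1 RSA signatures

    signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key pair ID

    expires = datetime.datetime.utcnow() + datetime.timedelta(hours=1)
    url = signer.generate_presigned_url(
        "https://d111111abcdef8.cloudfront.net/private/report.pdf",
        date_less_than=expires,
    )
    print(url)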

Cloud Map

  • Service discovery service
  • Integrates with ELB, ECS, EKS for auto registration
  • Also has an API for registration
  • Integrates with Route53 for health checking and service publishing to DNS
  • Also has an API for more complex querying and non-IP-address targets (ARNs or URLs)

Cloudtrail

  • Logs all API calls made from your account
    • Each API call is a CloudTrail Event
  • 90 days of event history stored by default for free (queryable with the LookupEvents API; see the sketch at the end of this section)
    • You can also create one Trail in each region for Management events for free
  • To customize create one or more Trails
    • Trail logs can be stored in S3 and/or Cloudwatch Logs
      • Cloudwatch Logs much better for querying
    • By default only logs Management Events
      • Control plane actions
    • Data plane actions are too numerous to log by default
      • Costs extra if enabled
        • $0.10 per 100,000 events
      • Can select Read events, Write events or Both
      • Examples
        • S3 object-level API (GetObject, DeleteObject, PutObject)
        • Lambda function execution (the Invoke API)
        • DynamoDB object API (PutItem, DeleteItem, UpdateItem)
  • Cloudtrail is regional
    • But can create an “All Region” trail
    • Global services like IAM, STS, Cloudfront are either logged to
      • The region the event was generated in
      • us-east-1
      • Global services event logging must be enabled on a trail
  • Not real-time
    • Generally 15 minute delay
  • Organization Trail
    • Created from Management account
    • Captures events from all accounts in the organization
    • Trail will be created in every account with the name of the organization trail
      • Users in member accounts will be able to view the trail but not change or delete it
  • Encrypted by default with SSE
  • Validating CloudTrail log file integrity
    • When enabled, CloudTrail creates a SHA-256 hash for every log file
    • Every hour a digest file is created including the log files and hashes
    • The digest is signed with the private key part of a key pair
    • The digest can be validated with the public key using the AWS CLI
    • Digests are put into a separate directory of the same bucket containing the log files
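
The free 90-day event history can be queried without creating a Trail. A minimal sketch; the EventName filter is a placeholder assumption:

    # Look up recent TerminateInstances calls in the 90-day event history
    from datetime import datetime, timedelta
    import boto3

    cloudtrail = boto3.client("cloudtrail")

    resp = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}
        ],
        StartTime=datetime.utcnow() - timedelta(days=7),
        EndTime=datetime.utcnow(),
        MaxResults=50,
    )
    for event in resp["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username"))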

Cloudwatch

  • Metrics on AWS services, your applications in AWS or on-prem
  • Standard metrics delivered every five minutes (free)
    • Detailed is one minute ($)
  • Namespace
    • Container for related metrics
    • AWS namespaces look like
      • AWS/EC2
  • Datapoint
    • A single measurement for a metric
    • Timestamp + Value + Dimensions
    • Dimensions
      • Key / Value pairs, for example
        • InstanceId=i-12345abc
        • InstanceType=t3.small
  • AWS cannot see past the hypervisor
    • By default only the following metrics are available without an agent installed
      • CPU utilization
      • Disk read/write ops/bytes
      • Network in/out packets/bytes
    • To get system-level metrics for EC2 instances, install the CloudWatch Agent
      • See Cloudwatch Logs below
  • Alarms
    • Based on metric thresholds, move to OK or ALARM state (see the sketch at the end of this section)
    • Depending on state do action, for example
      • Send SNS notification
      • Send event
      • Trigger autoscaling
      • Stop, terminate, reboot, or recover an EC2 instance
        • Need to create service-linked role so Cloudwatch can do the action
    • No default alarms
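
A minimal sketch of creating one of these alarms with boto3; the instance ID and SNS topic ARN are placeholder assumptions:

    # Alarm when average CPU of one instance stays above 80% for two periods
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-i-12345abc",
        Namespace="AWS/EC2",                       # AWS namespace (see above)
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-12345abc"}],
        Statistic="Average",
        Period=300,                                # standard 5-minute metrics
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],  # hypothetical
    )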

Cloudwatch Events

  • See EventBridge

Cloudwatch Logs

  • Regional service
  • Not real-time
  • All logs should either go to Cloudwatch or S3
  • Log Event == one log record
    • Timestamp + message
  • Log Stream == all log events from a single source
    • For example, for /var/log/messages from specific host
  • Log Group == group of log streams
    • Configuration settings stored here, like retention settings or metric filters
  • Metric Filter Pattern
    • Look for certain terms and create metrics and potentially alarms
  • Cloudwatch Logs Insights
    • Run interactive queries against logs (see the sketch at the end of this section)
    • Purpose-built query language with simple commands
    • Automatically discovers fields from logs from AWS services
    • One request can query up to 20 log groups
    • You can save queries to run later
  • Install Cloudwatch Logs Agent on servers to ship logs to AWS
    • Can be EC2 or on-prem
    • Binary installed on instances 
    • Supported on most common operating systems
    • Agent config wizard will store config in Parameter Store by default
    • Attach instance profile with access to Cloudwatch Logs
      • And Parameter Store if used
    • Create a log group per log file
  • Some services act as a source for Cloudwatch logs
    • EC2, VPC Flow Logs, Lambda, Cloudtrail, Route53, etc.
  • Can also use AWS SDK to log directly from applications
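
A minimal sketch of running a Logs Insights query through the SDK; the log group name and the ERROR filter are placeholder assumptions:

    # Run a Logs Insights query and print the results
    import time
    from datetime import datetime, timedelta
    import boto3

    logs = boto3.client("logs")

    query_id = logs.start_query(
        logGroupName="/var/log/messages",  # hypothetical log group
        startTime=int((datetime.utcnow() - timedelta(hours=1)).timestamp()),
        endTime=int(datetime.utcnow().timestamp()),
        queryString=(
            "fields @timestamp, @message"
            " | filter @message like /ERROR/"
            " | sort @timestamp desc"
            " | limit 20"
        ),
    )["queryId"]

    # Poll until the query finishes
    while True:
        result = logs.get_query_results(queryId=query_id)
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            break
        time.sleep(1)

    for row in result["results"]:
        print({field["field"]: field["value"] for field in row})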

CodeDeploy

  • Automate application deployment to EC2, ECS, Lambda or on-premises
  • Deploy code, lambda functions, web artifacts, config files, executables, scripts, etc
  • Deploy from S3, source control systems
  • Integrates with CI/CD
  • Supports
    • In-place rolling updates (EC2 only)
    • Blue-Green with optional Canary
    • Automated or manual rollbacks

Cognito

  • Authentication, authorization and user management for web/mobile apps
  • Scales to a nearly unlimited number of users
  • User pools
    • User database for your web/mobile app
    • Sign in and get a JSON web token (JWT)
      • Customizable web UIs for login, registration, forgot password
      • Also supports social sign-in via Facebook, Google, etc + OIDC + SAML
    • MFA and other security features
      • Adaptive authentication to predict when you might need another authentication factor
    • These are app-specific users only – no relation to AWS IAM identities
    • API gateway can accept JWTs to trigger Lambdas
  • Identity pools
    • Swap an external identity for temporary, limited-privilege AWS credentials (see the sketch at the end of this section)
    • Define roles in Identity pool for the access required
    • Web Federated identities using 
      • Web identity provider (Google, Facebook, etc)
      • OAuth/OIDC (Okta, etc)
      • SAML (Active directory, Okta, etc)
      • Cognito User Pools
      • Unauthenticated guest users
  • To simplify, use only a User Pool as the provider to Identity Pool
    • Federation can happen on the User Pool side
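
A minimal sketch of the User Pool to Identity Pool exchange; the pool IDs and the ID token value are placeholder assumptions:

    # Swap a User Pool JWT for temporary AWS credentials via an Identity Pool
    import boto3

    identity = boto3.client("cognito-identity", region_name="us-east-1")

    provider = "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE"  # hypothetical pool
    id_token = "eyJraWQi..."  # ID token (JWT) returned by a User Pool sign-in

    # 1. Get (or create) an identity in the pool for this user
    identity_id = identity.get_id(
        IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # hypothetical
        Logins={provider: id_token},
    )["IdentityId"]

    # 2. Exchange the identity + token for temporary, limited-privilege credentials
    creds = identity.get_credentials_for_identity(
        IdentityId=identity_id,
        Logins={provider: id_token},
    )["Credentials"]

    # Use them like any other credentials, scoped by the Identity Pool's IAM role
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretKey"],
        aws_session_token=creds["SessionToken"],
    )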

Config

  • Used to define and enforce standards
  • Can set up rules for how resources are configured
  • Can tell you who changed what and when
    • Links to CloudTrail logs
  • Doesn’t prevent changes from happening
  • Regional service
  • Supports cross-region and cross-account aggregation
  • Once enabled the configuration of all supported resources is constantly tracked
    • Every time a change occurs to a resource a Configuration Item (CI) is created
      • CI represents the configuration of a resource at a point in time
        • And its relationships
    • The set of all CIs for a given resource is called its Configuration History
      • Stored in S3 bucket – the Config Bucket
  • Config Rules
    • Resources are evaluated against Config Rules
      • Either AWS managed or custom (using Lambda; see the sketch at the end of this section)
      • Resources are compliant or non-compliant
    • Changes can generate 
      • SNS notifications
      • Near-realtime events via EventBridge and Lambda
    • Can automate remediation using Systems Manager automation documents (Runbooks)
      • Mostly for EC2 instances
  • Can track deleted resources
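
A minimal sketch of a custom rule’s Lambda handler; requiring a CostCenter tag is a placeholder assumption:

    # Custom Config rule: resources must carry a CostCenter tag
    import json
    import boto3

    config = boto3.client("config")

    def handler(event, context):
        invoking_event = json.loads(event["invokingEvent"])
        item = invoking_event["configurationItem"]  # the CI being evaluated

        compliant = "CostCenter" in (item.get("tags") or {})

        config.put_evaluations(
            Evaluations=[{
                "ComplianceResourceType": item["resourceType"],
                "ComplianceResourceId": item["resourceId"],
                "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
                "OrderingTimestamp": item["configurationItemCaptureTime"],
            }],
            ResultToken=event["resultToken"],
        )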

Cost Explorer

  • Run reports/visualization on what different resources are costing
  • Use tags as the primary way to group resources
  • Can create budgets for AWS Budgets
  • Can forecast spending for upcoming month
  • Rightsizing recommendations
    • Identifies cost-saving opportunities by downsizing or terminating instances

Database Migration Service

  • Managed service for migrating database into or out of AWS
    • With no downtime
  • Uses a replication EC2 instance
  • Source and Destination endpoints
    • Connection information for databases
    • One endpoint must be on AWS
  • Jobs can be 
    • Full load
      • One-off migration of all data
    • Full load + Change Data Capture (CDC)
      • Full migration + ongoing replication
    • CDC only
      • For example, if vendor tool is better for doing initial full migration
  • Schema Conversion Tool (SCT)
    • Assists with migration
    • Can migrate stored procedures and embedded SQL in an application
      • Will automatically convert as much as possible, highlighting any manual changes that need to be made
  • Once a schema has been created on an empty target, depending on the volume of data and/or DB engines, either DMS or SCT are then used to move the data. 
    • DMS traditionally moves smaller relational workloads (<10 TB) and MongoDB
    • SCT is primarily used to migrate larger, more complex databases like data warehouses
    • DMS supports ongoing replication to keep the target in sync with the source; SCT does not
  • Snowball Edge+S3 can be used as a transport mechanism
    • Use SCT to extract the data locally and move it to an Edge device.
    • Ship the Edge device or devices back to AWS
    • Snowball Edge automatically loads data into an S3 bucket
    • DMS takes the files and migrates the data to the target data store
      • If you are using change data capture (CDC), those updates are written to the S3 bucket and then applied to the target data store

Data Lifecycle Manager

  • Automates EBS snapshots and manages snapshots

DataSync

  • Data transfer to and from AWS
  • Use cases
    • One time migrations
    • Data processing transfers
    • Archival
    • Disaster Recovery
  • Designed to work at huge scale
    • 10 Gbps per agent (~100 TB per day)
    • Can run multiple agents if you have the bandwidth
      • AWS side will auto-scale
  • Optional bandwidth limiters to avoid link saturation
  • Keeps metadata (permissions/timestamps)
  • Built-in data validation
  • Incremental and scheduled transfer options
  • Task
    • “Job” that defines what is being synced, how quickly, from/to
  • Compressed in transit
  • AWS Service integration
    • S3, FSx Windows, EFS
      • Supports transferring directly into 
        • S3 Standard, IA, One-Zone IA, Intelligent Tiering, Glacier, Deep Archive
    • Supports VPC endpoints
      • To sync over Direct Connect or VPN and avoid the public internet
    • Can sync between services / across regions
  • Encryption
    • Encryption in transit
    • Supports SSE-S3 for S3 buckets 
    • Supports EFS encryption at rest
  • Automatic recovery from transit errors
  • Include/Exclude filters
  • Agent installed on-prem as a VM
  • Can integrate with almost all on-prem storage (SMB, NFS, S3 API object storage)
  • Most cost-effective way to migrate data to AWS
    • Pay per GB moved

Data Transfer

  • No charge
    • Inbound data transfer from internet, Direct Connect or VPN
    • Transfer within the same AZ (via private IP addresses)
  • Charged
    • Between regions
    • To internet (per service per region)
    • VPC peering across AZs 
    • Into Transit Gateway
    • From Direct Connect or VPN to customer

Disaster Recovery 

  • Scenarios for using cloud-based DR
    • Backup and restore
    • Pilot light
      • A small part of the DR infrastructure is always running syncing mutable data
    • Warm standby
      • A scaled-down version of a fully-functioning environment is always running
    • Multi-site
      • Active-active configuration with one part on-prem and the other in AWS

Direct Connect (DX)

  • Directly connect your datacenter to AWS
    • Dedicated network port into AWS 
      • A physical ethernet connection 
      • Single mode fiber optic cable cross-connect to customer DX router at DX location
      • Getting the fiber extended from the DX location to your on-prem router could take weeks or months
      • 1, 10, 100 Gbps
        • 1000BASE-LX for 1 Gbps
        • 10GBASE-LR for 10 Gbps
        • 100GBASE-LR4 for 100 Gbps
    • Hosted connection
      • Physical ethernet connection to a AWS Partner
      • 50 Mbps to 10 Gbps
  • Useful for high-throughput workloads
  • Helpful when you need a stable and reliable connection without traversing public internet
  • Data from on-prem to DX and to VPC is not encrypted
    • Can use AWS VPC VPN to do this over public VIF
  • Can aggregate up to 4 Direct Connect ports using Link Aggregation Groups (LAG)
    • 40 Gbps with aggregation of 10 Gbps connections
  • Supports IPv4 or dual-stack IPv4/IPv6
  • Doesn’t consume your existing internet bandwidth
  • Router must support VLAN and BGP
  • VIF – Virtual Interface
    • Multiple VIFs per DX
      • A VIF is a VLAN and BGP session
        • Cannot extend on-prem VLANs into AWS
    • Private VIF
      • For connecting to a VPC either
        • Via a Virtual Private Gateway (VGW) in the VPC
          • Each private VIF attaches to a single VGW/VPC
          • Must be in the same region and same account
        • Via Direct Connect Gateway (DGW)
          • DGW connects to one or more VGWs/VPCs or Transit Gateways
          • See below
    • Transit VIF
      • To connect to Transit Gateway(s) via a Direct Connect Gateway
      • Can connect to Transit Gateways in multiple regions
    • Public VIF
      • For public services like S3, DynamoDB, SNS, SQS
      • Also for VPN access to VPCs
  • Direct Connect gateway
    • Globally available resource
      • Can associate with any region except China
    • Can associate with one or more VGWs/VPCs or Transit Gateways
    • Using VGWs
      • No direct communication between the VPCs that are associated with a single Direct Connect gateway
      • Hard limit of 10 VGWs per DGW
      • Can associate with VGWs in different accounts
        • Other account proposes association and DGW approves
    • Transit gateway associations
      • TGWs are per region and connect multiple VPCs within that region
      • Can connect to multiple TGWs in multiple regions
        • Hard limit of 3 TGWs per DGW
      • Can connect multiple VPCs in a single association
      • Advertise prefixes from on-prem to AWS and vice-versa
  • Resiliency
    • Basic setup has no resilience
      • AWS region is connected to multiple DX locations via redundant connections
      • Single cross-connect cable between AWS DX router and your DX router at the DX location
      • Single fiber connection extended from DX location to on-prem router
      • SPOFs
        • DX location
        • DX router
        • Cross-connect 
        • Customer DX router
        • Extension fiber to on-prem
        • Customer router
        • Customer premises
    • Improved resilience
      • Multiple cross-connects into multiple AWS DX router ports
      • Multiple fiber extensions to customer premises
      • SPOFs
        • DX location
        • Customer premises
        • Potentially fiber cable path for extension
          • For example, road work could cut off both extensions
    • Even better
      • Multiple DX locations
      • Multiple customer premises
    • Best
      • Multiple routers at each location
        • Each DX location has two AWS DX routers and customer DX routers
        • Each customer premises has two routers
        • Four fiber extensions

Directory Service

  • Family of managed services
    • Managed Microsoft AD
      • Real Microsoft AD 2012
      • Lives in your VPC
        • Data lives in VPC as well
      • Deploy into multiple AZs for HA
      • Extend to on-premises AD using AD trust
        • Needs to happen over Direct Connect or VPN
        • Resilient if VPN fails
          • Services in AWS will still run
      • Use if application must have MS AD Domain Services or AD Trust
    • Simple AD
      • Standalone managed directory
      • Basic AD features
        • Based on Samba 4
      • Deploy into multiple AZs for HA
      • For 500 (Small mode) to 5000 (Large mode) users
      • Makes Windows EC2 easier
      • Amazon Workspaces can use it
      • Can also use for Linux LDAP
      • Cannot use for on-prem
        • Cannot support trusts
      • Should be default option
    • AD Connector
      • Proxy to on-prem AD
      • Join EC2 instances to existing AD domain
      • Can scale across multiple AD connectors
      • Requires VPN or Direct Connect
      • No caching – if VPN fails, AWS side won’t have AD
      • Use if you don’t want to store any directory data in AWS
    • Cloud Directory
      • Fully managed
      • Intended for developers
      • Hierarchical database supporting hundreds of millions of objects
      • Use cases: org charts, course catalogs, device registries
    • Cognito user pools
  • Use existing corporate credentials to login using AWS SSO
  • SSO for any domain-joined EC2 instance

DNS

  • DNS root zone contains the nameservers for the TLDs
    • Root zone is managed by IANA
    • 13 root servers
      • Operated by 12 large organizations
      • List hardcoded in resolvers: “root hints file”
  • TLD zone contains the delegation details for the domains in that TLD
    • Managed by a Registry
    • gTLD: generic top level domain (.com, .org)
    • ccTLD: country code top level domain (.uk, .de)
  • Registrar 
    • Handles domain registration
    • Has relationships with all TLD Registries
    • Adds NS records in the TLD zone
  • Chain of trust
    • Root hints file
    • Root zone
    • TLD zone
    • Authoritative nameserver for specific domain zone

DynamoDB

  • NoSQL, not relational
  • Public service
  • Wide-column, key-value and document
  • Access via console, CLI, API – no query language like SQL
  • No self-managed servers
  • Billing types
    • Manual / Automatic provisioned performance
      • Explicitly set capacity values per table
        • 1 Write Capacity Unit (WCU) = 
          • 1 x 1KB per second for regular writes
          • 0.5 x 1KB per second for transactional writes
        • 1 Read Capacity Unit (RCU) = 
          • 1 x 4KB per second for strongly consistent reads
          • 2 x 4KB per second for eventually consistent reads
          • 0.5 x 4KB per second for transactional reads
    • On Demand
      • Pay per million RCU or WCU + storage
      • For unknown / unpredictable workloads
        • Or super-low administration
      • Five times more expensive than provisioned
    • Every table has a RCU and WCU burst pool (300 seconds)
      • You will get an exception if your burst pool is exhausted
    • For predictable usage patterns use Provisioned Capacity
      • Otherwise use On Demand
      • You can switch between them only once every 24 hours
  • Resilient across 3 AZs 
    • And optionally global
  • Stored on SSD storage
    • Single-digit millisecond access
  • Event-driven integration
    • Do thing when data changes
  • Table
    • Zero or more items (rows)
      • No limit to the number of attributes in an item
        • No rigid schema
          • Attributes are key/value pairs in an item
        • 400KB max item size
          • Including the attribute names
    • Primary key per item
      • Simple: Partition key
      • Composite: Partition and sort keys
  • Reading and Writing operations (see the boto3 sketch at the end of this section)
    • Query
      • Uses a single partition key and an optional sort key or range
      • Capacity consumed is the total size of all items returned
        • Even if you filter the results on non-PK attributes or only use a subset of attributes in each returned item
      • Keeping item size small is key for cost
    • Scan
      • Reads every item of an entire table
      • Capacity consumed is the total size of all items in the table
        • Even if you filter the results or select a subset of attributes; filtering happens after the items are read
    • WCU / RCU Calculations
      • Consistency factor for operations
        • 2 for transactional
        • 1 for strongly consistent reads / standard writes
        • 0.5 for eventually consistent reads
      • Write
        • Need to write 10 items per second, 2.5KB average size
        • WCU per item = ceil(size / 1KB) = ceil(2.5) = 3
        • Total WCU consumed = WCU per item * consistency factor * rate =
          • 3 * 1 * 10 = 30
      • Read
        • Same except use applicable consistency factor
  • Eventually consistent reads by default
    • Consistency across all replicas usually reached within one second
    • Can turn on strong consistency per read
      • Strong consistency reads go to leader
  • Transactions
    • Another option for reads and writes
    • Atomic changes across multiple rows/tables
      • In a single account and region
  • On demand backup
    • Full backups at any time
    • Zero impact on performance or availability
    • Consistent within seconds
    • Retained until deleted
    • Backup lives in same region as source table
    • Restore
      • Same or cross-region
      • With or without indexes
      • Adjust encryption settings
  • Point in time recovery
    • Protects against accidental changes
    • Not enabled by default
      • Need to enable Continuous Backups
        • Continuous stream of changes allows replay to any point 
    • Can restore to any point between 5 minutes ago and 35 days ago
    • Restoring creates a new table
  • Streams
    • Time ordered sequence of item-level changes in a table
    • Similar to Change Data Capture
    • Encrypted at rest
    • Writes changes in near real time
    • Different view types influence what change data goes in the stream
      • KEYS_ONLY
      • NEW_IMAGE
      • OLD_IMAGE
      • NEW_AND_OLD_IMAGES
    • Combine with Lambda for trigger-like functionality
      • Uses Lambda Event-Source mapping
      • Use cases
        • Reporting / analytics
        • Aggregation
        • Messaging
        • Notifications
    • Two different ways of using
      • DynamoDB Streams
        • 24 hour data retention
        • Max 2 consumers per shard
        • Throughput quotas in effect
        • Access methods
          • Pull mode over HTTP using GetRecords API
          • Lambda
          • DynamoDB Streams Kinesis Adapter
        • Records in order of changes
        • No duplicates
      • Kinesis Data Streams for DynamoDB
        • Up to 1 year data retention
        • Max 5 consumers per shard or 20 using enhanced fan-out
        • No throughput quotas
        • Access methods
          • Pull mode over HTTP using GetRecords API
          • Push mode over HTTP/2 using SubscribeToShard API (requires enhanced fan-out)
          • Any other Kinesis access method
        • Use timestamp for ordering
        • Duplicates may appear
  • Indexes
    • Local Secondary Indexes (LSI)
      • Different sort key
      • Must be created at the same time as the base table
      • Max 5 LSIs per base table
      • Shares the RCU/WCU of the base table (for provisioned capacity tables)
        • Every partition of a local secondary index is scoped to a base table partition that has the same partition key value. 
      • If you query a LSI you can request attributes that are not projected into the index
        • DynamoDB automatically fetches those attributes from the base table
        • Base table sort key is projected into each LSI item
    • Global Secondary Indexes (GSI)
      • Different partition and sort keys
      • Can be created at any time
      • Max 20 GSIs per base table
      • Are always eventually consistent
      • Have their own RCU/WCU allocations
        • Stored in its own partition space away from the base table and scales separately from the base table
    • Use GSIs as default, LSI only when strong consistency is needed
    • Projection attributes in index table
      • ALL
      • KEYS_ONLY
      • INCLUDE (subset of attributes)
    • Indexes are sparse
      • Index tables only contain items where the alternative key is present in the base table
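
Since GSIs can be added after the fact (unlike LSIs), a boto3 sketch of adding one to an existing on-demand table (table, index, and attribute names are made up):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# LSIs can only be defined in the original create_table call;
# GSIs can be added later like this
dynamodb.update_table(
    TableName="orders",  # hypothetical table in on-demand capacity mode
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "by-customer",
            "KeySchema": [{"AttributeName": "customer_id", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "KEYS_ONLY"},
            # A provisioned-capacity table would also need ProvisionedThroughput here
        }
    }],
)
```
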
  • Global Tables
    • Managed multi-master, multi-region replication
    • For globally distributed applications
    • Can read and write to any region
    • Uses last-writer-wins for conflict resolution
    • Strongly consistent reads within a single region only
    • Creating a global table
      • Create a table in each region
      • Create the global table configuration
    • Based on DynamoDB streams
    • For disaster recovery and HA
    • No application rewrites
    • Replication latency under one second
  • Autoscaling
    • Can automatically scale read/write/both capacity 
      • For table or global secondary index
    • Based on Cloudwatch metrics and alarms
    • Uses target tracking
  • DAX
    • DynamoDB Accelerator
    • Cache in front of DynamoDB
    • Microsecond cache hits
      • Millisecond cache misses
    • For eventually consistent reads only
    • Read/write through cache
    • Use the DAX SDK in the application (sketch at the end of this section)
      • Supports Go, Java, Node.js, Python, and .NET only
        • Not Javascript in the browser
    • Lives in a VPC
      • Deploy into multiple AZs
      • Primary node in one AZ, replicas in others
        • Write to primary, read from any
        • If primary fails election is held to promote replica 
    • Caches
      • Item cache
      • Query cache
    • You determine 
      • Node size and count
        • Scale up or out
      • TTL for data
      • Maintenance windows
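
Using DAX from an application looks something like this with the Python SDK (a sketch; the cluster endpoint and table name are placeholders):

```python
from amazondax import AmazonDaxClient  # pip install amazon-dax-client

# Drop-in replacement for a boto3 DynamoDB resource; GetItem and Query
# results are served from the item and query caches respectively
dax = AmazonDaxClient.resource(
    endpoint_url="daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
)
table = dax.Table("orders")                    # hypothetical table
item = table.get_item(Key={"order_id": "42"})  # microseconds on a cache hit
```
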

Elastic Block Storage – EBS

  • Resilient within one AZ
  • You can increase the size of a volume on the fly
  • You can change the volume type or adjust performance
    • May take up to 24 hours
  • Billed per GB-month
    • You are still billed for EBS if the associated instance is stopped
  • The default is to delete EBS Volume on termination
    • Can opt out
  • Stopping instance will not lose EBS data
    • Unlike ephemeral volumes (instance store)
  • SSD Volumes
    • Supported as boot volumes
    • gp2:  General purpose
      • 1 GB – 16 TB
      • Up to 16,000 IOPS per volume (depending on volume size)
        • 3 IOPS per GB
        • Minimum of 100 IOPS
      • Up to 128-250 MB/s throughput (depending on volume size)
      • 99.9% durability
      • Max burstable IOPS: 3000
        • You can burst up to max approximately 10% of the time
        • At 100GB you could burst for 30 minutes but it would take 5 hours to refill the bucket
      • Best bet is to over-provision to >= 1TB: the baseline is then 3,000+ IOPS so the burst bucket no longer matters (worked example below)
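
The 100GB numbers fall straight out of the credit-bucket arithmetic:

```python
size_gb = 100
baseline_iops = max(100, 3 * size_gb)   # 300 IOPS at 100 GB
bucket = 5_400_000                      # initial / maximum I/O credit balance
burst_iops = 3_000

# Bursting drains the bucket at (burst - baseline) credits per second
burst_minutes = bucket / (burst_iops - baseline_iops) / 60   # ~33 minutes
# An idle volume refills the bucket at the baseline rate
refill_hours = bucket / baseline_iops / 3600                 # 5 hours
print(burst_minutes, refill_hours)
```
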
    • gp3:  General purpose
      • 1 GB – 16 TB
      • 3,000 IOPS and 125 MB/s standard
      • Extra cost for up to 16,000 IOPS or 1,000 MB/s
      • 99.9% durability
    • io1/io2: Provisioned IOPS
      • For high performance applications
      • Very expensive
      • 4 GB – 16 TB
      • Up to 32,000 (or 64,000 for Nitro) IOPS max
        • 50:1 IOPS:GB (io1)
        • 500:1 IOPS:GB (io2)
      • Up to 1,000 MB/s throughput
      • Per instance max performance
        • Need multiple volumes per instance to get the total (ie RAID0)
        • 260,000 IOPS and 7,500 MB/s (io1)
        • 160,000 IOPS and 4,750 MB/s (io2)
      • 99.9% durability for io1; 99.999% for io2
    • io2 Block Express
      • Sub-millisecond latency
      • 4GB – 64 TB
      • Up to 256,000 IOPS
        • 1000:1 IOPS:GB
      • Up to 4,000 MB/s throughput
      • Per instance max performance
        • Need multiple volumes per instance to get the total (ie RAID0)
        • 260,000 IOPS and 7,500 MB/s 
      • 99.999% durability
  • HDD Volumes
    • st1: Throughput optimized
      • Big data / Data warehouse / Log processing
      • 125 GB – 16 TB
      • 500 IOPS max
      • 500 MB/s throughput max
      • 40MB/s/TB base
      • 250MB/s/TB burst
      • 99.9% durability
    • sc1: Cold HDD
      • Infrequently accessed throughput-oriented
      • Where lowest cost is most important
      • 125 GB – 16 TB
      • 250 IOPS max
      • 250 MB/s throughput max
      • 12MB/s/TB base
      • 80MB/s/TB burst
      • 99.9% durability
  • Snapshots
    • Incremental point-in-time
      • First snap is full / slow
    • Stored in S3
      • Regionally resilient vs volume which is AZ resilient
    • For best consistency stop the instance first
    • You can share between accounts
    • Can copy snapshot to a different region
    • Can create volume from a snapshot in a different region
    • Only billed for the data used (GB-month)
      • Specifically for the incremental snapshots you are only charged for the incremental new/changed amount of data
    • Deleting a snapshot might not reduce your organization’s data storage costs. 
      • Later snapshots might reference that snapshot’s data, and referenced data is always preserved. 
      • If you delete a snapshot containing data being used by a later snapshot, data and costs associated with the referenced data are allocated to the later snapshot. 
  • Fast snapshot restore
    • Normally EBS volumes populate data lazily from snapshot restores
    • Fast snapshot restore is an option that you enable on a snapshot, per AZ
    • Volumes created from a FSR-enabled snapshot are fully populated at creation
    • Up to 50 snaps per region
    • $540 a month per snapshot per AZ in us-east-1
    • You can force a full restore using dd to read every block
  • Encrypted volumes
    • Data at rest, in transit encrypted
    • Uses AWS KMS keys
      • Either AWS managed (aws/ebs) or CMK
        • AWS automatically creates a managed key per region
      • You have a default EBS key per region
      • Data encryption key (DEK) generated, encrypted, and stored with volume
      • When instance started DEK decrypted and stored in hypervisor memory
        • DEK is then used to encrypt written data / decrypt read data
          • Uses AES-256
        • Operating system is not involved in the encryption
          • Done by hypervisor in hardware (Nitro card)
          • No instance CPU used – no performance hit
            • “Slight increase of latency”
    • When you create a snapshot
      • If the volume is unencrypted
        • You can choose to encrypt it 
          • Unless Encryption by Default is on then it must be encrypted
      • If the volume is encrypted
        • Then the snapshot will also be encrypted
          • Uses the same key as the volume by default
    • Volumes created from encrypted snapshot are encrypted
      • Uses the same key as the snapshot by default
    • Can copy an encrypted snapshot to an encrypted snapshot with a different key
      • Doubles the storage used
    • To encrypt an unencrypted volume (boto3 sketch after this section)
      • Create a snapshot
      • Copy the snapshot to an encrypted snapshot
      • Restore a new volume from the encrypted snapshot
    • To switch an instance to use encrypted volume
      • Create a snapshot of unencrypted volume
      • Create a copy of snapshot with encryption enabled
      • Create AMI from encrypted snapshot
      • Use that AMI to launch new encrypted instance
    • Can turn on Encryption by Default for an account in a specific region
      • Can set default key to use
        • Can override per new volume / snapshot
      • Snapshots created from unencrypted volumes will be required to be encrypted
      • New volumes restored from unencrypted snapshots will be required to be encrypted
      • Each volume uses a unique DEK
      • Instance type must support EBS encryption
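
A boto3 sketch of the snapshot-copy-restore dance for encrypting an existing volume (volume ID, region, and AZ are placeholders; the copy uses the default EBS key unless you pass KmsKeyId):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Snapshot the unencrypted volume
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Copy the snapshot with encryption enabled
copy = ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snap["SnapshotId"],
    Encrypted=True,
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[copy["SnapshotId"]])

# 3. Restore a new, now-encrypted volume from the encrypted snapshot
ec2.create_volume(SnapshotId=copy["SnapshotId"], AvailabilityZone="us-east-1a")
```
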
  • Multi-attach
    • Attach a provisioned IOPS SSD to multiple EC2 Nitro instances in same AZ
    • Not boot volume
  • Metrics
    • BurstBalance
    • VolumeReadBytes, VolumeWriteBytes
    • VolumeReadOps, VolumeWriteOps
    • VolumeQueueLength

Elastic Compute – EC2

  • Good for traditional OS + application, long running compute workloads
    • Server-style applications
    • Burst or steady-state load
  • Instance runs in a single AZ
    • A network interface is in one subnet in that AZ
    • Attached EBS must live in the same AZ
    • Instance stays on a host until it is stopped and restarted
      • But it always stays in the same AZ
  • States
    • Pending
    • Running
    • Stopped
    • Terminated
  • Instance can move to a new host when starting after stopped/hibernating
  • Status Checks
    • System status check
      • Verifies instance is reachable via network
        • Does not validate that OS is running
      • Checks the EC2 host
      • Failure could mean
        • Loss of system power
        • Loss of network connectivity
        • Host software / hardware issues
    • Instance status check
      • Verifies that OS is receiving network traffic
      • Failure could mean
        • Corrupt filesystem
        • Incorrect instance networking
        • OS kernel issues
    • Status check alarm
      • Cloudwatch alarm that fires if there is a system status check failure
      • Alarm action can be
        • Recover
          • Moves instance to another host and starts it
          • Only works for certain instance types in a VPC + EBS
          • Will try up to 3 times to recover
          • Keeps the same public and private IP address
        • Reboot
        • Stop
        • Terminate
  • Instance types
    • Raw CPU, memory, local storage capacity
    • Resource ratios
    • Storage and data network bandwidth
    • System architecture / vendor
    • Additional features / capabilities
  • Families
    • General Purpose
      • T, M, A
      • The default choice
      • Balance of compute, memory and networking
      • Examples
        • A1, M6g: Graviton ARM.  Efficient
        • T3, T3a: Burst pool.  Cheaper assuming nominal low usage
        • M5, M5a, M5n: Steady state workload.  Intel
    • Compute Optimized
      • C
      • Compute-bound applications
      • Examples
        • C5, C5n: Scientific, gaming, machine learning, media encoding
    • Memory Optimized
      • R, X, High Memory
      • Large memory-resident data sets
      • R for RAM
      • Examples
        • R5, R5a: Real-time analytics, cache servers, in-memory DBs
        • X1, X1e: Large in-memory apps.  Lowest $ per GB
        • High Memory: Largest memory capacity machines in AWS
        • z1d: Large memory and CPU with directly attached NVMe
    • Accelerated Computing
      • P, Inf, G, F, VT
      • Hardware accelerators, co-processors, GPUs, FPGAs
      • P for parallel, G for GPU, F for FPGA
      • Examples
        • P3: Tesla v100 GPU, parallel processing and machine learning
        • G4: NVIDIA T4 Tensor GPU.  Machine learning and graphics processing
        • F1: FPGA.  Genomics, financial analysis, big data
        • Inf1: Machine learning – recommendations, forecasting, voice recognition
    • Storage Optimized
      • I, D, H
      • High throughput and low latency disk I/O, 10ks of IOPS
      • Examples
        • I3, I3en: Local high performance NVMe SSD. NoSQL, data warehousing
        • D2: Dense storage (HDD).  Data warehousing, Hadoop, distributed file systems.  Lowest price disk throughput
        • H1: High throughput, balance CPU/memory, HDFS, Kafka
  • Data transfer between two instances in the same AZ (over private IPs) is free
  • Purchase options
    • On demand
      • The default purchase option
      • Billed per second
      • Billed only when running
        • Storage is billed always
      • No capacity reservation
      • Predictable pricing
        • No upfront cost
        • No discount
      • New or uncertain application needs
      • Short-term, spiky or unpredictable workloads that can’t tolerate any disruption
    • On-demand capacity reservations
      • Ensures that you have capacity in an AZ
      • No commitment and can be created and canceled as needed
      • No price discount or term requirements
      • Good to use with Regional Reserved instances which do not reserve capacity
        • Zonal Reserved instances do reserve capacity
      • When creating the reservation, specify
        • AZ
        • Number of instances
        • Instance attributes
          • Instance Type, tenancy, OS
      • Only instances that match the attributes will use the reservation
        • If there are running instances that match they will be used for the reservation
      • Billing starts as soon as the matching capacity is provisioned
        • You will be charged for the capacity whether or not you use it
        • When you no longer need it, cancel it to stop incurring charges
      • A reservation counts against your per-region instance limits even if it is unused
      • You can share capacity reservations with other accounts
      • Limitations
        • Cannot use in placement groups
        • Cannot use with dedicated hosts
    • Spot
      • Prices fluctuate
      • Save up to 90% based on spare capacity
      • Set maximum price you’ll pay
        • If spot price goes above max price instance is terminated
      • Applications that have flexible start/end times
      • Applications that only make sense at low cost
      • Applications must be able to tolerate termination
      • Bursty capacity needs
      • Not suitable for workloads that need persistent local storage
      • Spot Block: run spot instances for a defined duration (1-6 hours) without interruption
      • Spot Fleet: collection of Spot and optionally On-Demand instances
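
One way to request spot capacity is the InstanceMarketOptions parameter on a regular run_instances call (a sketch; the AMI ID and max price are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.01",        # terminated if the spot price exceeds this
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
```
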
    • Reserved
      • Reserve capacity for 1 or 3 years
        • 3 years has larger discount
      • Billed whether instance is running or not
      • Scenarios
        • Known steady-state usage
        • Lowest cost for apps that cannot tolerate disruption
        • Need reserved capacity
      • Standard RIs 
        • Save up to 72%
        • Steady-state usage
      • Convertible RIs
        • Save up to 54%
        • Can change the attributes of the RI as long as the new RI is of equal or greater value
        • Steady-state usage
      • Scheduled RIs
        • Long term usage which doesn’t run constantly
        • Launch within a time window every day
        • Fraction of a day, week, month
        • Minimum 1,200 hours per year for an instance
        • You cannot purchase Scheduled Reserved Instances at this time. AWS does not have any capacity available for Scheduled Reserved Instances or any plans to make it available in the future. To reserve capacity, use On-Demand Capacity Reservations instead. 
      • Payment options
        • No upfront
          • Reduced per-second fee for instances
        • Partial upfront
          • Additionally reduced per-second fee for instances
        • All upfront
          • No per-second fee for instances
      • Zonal or Regional Scoping
        • Regional is the default scope at purchase
        • Zonal reserves capacity for the specific instance type
          • If there are capacity issues in an AZ the reservation takes priority
        • Regional scope applies discount to any instances in the family
          • Launched in any AZ
          • Does not reserve capacity
          • Based on a normalization factor
          • For example
            • A t2.medium instance has a normalization factor of 2
            • If you purchase a t2.medium default tenancy Reserved Instance in us-east-1 and you have two running t2.small instances in your account in that Region, the billing benefit is applied in full to both instances. 
            • Or, if you have one t2.large instance running in your account in us-east-1, the billing benefit is applied to 50% of the usage of the instance. 
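
The arithmetic behind that example, using the published normalization factors (small = 1, medium = 2, large = 4):

```python
# Normalization factors from the EC2 Reserved Instance documentation
factor = {"small": 1, "medium": 2, "large": 4}

ri_units = factor["medium"]              # one t2.medium RI = 2 units

usage = 2 * factor["small"]              # two t2.smalls = 2 units
print(min(1.0, ri_units / usage))        # 1.0 -> both instances fully discounted

usage = factor["large"]                  # one t2.large = 4 units
print(min(1.0, ri_units / usage))        # 0.5 -> 50% of its usage discounted
```
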
      • Reserved Instance Marketplace
        • A way to sell remainder of reservation
        • Reservations grouped by time remaining
        • You set the price
        • Buyer automatically buys the lowest priced reservation that matches
        • Restrictions
          • Convertible Reserved cannot be sold
          • Must have held for at least 30 days
          • Only Zonal RIs can be sold
          • AWS charges 12% of selling price
    • Savings Plans
      • Commitment for 1 or 3 year term
      • Products have an on-demand rate and a savings plan rate
      • Resource usage consumes savings plan commitment at the reduced rate until the committed usage is fully used
        • Then the rate switches to regular on-demand rate
      • Two flavors
        • Compute Savings Plans
          • Save up to 66%
          • Applies to any EC2, Fargate and Lambda
          • Any region, any OS, any tenancy
        • EC2 Instance Savings Plan
          • Save up to 72%
          • Commitment is to an individual instance family in a specific region
            • Flexible within that family and region
          • Any OS or tenancy
  • Dedicated Hosts and Instances
    • Dedicated Hosts
      • Physical server dedicated to you that you can launch EC2 instances on
      • No per-instance charges – you pay for the host
      • Choose the host family (eg R5) and then divide that host into instances
      • Useful for per-socket/per-core licenses
        • Windows Server, SQL Server, SUSE Linux Enterprise
      • Useful for compliance requirements
      • You have visibility and control over how instances are placed on the host
      • You have visibility of the number of sockets and physical cores
      • You can either manually place instances or have AWS License Manager automatically place them
      • Can choose On-Demand or Reserved
      • Allows you to deploy instances to the same physical hardware over and over
        • Also if you start/stop instance it will be placed on the same host
        • “Affinity”
      • Supports Auto Scaling
      • Hosts can be shared between accounts in organization using RAM
      • Limitations
        • Cannot use certain AMIs
          • License-included RHEL / SUSE
        • Cannot use RDS
        • Cannot use Placement Groups
    • Dedicated Instances
      • Instances running on hardware dedicated to you
      • No other account’s instances will run on the same hardware
        • But other instances from your account might run on it
      • Choose Dedicated Tenancy when launching instance
        • You can set default tenancy at the VPC level
      • Supports On-Demand, Reserved and Spot
      • Doesn’t work for licensing use cases
      • Can’t choose the host that your instance will be placed on
  • AMI: Amazon Machine Image
    • Used to launch EC2 instances
    • AWS, marketplace or community provided
    • Regional
      • Only works in that region
    • Cannot be edited
    • Contains
      • Permissions
        • Public, Owner, specific accounts can use
      • Root volume as either
        • EBS snapshot
        • Template for instance store volume
          • Which OS and software is installed
      • Block device mapping
        • Maps volumes to OS device
    • Lifecycle
      • Launch
        • Attach EBS volumes
      • Configure
        • Set up attached EBS volumes
          • Install software, etc
      • Create AMI
        • “Baking”
        • Snapshots are created from EBS volumes
        • Block device mappings created 
      • Launch
        • New instance has same # of EBS volumes created from the snapshots
          • Can increase size / change type of volumes
    • Backed by (root device is) either
      • EBS Volume
        • AMI stored as EBS snapshot
        • Boots in < 1 minute
        • Instance type can be changed after stopping
      • Instance store
        • Stored in S3
        • Baked using Packer, etc
        • Boots in < 5 minutes
    • Encryption
      • At launch
        • Will use encryption of source snapshot by default for launched volume
        • You can override this to encrypt a volume from an unencrypted snapshot
        • If Encryption By Default is enabled for the region the launched volume will be encrypted using the default key
      • When copying AMIs
        • Works similarly to at launch
  • Instance store
    • Physically connected to an EC2 host
      • Instances on that host can access the volume
    • Included in price of instance
    • Must be attached at launch time
      • Cannot attach post-launch even if stopped
    • Instance store data lost if EC2 instance
      • Stopped, terminated or hibernated
      • Moved to another host
      • Size changed
      • Host or volume hardware failure
    • Instance store data not lost on reboot
    • Much better performance than EBS
      • D3 instance type: 4.6 GB/s throughput
      • I3 instance type: 16 GB/s sequential throughput, up to millions of IOPS
    • Might be useful if
      • Application replicates its data
      • Super high performance needed 
  • Connecting to EC2
    • ssh (port 22) Linux
    • RDP (port 3389) for Windows
    • Need key pair for either Linux or Windows 
      • Used to decrypt Administrator password for Windows
    • Instance connect
      • Console-based ssh access
        • Requires a security group allowing port 22 from the region's EC2 Instance Connect service IP range
        • Requires public IP
      • Also support CLI-based access from machines in AWS
      • Generates a one-time key pair and pushes the public key to the instance
    • Session manager
      • Can be used to connect to private instances
      • Requires
        • Instance policy
        • Agent installed on instance
          • Pre-installed on AWS AMIs
        • Access either to public internet or endpoints to
          • com.amazonaws.<region>.ssm
          • com.amazonaws.<region>.ssmmessages
          • com.amazonaws.<region>.ec2messages
      • Connect via console web UI, AWS CLI or SDK
      • Fully auditable
        • Designed to be like Thycotic
      • No inbound ports required
      • Use instead of bastion hosts
      • Redirect ports from instances to local computer
      • Allow just a single command or set of commands
  • Security groups
    • Changes take effect immediately
    • Any number of instances can use a SG
    • An instance can use multiple SGs
    • All inbound traffic is blocked by default
    • All outbound traffic is allowed by default
  • Bootstrap script (User Data)
    • Runs only when instance first launched
      • You can change it (while the instance is stopped) but it doesn't get executed again
        • The metadata endpoint serves the updated data once the instance starts back up
    • OS gets it from http://169.254.169.254/latest/user-data
      • Blindly runs it as root.  Doesn’t know if it succeeds or fails
    • Log output at /var/log/cloud-init-output.log
    • Not secure – don’t pass passwords, etc
    • Limited to 16KB
    • Use in conjunction with AMI baking
      • eg, configuration data in user data
  • Metadata / user data accessible at the link-local address 169.254.169.254 (example below)
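
Both endpoints are plain HTTP GETs against the link-local address; a minimal sketch (IMDSv1 style, without the session-token step that IMDSv2 adds):

```python
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest"

user_data = urlopen(f"{BASE}/user-data").read().decode()
instance_id = urlopen(f"{BASE}/meta-data/instance-id").read().decode()
print(instance_id, user_data[:80])
```
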
  • Elastic IPs
    • Static, public IPv4 address
    • Tied to a region
    • Tied to a network border group (the set of zones AWS advertises the IP from)
    • Hourly charge if it is not associated with an instance, or if the instance is stopped
    • Limit of 5 per region per account
      • Unless you brought the IP address with you
    • You can associate an Elastic IP with an instance or a network interface
      • If with an instance it is also associated with its primary network interface
    • Useful for reverse DNS records for email
  • Networking devices
    • ENI: basic networking
      • eg. management networks
    • ENA: enhanced network adapter
      • Speeds between 10 and 100 Gbps
    • EFA: Elastic Fabric Adapter
      • HPC, Machine Learning, OS-bypass
      • Linux only
      • Cannot be added to a running instance
    • An EC2 instance with multiple network interfaces
      • Each interface can be in a different subnet in the instance's AZ
    • Attributes of an interface
      • MAC address
      • Primary private IPv4 address
        • Automatically assigned from the subnet CIDR
        • Maintained across stop/start
        • A VPC-local DNS name is created to match it
          • ip-10-16-0-10.ec2.internal
      • Zero or more secondary IP addresses
      • Zero or one public IPv4 address
        • Required if the instance needs to communicate with the internet or AWS services with public endpoints
        • This will change if the instance is stopped, hibernated or otherwise moves hosts
        • The operating system never sees this address
          • It’s managed by the Internet Gateway via NAT
        • Allocated a public DNS name
          • ec2-3-89-7-136.compute-1.amazonaws.com
            • Inside VPC – resolves to private IPv4
              • So traffic doesn’t need to leave VPC
            • Elsewhere – resolves to public IP
      • Zero or one Elastic IP per private IPv4 address
        • If assigned to the primary network interface it replaces the public IPv4
          • And the public DNS name changes to match the Elastic IP address
          • If the Elastic IP is removed from the interface it gets a new public IPv4
            • Not the same as the original public IPv4
      • Zero or more IPv6 addresses
        • Only supported by modern instance types
        • Needs to have an AMI that is configured for DHCP6, or manually configured
        • As of November 23, 2021 you can have an IPv6-only subnet that assigns an IPv6-only address to the primary adapter
      • Security groups
      • Source/destination check setting
    • Secondary network interface
      • If you use a software license tied to the MAC address you can detach the interface from the instance and move it to another instance
        • License portability
      • Useful for applying additional security groups to an instance
    • Attaching network interfaces
      • Hot attach: while instance is running
      • Warm attach: while instance is stopped
      • Cold attach: while instance is being launched
      • You cannot bond interfaces to increase bandwidth
  • Enhanced Networking
    • Uses SR-IOV
      • NIC is virtualization-aware
        • ENA or Intel 82599 VF
      • No charge – available on most EC2 types
        • But needs to be enabled
      • Higher bandwidth and lower host CPU usage
        • Hypervisor does not need to mediate access to NIC
      • Lower latency
  • EBS Optimized
    • Historically network was shared between data and EBS
    • EBS Optimized means instance has dedicated network capacity for EBS
    • Most modern instance types support it and have it enabled by default at no charge
      • Some types support it at extra cost
    • Required by instance types that offer higher performance
      • Especially when using gp2 and io1 volume types
  • Placement groups
    • Cluster
      • Pack instances close together
      • For maximum performance and best latency
      • Low network latency for HPC, etc
      • Same rack in one AZ
        • Could even be the same host
      • All members have direct network connections to each other
        • 10 Gbps single stream performance
          • vs. normal 5 Gbps between instances
        • Lowest latency and max packets per second in AWS
      • Should use homogenous instance types
        • Requires supported instance type
        • Should use Enhanced Networking
      • Launch all at the same time
      • Can span VPC peers – but impacts performance
      • If you receive a capacity error when launching an instance in a placement group that already has running instances
        • Stop and start all of the instances in the placement group, and try the launch again.
        • Restarting the instances may migrate them to hardware that has capacity for all the requested instances.
    • Spread
      • Keep instances separated
      • For maximum availability and resilience
      • Use case
        • Small number of critical instances that need to be kept separate from each other
      • Makes sure each instance is placed on a separate rack
        • Separate power and networking
      • One instance per rack
      • Can span multiple AZs
      • Maximum of 7 instances per AZ
        • Hard limit
      • Dedicated hosts/instance are not supported 
    • Partition
      • Groups of instances spread apart
      • Balances HA and performance
      • Separate racks, multiple instances per rack
        • Separate power and network per rack
      • Maximum of 7 partitions per AZ
        • No limit on number of instances
      • You can launch an instance into a specific partition
        • Otherwise EC2 will automatically distribute new instances across partitions
      • An EC2 instance can query the partition that it is in to inform the topology-aware application
        • HDFS, HBase, Cassandra
      • Dedicated hosts/instance are not supported 
    • Only certain kinds of instances can be in Placement Groups
      • Compute / Memory / Storage-optimized, GPU
    • Recommend that you launch all of the instances in a single launch request and use the same instance type
      • This avoids not being able to launch an instance into the group later due to lack of capacity
      • If you get a capacity error launching an instance into a group stop all of the existing instances and relaunch them all including the added instance
    • Cannot merge placement groups
    • You can move an instance into/out of/between a placement group 
      • via CLI or SDK only currently
      • Must be stopped first
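
Creating a group and launching into it with boto3 (a sketch; group name, AMI, and counts are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Strategy is one of: cluster, spread, partition
ec2.create_placement_group(GroupName="hpc-group", Strategy="cluster")

# Launch every member in one request, all the same instance type,
# to reduce the chance of a capacity error
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.18xlarge",
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-group"},
)
```
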
  • Hibernation
    • Preserves current RAM onto EBS
      • EBS volume must be encrypted
    • Much faster to boot because OS+apps don’t need to be initialized
    • Available for On-Demand and Reserved instances
    • Supported instance families: C, M, R
    • Cannot be hibernated for > 60 days
    • Instance RAM must be < 150 GB
  • To move an EC2 instance to another region
    • Create an AMI from the instance (snapshots its EBS volumes)
    • Copy the AMI, along with its snapshots, to the new region
    • Launch a new instance in the new region from the copied AMI
  • Instance roles and profiles
    • Best practice for providing an instance with permissions
    • An instance can be assigned an instance profile which contains a single role
      • In the console when you assign a role to an EC2 instance it really creates an instance profile for the instance with the same name as the role
    • Delivered to instance via instance meta-data
      • latest/meta-data/iam/security-credentials/role-name
    • Credentials from role are automatically rotated and always valid
      • If the application has cached the credentials it needs to get new ones after they expire
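
The credentials show up in instance metadata as a JSON document; the SDKs fetch and refresh it for you, but you can inspect it directly (an IMDSv1-style sketch):

```python
import json
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials"

role = urlopen(BASE).read().decode()                 # name of the attached role
creds = json.loads(urlopen(f"{BASE}/{role}").read())
# Temporary keys plus an Expiration timestamp - re-fetch after expiry
print(creds["AccessKeyId"], creds["Expiration"])
```
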
  • Termination protection
    • You can disable termination for an instance
    • The permission to change termination protection is separate from the permission to terminate
      • So you can have role separation
  • Shutdown behavior
    • You can change the shutdown behavior such that it terminates instead of stopping
      • To avoid having a bunch of stopped instances lying around
      • Doesn’t respect termination protection

Elastic Container Service – ECS

  • Container orchestrator
  • Simpler than K8s
  • Default option for exam
  • Container definition
    • What image to use
    • What ports to expose
  • Task
    • Like a pod – one or more containers
    • Attach Task Role to a task for permissions
    • Resources: RAM + CPU
    • Network mode: host, awsvpc, none, bridge
      • awsvpc (required for Fargate, optional on EC2)
        • Creates an ENI on the host per task
        • Containers can use any port
        • You can use security groups to control inter-service communication
        • There is a limit on the number of ENIs per host
          • ENI trunking is used to solve
          • Only supported by certain instance types
          • Increases host startup time
          • The trunk ENI can be created in a different subnet so you don’t run out of IP addresses
      • bridge (EC2 only)
        • Bridge is connected to the host’s ENI
        • Static mode: host port and container port is the same
          • Container port must be unique per host
        • Dynamic mode: host port is ephemeral
          • Allows containers on a host to use the same container port
          • ECS keeps ELB updated with proper host+port
          • Because any port could be used you can’t control inter-service communication
    • Volumes
    • Environment variables per container
      • Securing environment variables
        • Use ARN of Parameter Store variable or Secrets Manager secret for value
        • Task execution role must have
          • ssm:GetParameters or secretsmanager:GetSecretValue
          • kms:Decrypt
          • Access to the KMS key if non-default
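
Registering a task definition that injects a secret as an environment variable might look like this with boto3 (family, image, and ARNs are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web",  # hypothetical
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",
        "secrets": [{
            # Injected as DB_PASSWORD at container start; the execution role
            # needs secretsmanager:GetSecretValue (and kms:Decrypt)
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-pass",
        }],
    }],
)
```
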
  • Service definition
    • Like a K8s Deployment + Service
    • Can add load balancer
  • Cluster modes
    • EC2 mode
      • Can use multiple AZs 
      • Use EC2 as container hosts in your VPC
      • Uses an Autoscaling Group
      • ECS provisions the instances but you manage them
        • Can use reserved, spot, etc.
        • You pay for the EC2 instances even if no containers are running on them
      • Use for large, price conscious workloads
    • Fargate mode if you want fully managed infrastructure
      • Runs on Fargate Shared Infrastructure
        • ENIs injected into VPC
        • If the subnet is public the containers will get public IPs
      • No patching, etc
      • Only pay for resources containers are using
      • For long-running container jobs, it can be more expensive than ECS on EC2
      • Use cases
        • Large workload + Admin overhead conscious
        • Small / burst workloads
        • Batch / periodic workloads
  • Service Autoscaling
    • ECS publishes Cloudwatch metrics
      • CPU utilization
      • Memory utilization
    • Scaling policies
      • Target tracking
      • Step scaling
      • Scheduled scaling
  • Launching tasks from Cloudwatch Events
    • ECS can be the target of a Cloudwatch Events rule
    • Can launch a task as the result of an API call
      • For example S3 object puts

Elastic File System – EFS

  • NFSv4
  • Private service – Runs in a VPC
    • Data is stored across multiple AZs in a region
      • Except One-Zone storage classes
    • Create a mount target in each AZ
      • Allocates an address in a specific subnet in the AZ
      • Mount target is the “address” that you mount in EC2
      • Each mount target has a security group
        • Must allow NFS port (2049)
        • Will get default VPC SG if not specified
    • Filesystem can be accessed concurrently from all AZs 
      • Application can fail over to different AZ in case of AZ outage
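
Creating the mount targets with boto3 (a sketch; the filesystem, subnet, and security group IDs are placeholders):

```python
import boto3

efs = boto3.client("efs")

# One mount target per AZ, each in a subnet of that AZ; the security
# group must allow inbound NFS (TCP 2049) from the clients
for subnet_id in ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]:
    efs.create_mount_target(
        FileSystemId="fs-0123456789abcdef0",
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )
```
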
  • Special EFS client software is only available on Linux
  • Supports thousands of concurrent connections
  • Use access points to manage application access
  • Can be accessed from on-premises via VPN or DX
  • Only pay for storage you use
  • Storage auto-expands up to petabytes
  • Read-after-write consistency
  • Performance modes
    • General Purpose
    • Max I/O 
      • Scales throughput at the expense of latency
  • Throughput modes
    • Bursting
      • EFS price is based on filesystem size
      • Credit-based system where larger filesystems get more throughput
      • When you are out of credits the throughput will be throttled to the baseline for the filesystem size
      • New filesystems get 2.1 TB of burst credits
    • Provisioned
      • If you need throughput greater than bursting 
      • And/or for longer periods of time
      • And/or for smaller filesystems
      • Charge is for storage (based on size) + provisioned throughput
  • Metrics
    • PermittedThroughput 
      • Current allowed throughput including burst credits
    • BurstCreditBalance
      • Values close to zero mean you are almost out of credits
      • You are probably saturating the available bandwidth for the filesystem
    • MeteredIOBytes
      • Write throughput + (1/3) Read throughput
    • TotalIOBytes
      • Write throughput + Read throughput
    • ClientConnections
      • Number of clients connected 
  • Storage classes
    • Standard
    • Infrequent Access
      • First-byte latency for reads/writes is higher
      • Per-GB retrieval fee
    • One-zone
      • Backups are enabled by default for one-zone classes
    • One-zone IA
    • Lifecycle policies a la S3
      • Do not apply for files < 128KB

Elastic Kubernetes Service – EKS

  • Great for hybrid architectures (some on-prem)
  • Can use Fargate

Elastic Beanstalk

  • PaaS
    • Bring your code, everything else is provided and managed
      • Capacity provisioning, load balancing, autoscaling, monitoring, log file access
      • Runs in multiple AZs
  • Automatically provisions EC2 instances
    • Can login to the boxes
  • Supports containers, Linux and Windows
  • Supports web apps built with Java, .NET, Node.js, Python, Ruby, Go, Docker
  • Automatically add RDS, ElastiCache, etc.
  • Can run in VPC
  • Automates deployments
    • Staging then Production
  • Application code stored in S3
  • Log files stored in S3 or Cloudwatch
  • Provide limited or full access to Beanstalk for IAM users
  • Runtimes are automatically upgraded for patch and minor version updates
    • You must manually initiate major version upgrades

ElastiCache

  • For read-heavy workloads
  • Reduces database workloads
  • Can be used to store session data
    • For stateless apps
  • Sub-millisecond access
  • Deployed into a VPC
  • Best to use memory-optimized EC2 instances
  • Application needs to know about the cache
  • Memcached
    • Key/value where values are strings
    • No replication or multi-AZ
      • Sharding only
    • No backups
    • Multi-threaded
  • Redis
    • Advanced data structures
    • Can be used as cache or DB
    • Multi-AZ replication
      • Scale reads
    • Supports backup/restore
    • Single-threaded
    • Supports transactions
      • Atomically apply multiple operations
    • Can enable AOF persistence, but read replicas / multi-AZ is preferred
    • Can require authentication
      • Include parameters --auth-token and --transit-encryption-enabled
  • IAM Auth not supported

Elasticsearch / OpenSearch

  • Fully managed version of Elasticsearch open source
    • Version 1.5 to 7.10 of Elasticsearch
    • OpenSearch is a fork
  • Use cases
    • Search server
    • “Third party logging solution” ELK stack
  • Input
    • Kinesis Firehose
    • Logstash
    • Elasticsearch/Opensearch API
    • OpenTelemetry or X-Ray traces
  • Deployment
    • Cluster is called a “Domain”
    • One, two or three AZs
      • Three is recommended
    • Can have dedicated masters
      • Distributed across AZs
      • Three is recommended
    • Upgrades
      • No downtime
  • Storage
    • Instance storage or EBS
    • 3 PB max with 200 nodes
    • UltraWarm
      • Low cost warm storage tier for older/less frequently accessed data
      • Uses S3
      • Read only
      • Interactive performance
      • Up to 3 PB
    • Cold storage
      • Detach indices from UltraWarm
        • Can reattach in seconds
      • Up to 3 PB 
  • Replication
    • Optional
    • Distributes primary and replica shards across instances in multiple AZs
    • Cross-cluster replication
      • Synchronize indices across clusters
      • Can go across regions
  • Snapshots
    • Automated hourly snapshots
    • Retained for 14 days
    • Deleted if cluster deleted
    • No charge
    • Can take manual snapshots
  • Logs
    • Error logs
    • Search slow logs
    • Indexing slow logs
    • Slow logs are enabled per-index
  • Trace Analytics
    • OpenTelemetry-compatible distributed trace storage and query

Elastic Load Balancer – ELB

  • Architecture
    • Highly available and scales automatically
    • Load balancer should reside in at least two subnets from different AZs
      • ALB is required to use at least two
      • Each subnet must be at least /27 with at least 8 free IP addresses
      • Depending on traffic the LB could scale up to 100 IP addresses across all subnets
    • Comprised of LB nodes, at least one per AZ
      • LB has a DNS name managed by Route 53
        • Route 53 returns addresses for all of the nodes in the LB
        • DNS round robin is used to LB between nodes
        • Use this as the origin for Cloudfront
      • By default each ALB node balances between all targets in all zones equally
        • See cross-zone load balancing below
    • Internet-facing or internal
      • Internet-facing
        • Nodes have public IP addresses
        • Can access public and private EC2 instances
      • Internal
        • Nodes have private IP addresses
        • Can access private EC2 instances
    • Cross zone load balancing
      • When enabled each node balances over all targets in all zones
      • When disabled each node balances across only targets in its AZ
      • With ALB, cross-zone is enabled by default
      • With NLB, cross-zone is disabled by default
  • Application Load Balancer – Layer 7
    • Pricing
      • Hourly rate plus
      • LCU (capacity) rate
        • New connections
        • Active connections
        • Processed bytes
        • Rule evaluations
    • Listener
      • HTTP/S or gRPC, Websockets
      • Checks for connection requests on a port and protocol
      • HTTPS listeners need a certificate to terminate SSL
        • Always terminates SSL – cannot pass through
      • Rules
        • Determines how LB routes requests to targets
        • Each rule consists of a priority, one or more actions, and one or more conditions
          • There is a default catch-all rule
        • Conditions
          • Host+port, path, query string, headers, method, source IP
        • Actions
          • Forward (to target group), redirect, fixed-response, authenticate (OIDC/Cognito)
    • Target groups
      • Types
        • EC2 instance, ECS task, EKS pod, Lambda, IP address
      • Each target group routes requests to one or more registered targets on port and protocol
      • A target can be a member of multiple groups
      • Addressed via rules
      • IP address targets
        • Instances in a peered VPC
        • On-premises resources
        • AWS resources that have address and port
    • Health checks
      • Configured on Target groups
      • TCP (default), HTTP/S 
        • HTTP/S checks are layer 7
    • Slow start mode
      • Allows a target to warm up before getting full load
      • Starts when target is healthy
      • Linearly increases load across the slow start duration
    • Authenticate users using OIDC, Cognito social sign-on or SAML/AD/LDAP
      • So apps don’t have to worry about it
    • No static IP addresses
      • The IP addresses change over time
        • This can cause issues for firewalls with outgoing rules
      • NLBs have static IPs per AZ
      • One option is to use NLBs to route to ALBs
    • WAF support
  • Network Load Balancer – Layer 4
    • Use when extreme performance is needed
      • Millions of requests per second
    • Supports TCP + UDP
      • But doesn’t understand layer 7 HTTP/S
        • No session cookies, no session stickiness
      • Forwards TCP to instances 
        • Including SSL direct to instances
    • SSL supported: need certificate to terminate at LB
    • Health checks
      • TCP (default), HTTP/S
    • Privatelink
      • The service end of a Privatelink is a NLB
      • The client end is an ENI in a subnet in a separate VPC
      • Allows an architecture where there is a single VPC that is internet facing
        • There is an ALB in the internet-facing VPC with Privatelink targets
        • The other end of the Privatelinks are applications in private VPCs
        • The application is exposed as an NLB in the private VPC
      • The alternative to this approach is VPC peering which has downsides
        • Cannot have overlapping CIDRs in VPCs
        • Communication is bidirectional between peer VPCs requiring admin overhead to lock down
    • Static IP per AZ, can use Elastic IPs
      • Can use for firewall whitelisting since the IP doesn’t change
      • This is unlike ALB where the IP does change
        • Can have ALB as a target of NLB as a workaround
    • Flow logs instead of access logs
    • Pricing
      • Hourly rate plus
      • NLCU (capacity) rate
        • New connections or flows
        • Active connections or flows
        • Processed bytes
  • Choosing between ALB and NLB
    • Choose NLB for 
      • Unbroken encryption
      • Static IPs
      • Fastest performance (millions RPS)
      • Privatelink
    • Otherwise choose ALB
  • Classic Load Balancer – Layer 4 / 7
    • Avoid
    • Doesn’t support SNI
      • Cannot have multiple certs per LB
      • Can consolidate multiple CLBs to one ALB
    • Supports TCP/SSL and HTTP/S
      • Terminates SSL
    • If EC2 classic is used must use classic LB
    • 504 error means application has not responded within idle timeout across all targets
    • Pricing
      • Hourly rate plus
      • Per GB rate
  • Gateway Load Balancer
    • Deploy and scale third-party virtual appliances
      • Firewalls, intrusion detection, etc
    • Transparent, inline network gateway
      • All data for VPC passes through it (in and out)
      • Via GWLB endpoints
        • Route table at internet gateway points to GWLB endpoint
        • Route table in subnets have default route of GWLB endpoint
      • Packets are tunneled to appliances using GENEVE protocol
        • After packet processed by appliance it is forwarded to destination in VPC
    • Balances across multiple appliances
    • Scales virtual appliances to meet demand
    • Pricing
      • Hourly rate plus
      • GLCU (capacity) rate
        • New connections or flows
        • Active connections or flows
        • Processed bytes
  • SSL
    • Can use Certificate Manager to manage certs
    • Server Name Indication
      • Multiple certs per load balancer
      • Don’t need to use wildcard certs
      • Supported by ALB and NLB
    • Types of LB SSL handling
      • Bridging
        • SSL terminated on the LB
          • Certificate stored on LB
        • LB initiates a new SSL connection to backend
          • Backend needs a cert as well
          • Crypto overhead can be significant
      • Passthrough
        • NLB-only
        • No decryption happens at LB
        • Backend instance must have cert
        • No certificate exposure to AWS
      • Offload
        • SSL terminated at the LB
        • LB connects to backend instances using HTTP
        • No certificate needed on the backend instances
  • Sticky sessions
    • Classic and ALB only
    • Use cookie to route requests from same user to same target (group)
      • AWSALB is the cookie name for ALB
      • Cookie duration is configurable, up to 7 days
    • If the target fails or the cookie expires, the LB re-routes to a healthy target and re-sticks there
    • For ALB it sticks to a target group
  • Deregistration delay / Connection draining
    • Keep existing connections open while a target is deregistering or unhealthy
    • Optional setting
  • Client IP address
    • X-Forwarded-For header 
      • Used by CLB and ALB
      • Has IP of requester
    • NLB preserves the source IP address
      • This can be disabled
  • Access logging is optional
    • Encrypted and stored in S3
  • Deletion protection
    • Cannot delete load balancer when it is protected
  • Security groups
    • Each load balancer has a security group in front of it
    • Allow only the application port(s)
    • Set target security groups to allow only the LB SG on the application and health check ports
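
In boto3 terms, the target security group references the LB's group instead of an IP range (group IDs and port are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Targets accept app traffic only from the load balancer's security group
ec2.authorize_security_group_ingress(
    GroupId="sg-0targets00000000",   # placeholder target SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": "sg-0loadbalancer000"}],
    }],
)
```
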

Elastic Map Reduce – EMR

  • Runs open source big data tools
  • Uses EC2
    • Can take advantage of spot instances, reserved instances, etc.
    • Lives in a single AZ in your VPC
  • Can also use EKS
  • Uses standby masters for HA
  • Supports notebooks and EMR Studio, an IDE
  • Different flavors
    • Hive
    • Hudi
    • Impala
    • Pig
    • Presto
    • HBase
    • Spark
  • Apache Airflow for scheduling

Eventbridge / Cloudwatch Events

  • Fastest way to respond to things happening in AWS
    • Near real time
  • Serverless event bus
  • Passes events from a source to an endpoint
  • EventBridge is basically Cloudwatch Events v2
    • AWS recommends moving to Eventbridge as a replacement for Cloudwatch Events
  • Any API call can trigger a rule
    • If X happens / or at Y time: Do Z
      • Y is in cron format
  • Creating a rule
    • Event pattern or scheduled
      • Service provider and name (source)
      • Event type
        • For resources that aren’t natively supported by Eventbridge you can choose
          • AWS API Call via CloudTrail
      • State of event
    • Select event bus: AWS account default or partner/custom
    • Select target(s)
      • Lambda, Kinesis, SQS, SNS, etc
    • Tag
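
A boto3 sketch of a pattern-based rule with a Lambda target (rule name and ARN are placeholders; the function also needs a resource policy allowing events.amazonaws.com to invoke it):

```python
import json
import boto3

events = boto3.client("events")

# Fire whenever an EC2 instance enters the "stopped" state
events.put_rule(
    Name="ec2-stopped",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["stopped"]},
    }),
)
events.put_targets(
    Rule="ec2-stopped",
    Targets=[{"Id": "1", "Arn": "arn:aws:lambda:us-east-1:111122223333:function:on-stop"}],
)
```
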

Event Driven Architecture

  • Producers
    • Generate events
  • Consumers
    • Consume events
    • Don’t need to be consuming CPU when idle
  • Event router
    • Has an event bus
    • Producers put events on the bus
    • Router routes events to appropriate consumers
  • A mature event-handling system only consumes resources when handling events
    • Aka serverless

Fargate 

  • Lightweight VM for running container-based workloads
    • One VM per container
    • VM spun up for container and destroyed after
      • Fargate is therefore considered “Serverless”
  • AWS manages infrastructure
    • For example, patching
  • No spot, reserved, etc.
    • Pay for resources allocated and time ran
      • Choose RAM and CPU at task creation time
    • More expensive than EC2
      • Pay for simplicity and fully managed experience
  • Primarily for short-running containers
  • Use vs. Lambda for more consistent workloads
  • Requires ECS or EKS
  • Linux only

Firewall Manager

  • Centrally manage firewall rules across accounts
  • Manage
    • WAF rules
    • Shield Advanced Protection
    • Security Groups
    • Network firewall rules
    • Route 53 DNS firewall rules
    • But not NACLs

FSx for Lustre

  • High performance, for
    • Machine learning
    • Big data
    • HPC
    • Financial modeling
    • Video processing
    • EDA, etc.
  • Hundreds of GB/s, millions of IOPS
  • Consistent sub-millisecond latency
  • Availability/Durability ranges from 99.1% to 99.9% depending on filesystem size
  • Automatic daily backups kept for 7 days by default or up to 35 days
    • When filesystem is deleted, auto-backups are deleted as well
    • Can also do manual backups
    • Can also set up AWS Backup jobs
  • Data at rest is always encrypted using KMS keys
  • Data in transit is encrypted from supported EC2 instances
  • Lives in a VPC and access controlled by security groups
    • ENI created in the filesystem’s AZ
    • Accessible via VPN / Direct Connect
      • Assuming you have the bandwidth
  • Linux clients only
    • POSIX compliant
    • Requires client software install or CSI driver for EKS
  • Filesystem can be accessed by thousands of clients
  • Can specify in AWS Batch Launch Template
  • Can optionally link to an S3 bucket, objects in bucket presented as files
    • Data is lazily loaded from S3 to Lustre
    • Multiple filesystems can link to the same S3 bucket
    • Must explicitly commit changes back to S3
  • Scratch filesystems vs. persistent filesystems
    • Scratch
      • Cost-optimized 
      • High performance 
      • Short-term 
      • No replication
        • Larger filesystems mean more servers/disks, greater chance of failure
    • Persistent
      • Longer term
      • HA within one AZ
      • Self-healing
  • Support user/group quota
  • LZ4 compression optional
  • Grow filesystem by manual request
    • Minimum size is 1.2TB, grow in increments of 2.4TB
  • Throughput is provisioned based on storage amount
    • Scratch
      • 200 MB/s per TB read
      • 100 MB/s per TB write
    • Persistent
      • 50, 100 or 200 MB/s per TB
      • Can burst using credit system
  • Metadata stored separately from payload data

FSx for Windows

  • Uses SMB
  • Supports any OS that supports SMB
  • Centralized storage for Windows-based apps like Sharepoint, SQL Server, IIS, etc.
  • Integrates with Directory Service or Self-Managed (on-prem) AD
    • Supports Windows permission model
  • Lives in a VPC
    • Supports VPC Peering, Transit Gateway, shared VPCs
  • Access from on-prem using DirectConnect or VPN
    • Can also use File Gateway for caching
  • Throughput scales based on filesystem size
    • 8 MBps to 2 GBps
    • Burst 600 MBps -> 3 GBps
    • < 1ms latency
  • Store up to 64 TB per filesystem
  • Data at rest always encrypted
  • Data in transit encrypted by SMB Kerberos (requires client that supports SMB 3.0+)
  • Supports volume shadow copies (VSS)
    • File level versioning
    • Right-click on a file, see previous versions, restore from a previous version
    • Not enabled by default
  • Supports Distributed File System (DFS)
    • Group shares into one common tree 
  • Automatic daily backups (incremental) stored in S3 kept for 7 days (by default)
    • Can take additional backups at any point
  • Data is replicated within single AZ
  • Also supports multi-AZ filesystems
    • Synchronous replication to a standby file server in another AZ
    • Automatic failover
    • Linux clients don’t automatically re-connect to the failover server 
  • Supports HDD or SSD
  • Optional Data Deduplication (supported by Windows)
  • User/group quotas

Global Accelerator

  • Manages two static IPs that are in front of other IPs that may go down or change
    • The IPs are global but will route to the AWS edge location closest to the source (anycast)
    • Solves IP caching issue
    • You can BYoIP
  • Routes traffic to a particular resource or set of resources (LBs, EC2s, Elastic IPs, etc)
    • Traffic goes from edge over Amazon’s internal network to reach resource
    • Can adjust region weight
      • Default is 100% for each region
      • Can use for A/B testing, deployment
    • Routes to optimal endpoint based on performance
      • Healthy endpoint that is nearest the user
    • Will failover to other region if needed in 30 secs
  • Endpoint groups
    • An endpoint group and the endpoints within it are in one region
    • GA directs traffic to endpoint groups based on the location of the client and the health of the endpoint group
  • Health checks
    • Uses ELB health checks for ELB endpoints
      • When using with ELB targets works like a global load balancer assuming multiple endpoint groups
    • Uses Route 53 health checks for EC2 endpoints and Elastic IPs
  • Move endpoints without changing DNS
  • Cannot use on-prem endpoints
    • But you can set up a NLB as an endpoint and have the target of that be on-prem
  • VPC access
    • Internet gateway needs to be attached to recipient VPC
    • Put resource(s) in private subnet
      • Does not need public IP
  • Can significantly improve latency and throughput
  • Client IPs optionally preserved for ALB and EC2 endpoints
  • TCP termination on edge for ALB and EC2 endpoints
    • Highly optimized TCP configuration inside of AWS network
  • Good fit for non-HTTP use cases like Gaming, IoT, VoIP
    • Or HTTP use cases that require static IP addresses or deterministic, fast regional failover
    • Cloudfront should be used for HTTP use cases
  • Protects against DDoS
    • Uses AWS Shield
  • Custom routing
    • Can route requests to a port to a specific EC2 instance IP and port across regions
      • Only supports VPC subnet endpoints containing one or more EC2 instances
      • Configure the set of EC2 instances when creating the VPC subnet endpoint
    • Custom accelerator contains map of ports to VPC subnet IP+port
    • Matchmaker can retrieve this map to decide what port to give to a user to route them to a particular box

Glue

  • Managed ETL service and Data Catalog
    • Hourly rate, billed per second for crawlers and ETL jobs
    • Fixed monthly fee for Data Catalog
  • You can use ETL and/or Data Catalog 
  • Crawlers 
    • Discover the structure of unstructured data
      • From any data source not just S3
    • Scan data sources and populate data catalog
      • S3, RDS, Redshift, DynamoDB, DBs running on EC2, Kinesis, Kafka
  • Generates Scala or Python code for ETL jobs
    • You can add more code
    • Run ETL jobs on cluster of DPU (data processing units)
  • Glue Data Catalog
    • Hive Metastore 
    • Unified view of all of your data
    • Can be used by multiple services like Athena, EMR, Redshift Spectrum
    • Validates schemas and safeguards schema evolution
    • Encryption at rest and in transit
  • Glue DataBrew
    • Visual tool for discovering structure and cleaning data
  • Is a component of AWS Lake Formation
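  • A minimal boto3 sketch of creating and starting a crawler that catalogs an S3 prefix (names, role and path are hypothetical):

import boto3

glue = boto3.client("glue")
glue.create_crawler(
    Name="sales-crawler",
    Role="arn:aws:iam::111122223333:role/glue-crawler",  # hypothetical crawler role
    DatabaseName="sales",
    Targets={"S3Targets": [{"Path": "s3://my-data-lake/sales/"}]},
)
glue.start_crawler(Name="sales-crawler")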

GuardDuty

  • Threat detection service
  • Continuously analyzes data from Cloudtrail, VPC flow logs, DNS logs for malicious behavior like
    • Unusual API calls or calls from a known malicious IP address
    • Attempts to disable CloudTrail logging
    • Unauthorized deployments
    • Compromised instances
    • Reconnaissance by would-be attackers
  • Uses AI to learn what normal behavior looks like and flag deviations
    • Takes 7-14 days to establish a baseline
  • Uses a database of known malicious parties
    • Regularly updated from external feeds from third parties like Proofpoint and CrowdStrike
  • Findings appear in the GuardDuty dashboard and CloudWatch Events
  • CloudWatch Events can be used to eg. trigger a Lambda function
  • 30 days free, otherwise price based on quantity of data
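  • Wiring findings to a Lambda could look something like this sketch (names and ARNs are hypothetical; the Lambda must also grant events.amazonaws.com permission to invoke it):

import boto3, json

events = boto3.client("events")
events.put_rule(
    Name="guardduty-findings",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
)
events.put_targets(
    Rule="guardduty-findings",
    Targets=[{"Id": "notify",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:notify-security"}],
)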

High Availability vs. Fault Tolerance vs. Disaster Recovery

  • High Availability
    • Aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period
    • Minimizing outages vs. never having outages – can still have disruptions
    • Measured in nines, eg. 99.9% uptime
    • Fast or automatic recovery
    • For example, standby systems
  • Fault Tolerance
    • The property that enables a system to continue operating properly in the event of the failure of some of its components
    • Operating through faults
    • Active/Active systems
    • Redundancy across all components
    • Much more expensive than HA
  • Disaster Recovery
    • A set of policies, tools and procedures to enable the recovery or continuation of vital technology infrastructure and systems following a natural or human-caused disaster
    • Pre-planning for disasters
    • Process for operating after a disaster
    • Backup premises
    • Backing up data to off-site location

Identity and Access Management – IAM

  • Accounts are a container for users, groups, roles and resources
    • Requires a credit card
    • Requires a unique email address
    • An account root user is created automatically which has full control over all resources in account
      • Should not use root user
      • Create another user and attach Administrator or Power User policy
        • Power User = Admin minus user/group management
      • Put MFA on root and any admin-level user
    • Other users created have no access initially
      • Must be granted access by root or another user with access
    • Account boundary limits blast radius of mistakes or malicious behavior
    • Can create a hierarchy of accounts using Organizations
  • Global service
    • Identity provider
    • Authenticate
    • Authorize
    • Free
    • IAM service instance dedicated to your account
    • Your account trusts IAM completely and it can act on your behalf
    • No direct control over external accounts, users or groups
    • Eventually consistent to other regions
  • Identity types
    • Users
      • Long term access eg. humans, applications, service accounts
      • Max 5,000 per account
        • IAM Roles and Identity Federation are the fix for this limit
      • Can be a member of 10 groups
      • New users don’t have any permissions or access keys
    • Groups
      • Container for users only
        • No groups within groups
      • Max 300 groups per account (can be raised)
      • There is no out of the box “All Users” group
      • Not a real identity
        • Cannot be referenced as a principal in a policy
        • But you can attach a policy to a group
    • Roles
      • Preferred option for giving permissions
      • You assume a role for a short period of time
        • A principal assumes the role and inherits the permissions of the role
      • Role has temporary credentials
        • Obtained using Security Token Service (see below)
      • Role has two policies and an optional permissions boundary
        • Trust policy
          • Has a Principal that defines which identities can assume the role
            • Can reference identities in the same or other accounts, AWS services, or even anonymous
          • Defines what conditions must be met for other principals to assume it
        • Permissions policy
          • What the role is capable of doing
        • Permissions boundary
          • Limits the permissions granted by permissions policies
      • When using a role, you temporarily set aside your current permissions
        • Permissions are not cumulative
        • OTOH, when using a resource policy you keep your permissions from your account
          • The resource policy permissions are added 
      • Service-linked role
        • Predefined by a service
          • One service-linked role per service that has one
            • Includes all of the permissions the service needs to call other AWS services on your behalf
        • Some services automatically create the role
          • Others ask you to create it in a setup process, or using IAM
      • Scenarios
        • Used when you need to grant an uncertain number of entities access to your resources
        • Don’t hardcode keys into code, assign roles to instances instead
        • A Lambda accesses your resources
          • Named “Execution Role”
        • An AWS service accesses your resources
          • Named “Service Role”
        • Need to grant applications running on an EC2 instance access to resources
          • Named “Service Role for EC2 Instance”
          • Can attach/detach roles to instances without restarting
        • A user in one AWS account accesses resources in your AWS account
          • Role for cross-account access
            • Role lives in resource-owning account
            • Uses a trust policy
          • Some resources support Resource Policies which can give access to other accounts without using a Role
        • An AWS user in your account needs access to resources they don’t normally have access to
          • “Break glass”
        • External identities via Federation
          • A third party “web identity” needs access
            • Google, Facebook
          • Authentication using SAML
            • Single sign-on using AD
          • You don’t get an IAM identity, but you get a role with temporary credentials
  • Policies
    • Attached to AWS identities
      • Users, Groups, Roles
        • Assign policies to groups rather than users
    • One or more statements
      • Each statement has
        • Effect
          • Allow or Deny
        • Principal
          • Identity that is allowed/denied access to resource
          • Found in trust policies and resource-based policies
          • Not present in IAM identity-based policies
          • Principals are
            • IAM users
            • Roles
            • Federated users
            • AWS service
        • Action
          • One or more API calls
          • Supports wildcards
        • Resource
          • One or more AWS resources
          • Use ARNs
            • Uniquely identifies an AWS resource
            • Format
              • arn:partition:service:region:account-id:resource-id
              • arn:partition:service:region:account-id:resource-type/resource-id
              • arn:partition:service:region:account-id:resource-type:resource-id
          • Supports wildcards
        • Condition
          • Optional
          • Specifies a boolean that must evaluate to true for the statement to be considered
    • Priorities
      • 1. Explicit Deny
      • 2. Explicit Allow
      • 3. Implicit Deny
        • Anything not explicitly allowed is denied (implicit deny)
    • Attached policies take effect immediately
      • Only attached policies have any effect
      • All policies are combined together to determine permissions
        • For example, policies attached to user, group and resource 
    • Types
      • Inline
        • Embedded in an IAM identity (user/group/role)
        • When you want to maintain a strict one-to-one relationship between policy and identity
          • For special / exceptional Allows or Denies
        • When changing must touch every identity that embeds it
      • Managed
        • Standalone objects
          • Attach to identities
        • Change policy and all identities that have it attached are changed
        • AWS managed (predefined)
        • Customer managed
          • Attached to principals in account
          • Changes to policy are seen by all principals
      • Resource-based policy
        • Attached to resources, not identities
    • Variables
      • Policy variables are placeholders in policies
      • For example
        • ${aws:username}
      • Value is substituted in before the policy is evaluated
      • Variables can be used in Resource and Condition elements
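      • For example, an inline policy that lets each user manage only their own access keys (a sketch; user and policy names are hypothetical):

import boto3, json

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="jane_doe",  # hypothetical user
    PolicyName="manage-own-keys",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["iam:CreateAccessKey", "iam:DeleteAccessKey", "iam:ListAccessKeys"],
            # ${aws:username} is substituted before the policy is evaluated
            "Resource": "arn:aws:iam::111122223333:user/${aws:username}",
        }],
    }),
)
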
  • Access keys
    • Long term credentials (they don’t automatically update)
    • A user can have zero, one or two access keys
      • New users don’t have any access keys 
    • Can enable/disable/add/remove
    • Parts
      • Access key ID: public part
      • Secret access key: private part
  • Security Token Service (STS)
    • Global service
      • Has regional endpoints to reduce latency
    • Used to request temporary credentials via roles or user federation
    • Creating a session using STS API
      • AssumeRole API
        • For cross-account delegation or federation through custom identity broker
      • AssumeRoleWithWebIdentity
        • Federation with a web-based identity provider (OIDC)
      • AssumeRoleWithSAML
        • Federation with SAML-based identity provider (AD, OpenLDAP, etc)
      • GetFederationToken
        • Used if you want to manage permissions in your organization
        • Can be used for SSO to AWS console
          • Pass returned credentials + session token to AWS Federation endpoint
          • Endpoint returns a token that you can use to construct a URL that signs a user directly into the console without requiring a password
      • GetSessionToken
        • Used in untrusted environments (mobile, web browser)
    • All of these calls (except GetSessionToken) can be passed a session policy to limit access
    • All of these calls return temporary credentials and a session token
      • Temporary credentials are an Access Key ID and a Secret Access Key
      • Both need to be provided when making API calls
    • If your credentials expire you can request new temporary credentials
    • Federated users can come from
      • Cross-account access
        • User from another account assumes a role in your account
        • User from another account accesses a resource in your account with a resource policy allowing that user
      • SAML-based federation
        • Active Directory especially
      • Mobile apps
        • OpenID Connect (OIDC) / OAuth
        • Google / Facebook / Amazon, etc
  • Cross account role access
    • Create role of type Another AWS Account
    • Attach policy with permissions to role
    • Create policy in other account giving permission to assume the role
      • Attach to/create inline for user/group
    • Use role switching to effect the access
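    • A sketch of doing the same switch programmatically (ARN and session name are hypothetical):

import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/OrganizationAccountAccessRole",
    RoleSessionName="jane-admin-session",
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# Use the temporary credentials against the other account
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
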
  • IAM Certificate Manager
    • Should only use in regions where ACM is not supported
    • Does not provision new certificates
    • Cannot use the console to manage certs, only the API
  • Query API
    • Perform actions in IAM using HTTP calls
    • Access key ID and Secret access key are required
    • For example:
https://iam.amazonaws.com/?Action=GetRole
&RoleName=S3Access
&Version=2010-05-08
&AUTHPARAMS
  • AUTHPARAMS are signed credentials
    • AWS recommends using SDKs to make API calls instead because the signing logic is complex
  • Returns XML
  • Federated access to AWS console
    • Using Identity Broker
      • Write code (called an identity broker) that lets users who are logged in to your network call AWS to generate a URL for signing in to the console
      • SAML-based identity providers can also be used to log users into the console without writing code
      • Identity broker logic
        • Verify that the user is authenticated by your local identity system
        • Call the AWS Security Token Service (AWS STS) AssumeRole or GetFederationToken API operations to obtain temporary security credentials for the user
        • Call the AWS federation endpoint and supply the temporary security credentials to request a sign-in token
        • Construct a URL for the console that includes the token
        • Give the URL to the user or invoke the URL on the user’s behalf
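        • A sketch of those steps in Python (assumes the requests package; user name and policy are hypothetical):

import boto3, json, requests, urllib.parse

sts = boto3.client("sts")
fed = sts.get_federation_token(
    Name="jane_doe",  # hypothetical federated user name
    Policy=json.dumps({"Version": "2012-10-17", "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}),
    DurationSeconds=3600,
)
creds = fed["Credentials"]

# Exchange the temporary credentials for a sign-in token
session = json.dumps({
    "sessionId": creds["AccessKeyId"],
    "sessionKey": creds["SecretAccessKey"],
    "sessionToken": creds["SessionToken"],
})
token = requests.get(
    "https://signin.aws.amazon.com/federation",
    params={"Action": "getSigninToken", "Session": session},
).json()["SigninToken"]

# Build the console sign-in URL and hand it to the user
login_url = (
    "https://signin.aws.amazon.com/federation?Action=login"
    "&Destination=" + urllib.parse.quote("https://console.aws.amazon.com/")
    + "&SigninToken=" + token
)
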
    • Using SAML
      • No custom code is needed
      • Configuration
        • Configure your IdP as a SAML provider for AWS
          • Configure the IdP to route requests for the console to the AWS SAML endpoint
          • Generate an IdP SAML metadata document
        • Create a SAML provider in IAM
          • Create the provider and upload the metadata document
        • Configure permissions in AWS for your federated users
          • Create a role that establishes a trust relationship between AWS and the IdP
          • Trust policy has the IAM SAML provider ARN as the principal and sts:AssumeRoleWithSAML as the action
          • The permission policy should contain the permissions you want the SAML authenticated users to have
        • Configure SAML assertions in IdP
          • Install the saml-metadata.xml file that you got from AWS in your IdP
          • Configure the attributes that you want the IdP to pass in the SAML assertion

Inspector

  • Perform vulnerability scans on EC2 instances and VPCs
    • Host assessments, network assessments
    • Can be run once, or weekly

IoT Core

  • Supports billions of devices and trillions of messages
  • Applications can keep track of and communicate with devices
  • Supports HTTP, MQTT, Websockets, LoRaWAN
  • All communications uses TLS
  • Clients must use strong authentication
  • Rules engine
    • Filters and transforms incoming data
    • Routes data to DynamoDB, Kinesis, Lambda, SNS, SQS, CloudWatch, Elasticsearch
  • Registry of devices
  • Device gateway
    • Pub/sub messaging between devices and applications
    • Scales automatically with usage

Kinesis

  • Basically Kafka
  • Public service
    • Create VPC endpoints to access without using public network
  • Message max size 1 MB
  • Each record has a partition key used to map record to shard
  • For large scale data ingestion with multiple concurrent consumers
  • Uses DynamoDB table per application to track shard leases
  • Family of products
    • Kinesis Data Streams
      • 1 MB/sec/shard ingest capacity
        • Or 1000 records/sec/shard
        • If this limit is exceeded the SDK will throw an exception
          • To avoid this, increase the number of shards (see scaling below)
      • 2 MB/second/shard output across all consumers
        • Enhanced fan-out improves that to 2MB/sec/shard per consumer
      • 5 read transactions per second per shard
        • Each read transaction can provide up to 10,000 records with an upper quota of 10 MB per transaction.
      • Real time 
        • ~200ms latency for classic
        • ~70ms for enhanced fan-out
          • Limit of 20 consumers
      • Retention
        • 24 hour rolling window – default
        • 7 days extended
        • One year long term
      • For custom applications
        • Must develop producers and consumers
      • Adapters for Cloudwatch and Spark
      • Requires manual scaling and provisioning
        • “Auto” scaling using Cloudwatch and Lambda
          • Choose scaling threshold (for example 80% = 800 records/sec/shard)
          • Create Cloudwatch alarm on IncomingRecords.Sum metric
            • Set interval to 5 minutes = 300 seconds
            • Set threshold to 300 * 800 * numShards
            • If the alarm is fired send SNS to Lambda
          • Create Lambda that is invoked when the SNS is sent
            • Call Kinesis UpdateShardCount API doubling the number of shards
            • Update the Cloudwatch threshold with the new numShards value
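            • A sketch of that Lambda (stream, alarm and SNS names are hypothetical):

import boto3

kinesis = boto3.client("kinesis")
cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    stream = "my-stream"  # hypothetical stream name
    shards = kinesis.describe_stream_summary(StreamName=stream)[
        "StreamDescriptionSummary"]["OpenShardCount"]
    new_count = shards * 2
    kinesis.update_shard_count(
        StreamName=stream, TargetShardCount=new_count, ScalingType="UNIFORM_SCALING"
    )
    # Re-arm the alarm: 300s interval * 800 records/sec/shard * new shard count
    cloudwatch.put_metric_alarm(
        AlarmName="kinesis-scale-up",
        Namespace="AWS/Kinesis",
        MetricName="IncomingRecords",
        Dimensions=[{"Name": "StreamName", "Value": stream}],
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=300 * 800 * new_count,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:kinesis-scaling"],
    )
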
      • Optionally encrypts on ingest
      • Supports replay
      • Troubleshooting
        • Shard iterator expires immediately before you can use
          • Probably means the DynamoDB shard-lease table doesn’t have enough write capacity
        • Consumer record processing falling behind
          • Inefficient application code or blocking application code
          • Metrics to check
            • GetRecords.IteratorAgeMilliseconds
            • MillisBehindLatest
            • RecordProcessor.processRecords.{Time, Success, RecordsProcessed}
    • Kinesis Firehose
      • Fully managed data delivery service
      • Plug and play with AWS services
      • Automatic scaling
      • Can automatically encrypt on ingest
      • Sources 
        • Kinesis Data Streams
          • One use case is to persist all data flowing through a Kinesis Data Stream to S3
        • Kinesis Agent
        • Kinesis Firehose API
        • Cloudwatch Logs
        • CloudWatch Events
        • AWS IoT 
      • Near real-time
        • Approximately 60 second latency
        • Buffers data up to 1 MB or 60 seconds and delivers the buffer to the destination
      • Can transform data using Lambda
        • But Lambda isn’t a destination for Firehose
        • Source records (pre-transform) can be logged to S3 bucket
      • Data transfer to load data into 
        • Redshift
          • For Redshift, buffer is stored to S3 and then Redshift COPY command pulls it into the database
        • S3
        • ElasticSearch
        • Splunk
      • No replay
    • Kinesis Data Analytics
      • Real time processing
        • Firehose source is near real time
      • Fully managed
      • Scales automatically
      • SQL processing of streaming data (streaming ETL)
      • Sources
        • Kinesis Data Streams
        • Firehose
        • Can also join reference data from S3
      • Destinations
        • Kinesis Data Streams 
        • Firehose
          • Near real time
        • Lambda
      • Kinesis Data Analytics Application
        • Takes stream(s) as input
          • Plus optional reference data
        • Runs SQL query against input
        • Can also write Java-based applications
        • Outputs stream(s)
          • There can also be an error output stream
      • Use cases
        • Streaming data needing real-time SQL processing
        • Time series analytics
        • Real time dashboards
        • Leaderboards for games
        • Real-time metrics for security and response teams
      • Uses Flink
    • Kinesis Agent
      • Standalone Java client that reads from files and writes to Kinesis Streams or Firehose
    • Kinesis Video Streams
      • Ingest, store, encrypt and index video
      • Stream directly from devices to Kinesis

Key Management Service – KMS

  • Regional, public service
  • Create and control encryption keys
  • Keys never leave KMS
    • Provides FIPS 140-2 level 2
  • Customer Managed Key (CMK)
    • Logical “container” for key
    • Backed by physical key material
      • Import your own key material or
      • Have key material generated and used in an AWS CloudHSM cluster as part of custom key store in KMS
      • Previous backing key material (due to rotation) are also stored with CMK
    • ID, date, policy, description and state
    • Can be used to encrypt/decrypt a maximum of 4KB of data
      • When encrypting, the cipher text contains a reference to the CMK, so you don’t need to specify the CMK when decrypting
    • Key is encrypted before stored on disk
    • CMK never leaves KMS
    • CMK lives in a specific region
    • Optionally supports rotation (once per year)
      • Keeps the old key material around so you can decrypt things encrypted with the older version
    • Alias
      • Shortcut to a CMK
      • Application can be configured with the alias name but the CMK the alias points to can change
    • Costs until you delete it
      • For keys that get rotated each new version costs extra
  • Data Encryption Key (DEK)
    • Used to encrypt/decrypt data > 4KB
    • Two versions generated by KMS
      • Plaintext version
        • Use to encrypt data – then discard key
      • Ciphertext version (encrypted using CMK)
        • Store this with the data
    • Called Envelope Encryption
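    • A sketch of the pattern using boto3 plus the cryptography package (key alias is hypothetical):

import base64, boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms")

# Encrypt: generate a DEK, use the plaintext half locally, keep the ciphertext half
dek = kms.generate_data_key(KeyId="alias/my-app", KeySpec="AES_256")
f = Fernet(base64.urlsafe_b64encode(dek["Plaintext"]))
ciphertext = f.encrypt(b"a payload much larger than 4KB is fine here")
stored_key = dek["CiphertextBlob"]  # persist this next to the ciphertext

# Decrypt: KMS unwraps the stored DEK (the blob references the CMK), then decrypt locally
plaintext_key = kms.decrypt(CiphertextBlob=stored_key)["Plaintext"]
data = Fernet(base64.urlsafe_b64encode(plaintext_key)).decrypt(ciphertext)
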
  • AWS Managed Keys
    • Keys in your account
    • Created, managed and used on your behalf
    • You can view the keys and their policies and audit their use in CloudTrail
    • You cannot make any changes to them
    • Cannot use them in crypto operations
    • Rotated every 3 years
  • AWS Owned Keys
    • Not in your account
    • AWS services use these to protect your data
    • You have no access to these
  • Key policy
    • CMK has exactly one Key Policy
      • Resource policy
    • To allow IAM policies to control access to a key, the key policy must have a statement that gives the AWS account full access to the key
"Effect": "Allow",
"Principal": {
  "AWS": "arn:aws:iam::111122223333:root"
},
"Action": "kms:*",
"Resource": "*"
  • Define key administrator(s) and user(s)
  • Default key policy
    • Enables IAM policies
    • Gives account root user full access to key
      • This includes users/groups with AdministratorAccess
    • Allows key administrators to administer key
      • Key administrators entered in console as part of creating key
    • Allows key users to use key
      • For cryptographic operations
      • With AWS services
      • Key users entered in console as part of creating key
  • IAM policy
    • If IAM policies are enabled by a key policy then identity policies can reference the key
      • You do not need to list the principal in the key policy
  • Endpoints
    • Local VPC access to KMS instead of over the internet
    • Can define endpoint policies to restrict
      • Who can access the endpoint
      • Which API calls they can make
      • Which resources they can access
    • Can define key policy to restrict access to key to over endpoint only
      • Condition 
        • aws:sourceVpce equals VPC endpoint ID
  • Grants
    • An alternative to a key policy
    • You can use grants to give long-term access to your CMKs to other principals
    • Each grant covers one key only
    • Has at least one grantee principal
      • Can be in a different account
    • Commonly used by services
      • Service creates grant on behalf of user, uses the permissions, then discards the grant
    • Grantees can use the grant without specifying it
      • Although it takes < 5 minutes for the grant to be visible everywhere
        • In the meantime you can pass the grant token (returned by the CreateGrant API) explicitly to use the grant
    • Grants can allow a subset of operations
      • View the key
      • Cryptographic operations
      • Create / retire grants
    • Grant constraint
      • Grant usage comes with an Encryption Context
        • Constraint can only allow usage if certain key-value pairs matched
    • Granting CreateGrant permission
      • Grantee principal can only create grants with a subset of permissions it was granted
      • Constraints must be at least as restrictive as in original grant
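    • For example, a sketch of creating a grant and using its token during the propagation window (ARNs are hypothetical):

import boto3

kms = boto3.client("kms")
key_arn = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

grant = kms.create_grant(
    KeyId=key_arn,
    GranteePrincipal="arn:aws:iam::999988887777:role/worker",  # can be another account
    Operations=["Encrypt", "Decrypt"],
    # Grant constraint: only usable with a matching encryption context
    Constraints={"EncryptionContextSubset": {"app": "payments"}},
)

# The grantee can pass the token explicitly before the grant is visible everywhere
kms.encrypt(
    KeyId=key_arn,
    Plaintext=b"hello",
    EncryptionContext={"app": "payments"},
    GrantTokens=[grant["GrantToken"]],
)
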
  • Custom key store
    • Use CloudHSM to manage key material
    • You can configure KMS to use CloudHSM as a key store
      • Choose which to use when creating a CMK
    • Keys never leave the CloudHSM
    • Operations done in CloudHSM
    • When to use
      • You might have keys that are explicitly required to be protected in a single-tenant HSM or in an HSM over which you have direct control.
      • You might have keys that are required to be stored in an HSM that has been validated to FIPS 140-2 level 3 overall (the HSMs used in the standard AWS KMS key store are either validated or in the process of being validated to level 2 with level 3 in multiple categories).
      • You might need the ability to immediately remove key material from AWS KMS and to prove you have done so by independent means.
      • You might have a requirement to be able to audit all use of your keys independently of AWS KMS or AWS CloudTrail.
    • Considerations
      • CloudHSM costs at least $1,000 per month regardless of whether you make requests
      • The number of HSMs determines rate at which keys can be used
      • Configuration errors could cause keys to be unusable
        • Need to make CloudHSM HA by setting up a cluster
      • Operationally complex
  • KMS vs. CloudHSM
    • KMS
      • Shared tenancy of hardware
      • AWS has access to hardware
      • FIPS 140-2 Level 2 overall compliance
      • Controlled using IAM mechanisms (policies, etc)
      • Automatic key rotation
      • Automatic key generation
      • Integrated with AWS services
      • Cannot do SSL/TLS offload
    • CloudHSM
      • Dedicated, single-tenant physical hardware device
      • Runs in one AZ
        • Need to create a cluster to make HA
        • HSM replicates data between cluster members
        • Elastic network adapters injected into VPC
      • AWS provisions, fully customer managed
        • AWS has no access to secure area where key material is stored
        • AWS updates the firmware
        • AWS backs up the HSM cluster daily
          • Encrypted and stored in S3
          • Backed up
            • Users
            • Key material and certificates
            • Hardware security module (HSM) configuration and policies
      • FIPS 140-2 Level 3 overall compliance
      • Cloud HSM client installed on EC2 instances
      • Controlled using industry standard APIs
        • PKCS#11
        • Java Cryptography Extensions (JCE)
        • Microsoft CryptoNG (CNG)
      • Full control of underlying hardware
      • Full control of users, groups, keys, etc
      • No automatic key rotation
      • No integration with AWS services (eg. S3 SSE)
      • Can offload SSL/TLS processing for web services
      • Can use for Oracle Transparent Data Encryption (TDE)
      • Can use to protect private keys for issuing Certificate Authority (CA)

Lambda

  • Serverless functions
    • 10 GB RAM max
    • 15 minutes max
      • If a task runs in less than 15 minutes default to using Lambda for the exam
    • 1000 concurrent executions per account per region
      • Can ask to have this raised (but not by much)
      • Will get Rate exceeded or 429 “TooManyRequestsException” errors if exceeding limit
    • Deployment package
      • 50 MB zipped, 250 MB unzipped
    • Free tier
      • 1M requests per month
      • 400,000 GB-seconds (RAM)
  • Only billed for the duration of run
  • Great way to “add features to AWS”
  • Uses a language runtime
    • For example, Python 3.8
    • Comes with a small amount of temporary disk space 
  • Lambda@edge
    • Functions that run as part of Cloudfront
      • See Cloudfront for details
  • Permissions
    • Execution role
      • Gives function access to AWS services and resources
    • Resource policy
      • Grants permissions to AWS accounts / services to invoke or manage function
      • Can only be changed using CLI or API
    • Identity-based policies
      • Managed permissions granted to users/groups
        • AWSLambda_FullAccess
        • AWSLambda_ReadOnly
        • AWSLambdaRole (invoke only)
    • Can scope any of these to a set of Lambdas
    • Can set a permissions boundary on a set of Lambdas
  • Environment variables
    • Can configure using console, CLI or API
    • Some predefined variables
      • AWS_ACCESS_KEY_ID
      • AWS_SECRET_ACCESS_KEY
      • AWS_SESSION_TOKEN
    • Securing environment variables
      • AWS recommends using Secrets Manager instead of environment variables
      • At rest encryption
        • Uses AWS managed key by default 
          • Free
        • Can use Customer managed key instead
      • Security in transit
        • Enable helpers for encryption in transit
        • Copy policy and add to execution role
        • Copy decryption code snippet and add to function to decrypt environment variables in the function
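        • Roughly what that snippet does (a sketch; variable name is hypothetical and assumes no encryption context was set):

import base64, os, boto3

kms = boto3.client("kms")

def handler(event, context):
    encrypted = os.environ["DB_PASSWORD"]  # base64-encoded ciphertext
    password = kms.decrypt(
        CiphertextBlob=base64.b64decode(encrypted)
    )["Plaintext"].decode()
    # ... use password ...
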
  • Can run inside or outside a VPC
    • By default runs in public zone
      • Access to DynamoDB, S3, SQS, SNS, etc
      • Public internet
    • Can be configured to be a “VPC Lambda”
      • Only needed to access a resource in a VPC
        • RDS, Elasticache, etc
        • APIs provided by apps running on EC2
      • Lambda still runs in its own “Lambda VPC”
      • Must connect Lambda VPC to account VPC
        • Creates a network interface in VPC
          • Called Hyperplane ENI
          • Acts as a NAT from the Lambda VPC to the account VPC
        • Multiple functions can share the same Hyperplane ENI
          • If the functions use the same subnet and security group
        • If Lambda cannot share the ENI it will create one ENI + allocate one subnet IP address per invocation
          • You can run out of ENIs or addresses and get one of these errors
            • EC2ThrottledException
            • Client.NetworkInterfaceLimitExceeded
            • Client.RequestLimitExceeded
              • Went over creation limit of ENIs per second
          • You should spread your Lambda over as many VPC subnets as possible to mitigate
            • At least one subnet per AZ
      • Hyperplane ENI lifecycle
        • Creation (for a new subnet/security group pair) takes several minutes
        • When changing a function VPC configuration
          • Any existing invocations continue to use the old ENI
        • If the Lambda remains idle for consecutive weeks
          • Lambda reclaims the ENI and sets function state to Idle
        • Each Hyperplane ENI supports 65,000 connections/ports
          • If more are needed Lambda creates an additional ENI
        • When you remove the VPC configuration from a function
          • If no more functions reference the ENI
            • It is deleted
            • Takes up to 20 minutes 
        • There is an “ENI Finder” tool that you can run to identify all Hyperplane ENIs in use in your account
      • ExecutionRole must have AWSLambdaVPCAccessExecutionRole permission
      • If internet access is needed VPC must have a NAT GW + IGW
      • If access to public services needed (S3, DynamoDB)
        • Use gateway endpoint
    • Otherwise if running outside a VPC you have full access to the internet
      • Would need to use S3 or DynamoDB for storage
  • Logging
    • Logs from Lambda executions go into Cloudwatch Logs
      • Requires AWSLambdaBasicExecutionRole in Execution Role
    • Metrics stored in CloudWatch
    • X-ray integration for distributed tracing
  • Invocation
    • Synchronous
      • CLI/API invokes and waits for a response
      • API Gateway proxies web request/response
      • Errors or retries have to be handled in the client
    • Asynchronous
      • Typically used when AWS services invoke lambda functions
        • For example, S3 event is generated
      • If processing fails, Lambda will retry between 0 and 2 times
        • If max retries reached, event can be sent to Dead Letter Queue
        • Lambda supports Destinations where successful or failed events can be sent
          • SQS, SNS, Lambda, EventBridge
      • Lambda function needs to be idempotent to support retries
    • Event Source Mapping
      • Used for streams or queues that don’t support event generation to invoke Lambda
        • Kinesis, DynamoDB streams, SQS
      • Kinesis
        • Lambda processes one batch at a time from each shard
        • You can map a Lambda to a stream or to a consumer of a stream
          • Stream iterator
            • Lambda polls each shard at a base rate of once per second
            • Keeps processing batches until it is caught up with the stream
            • Shares read throughput with other consumers of the shard
          • Consumer
            • Use a consumer with enhanced fan-out to get a dedicated connection to a shard
            • Records are pushed to the Lambda
            • Minimizes latency and maximizes read throughput
      • Event Source Mapping reads/polls source and sends event batches to Lambda
        • Batches either succeed or fail as a whole batch 
      • Uses permissions from Execution Role to read from source
      • SQS / SNS can be used for any failed batches
    • Common triggers
      • Manually
      • Scheduled (EventBridge)
      • S3
      • Kinesis
      • API Gateway
      • Eventbridge
        • Any API call can kick off Eventbridge rule
  • Versions
    • Lambda functions can have multiple versions
    • Code + configuration of lambda
    • Each version is immutable and has its own ARN
    • $Latest points at the latest version
    • Aliases (DEV, STAGE, PROD) point at a specific version and can be changed
  • Execution Context
    • The environment a lambda function runs in
    • A cold start is a full creation and configuration of execution context
      • Approximately 100ms
    • A warm start uses a pre-existing execution context
      • Approximately 1-2ms
      • Pre-existing context could have /tmp data populated or even variables in the code pre-initialized
    • Can’t assume a warm context is available
    • A spike of concurrent executions could create multiple new contexts
    • Provisioned concurrency
      • AWS will create and keep X contexts warm and ready to use
  • Reserved concurrency
    • Guarantees the minimum number of concurrent instances for a function
    • No other function can use that concurrency
    • Also limits the concurrency so your function can’t scale out of control
    • No charge
  • Provisioned concurrency
    • Pre-initializes a number of execution environments
    • To avoid cold starts
    • Counts towards Reserved concurrency
    • Integrates with Application Autoscaling to manage provisioned concurrency based on a schedule or utilization
    • Costs 
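    • Setting both with boto3 looks roughly like this (function and alias names are hypothetical):

import boto3

lam = boto3.client("lambda")

# Reserve (and cap) 100 concurrent executions for this function
lam.put_function_concurrency(FunctionName="checkout", ReservedConcurrentExecutions=100)

# Keep 10 execution environments initialized for a published alias
lam.put_provisioned_concurrency_config(
    FunctionName="checkout", Qualifier="PROD", ProvisionedConcurrentExecutions=10
)
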
  • Step functions
    • A Lambda function should do one thing well
    • Lambda functions can be chained together to bypass 15 minute rule
      • But because there is no state shared between invocations this isn’t that useful – not scalable
    • Step functions are used to create state machines
    • Standard and Express workflow types
      • Standard:  can run for up to one year
      • Express
        • Designed for high volume workloads, streaming, etc.
        • Can run for up to five minutes
    • Can be triggered by
      • API gateway, IOT rules, EventBridge, Lambda, manual, etc
    • Amazon States Language (ASL)
      • Definition language
    • IAM role used for permissions
    • State types
      • SUCCEED / FAIL – termination state
      • WAIT – waits for a duration or until a specific time
      • CHOICE – chooses the next state based on input
      • PARALLEL – runs multiple branches in parallel
      • MAP – accepts a list of things, does an action on each item 
      • TASK – performs a unit of work via
        • Lambda, Batch, DynamoDB, ECS, SNS, SQS, Glue, other Step Functions
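  • For example, a sketch of a tiny two-state machine defined in ASL and created via the API (names and ARNs are hypothetical):

import boto3, json

sfn = boto3.client("stepfunctions")
definition = {
    "Comment": "Process then succeed",
    "StartAt": "Process",
    "States": {
        "Process": {
            "Type": "Task",  # a TASK state backed by a Lambda
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process",
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},  # termination state
    },
}
sfn.create_state_machine(
    name="order-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/stepfunctions-exec",
    type="STANDARD",  # or "EXPRESS" for high-volume, short-lived workflows
)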

Macie 

  • Data Security and Data Privacy service
  • Discover, monitor and protect data stored in S3 buckets
  • Used for HIPAA and GDPR compliance, preventing identity theft
  • Automated discovery of PII, PHI, financial data, keys, etc
  • Create discovery jobs
    • Scheduled
    • Managed data identifiers built-in using AI
    • You can create custom data identifiers using regular expressions
      • Plus Keywords, Maximum match distance and Ignore words
    • Findings are generated
      • Policy findings
        • Bucket configuration like cross-account access, public access, encryption, etc
      • Sensitive data findings
        • Credentials, Custom identifier, Financial, Personal, Multiple
      • Sent as events to EventBridge
  • Macie alerts can be sent to Eventbridge and on-prem event management systems
  • Integrates with Security Hub
  • Centrally manage, either via AWS Organizations or one Macie AWS account inviting others
  • Automate remediation actions using eg. Lambda

Migration Evaluator

  • Install agents in your network
  • Analyzes compute footprint and utilization
  • Send data to AWS
  • It creates an assessment that recommends how to migrate to AWS cost-effectively

Migration Hub

  • Tracks progress of migration of servers/databases to AWS
    • “Single pane of glass”
  • Moving VMs to AWS using Server Migration Service
    • Creates EBS volumes ultimately for the new EC2 instances
  • Moving Databases to AWS using Database Migration Service
    • Schema migration tool
    • Move to RDS or Aurora

Message Queue – MQ

  • Fully managed ActiveMQ or RabbitMQ
  • Supports standard APIs like JMS, NMS, AMQP, STOMP, MQTT, Websocket
  • Integrated with 
    • CloudWatch – metrics
    • CloudWatch Logs – broker logs
    • CloudTrail – log API calls
    • CloudFormation – IAC for brokers
    • IAM – authz for API
    • KMS – encryption keys
  • Use when you are already using ActiveMQ or RabbitMQ
    • No need to rewrite code
  • Supports AWS KMS keys, AWS managed keys, or Customer managed keys
  • Storage
    • EFS – High durability and replication across multiple AZs
    • EBS – High throughput
  • Supports ActiveMQ network of brokers

Neptune

  • Graph database

OpsWorks

  • Managed Puppet or Chef 
  • Can manage AWS or on-prem servers

Organizations

  • Free
  • Manage accounts and a hierarchy of organization units
  • Management account 
    • Ultimate owner of the organization
    • Has the payment method for the organization
      • “Consolidated Billing”
      • Also consolidates reservations and volume discounts
    • Top level account used to create/remove other accounts, invite accounts to join organization
    • Can apply policies to accounts or organizational units
    • Lives in the Organization root OU
  • Member account
    • Part of the organization
    • Can live in an organizational unit
    • A good way to isolate resources and workloads
      • For example, Dev, Staging, Prod
      • Reduces blast radius
    • Management account must invite member to org if the account already exists
      • Otherwise the account can be created directly within the org
  • Organizational Unit
    • A group of AWS accounts
    • Can contain other OUs
    • Organizational Root is the top level OU
    • OUs inherit the policies attached to parent OUs
  • Role Switching
    • Accessing other accounts
    • When you create an account in the organization
      • AWS creates a role called OrganizationAccountAccessRole in the account
        • Has full administrative access in the created account
        • Scoped to all principals in the Management account
    • If account was invited to organization then you need to create a role inside the invited account
      • Named “OrganizationAccountAccessRole” by convention
      • Type of trusted entity: Another AWS account
      • Account: Management account
      • Assign a display color in the console, eg. red for Production, yellow for Development
    • To switch using the AWS console
      • Select Switch Roles from the account menu
      • Enter the account number of the account to switch to
  • Policies
    • Can apply policies at the OU level and they get inherited by everything within the OU
  • Service Control Policies
    • Control permissions globally
    • SCPs are account permission boundaries
      • They do not grant permissions
      • They specify the maximum permissions allowed for the member accounts
      • Only actions that are both allowed by the SCP and granted by an identity policy are allowed for that identity
    • Can apply to organization, OUs or individual accounts
      • Inherited down tree
    • The only way to control the root user of member accounts
      • Cannot control the management account or its root user
    • Resource must always be “*”
    • Strategies
      • Deny list
        • All actions are allowed by default, but specific actions/services can be prohibited
        • This is the least administrative overhead because as AWS adds new features they are allowed without needing to do anything
        • This is the default when you first enable SCPs in your organization
      • Allow list
        • All actions are prohibited by default, but specific actions/services can be allowed
        • High administrative overhead but more secure
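    • For example, a sketch of creating and attaching a deny-list SCP with boto3 (OU ID and denied actions are illustrative):

import boto3, json

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-dangerous-actions",
    Description="Deny-list guardrails",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["organizations:LeaveOrganization", "cloudtrail:StopLogging"],
            "Resource": "*",  # SCP resource must always be "*"
        }],
    }),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11112222",  # hypothetical OU
)
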
  • Logs
    • Can configure CloudTrail to put all logs in a single account
    • Use SCP to control access to the logs
  • Sharing Reserved Instances
    • Reserved instances can be shared across accounts for billing purposes
    • Create one billing account for the RIs
  • Moving accounts between organizations
    • Remove account from old organization
      • If it is the management account, delete the old organization
    • Invite account to new organization
    • Accept the invite

Parameter Store

  • Part of AWS Systems Manager
  • Hierarchical storage for configuration data
    • /wordpress/DBUser
  • Integrated with many AWS services
    • Cloudformation, Lambda, EC2, etc
  • Use CLI or APIs to access AWS Public Area endpoints
  • Store as plain text or encrypted data
    • Integrates with KMS
      • Default key: alias/aws/ssm
        • The default key has Decrypt permission for all IAM principals within the AWS account
  • Value types
    • String
    • StringList
    • SecureString
  • Can version parameters
  • Integrated with IAM for security
  • Public parameters
    • AWS provided values like AMI references
  • Changes can create events
  • Limited to 10,000 parameters (up to 4KB each) in the standard (free) tier
    • Advanced tier allows far more parameters, up to 8KB each
  • No rotation or password generation
  • Free
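  • For example, storing and reading a SecureString parameter (a sketch; names and value are hypothetical):

import boto3

ssm = boto3.client("ssm")
ssm.put_parameter(
    Name="/wordpress/DBPassword",
    Value="s3cret",
    Type="SecureString",  # encrypted with alias/aws/ssm unless KeyId is given
    Overwrite=True,
)
value = ssm.get_parameter(
    Name="/wordpress/DBPassword", WithDecryption=True
)["Parameter"]["Value"]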

Quicksight

  • Business Intelligence visualization and ad-hoc analysis
    • Like Tableau
  • Access using browser or mobile devices
  • Data sources
    • Redshift
    • RDS, Aurora
    • S3 
    • Athena
    • Upload spreadsheets
    • Connect to on-premises databases and SaaS apps like Salesforce
  • Automatically discovers data sources from AWS services
  • Prepare / transform to clean up data

RAM – Resource Access Manager

  • Share AWS resources with other accounts in your AWS Organization, for example
    • Transit gateways
    • VPC subnets
      • aka Shared VPCs
    • License manager
    • Route 53 resolvers
    • Dedicated hosts
  • Prevents having to duplicate resources
  • Must enable in Organizations
  • RAM vs VPC peering
    • Sharing within region?  Use RAM
    • Across regions?  Use VPC peering

Relational Database Service – RDS

  • For OLTP workloads
  • Managed service
    • Security patching
    • Software updates
    • Automated backups
    • Easy scaling for storage and compute
  • Types
    • SQL Server, MySQL, Postgres, MariaDB, Oracle, Aurora
  • RDS instance
    • Accessed using CNAME
    • Contains one or more databases
    • Types
      • General purpose (m)
      • Memory optimized (r)
      • Burstable (t)
    • Storage is allocated with the instance
      • gp2 (default), io1 or Magnetic
    • Billed 
      • Hourly for compute
      • GB/month for storage
    • Security group
      • Controls access to the instance
    • Subnet group
      • Include all of the subnets that RDS can use to place the instance
      • Only one will be used for the primary (you choose the AZ)
      • If Multi-AZ is used another subnet will be used for that
    • Can make publicly accessible
      • There is an option that you can enable when creating the instance
      • It will need a security group that allows access from the public IP address of your application (or NAT)
  • Scaling options
    • These scaling options complement each other
    • Scale up
      • You can increase the storage and change the storage type
        • For all DBs except SQL Server
          • For SQL Server you need to create a new instance from a snapshot instead
      • You can change the instance type to a larger one
        • Will cause downtime
    • Multi-AZ
      • For high availability (failover)
        • NOT for scaling performance
      • Within a single region
      • No free tier
      • Extra cost for standby
      • Standby instance in another AZ
        • Cannot be used for queries
        • CNAME points at primary – cannot even access standby
      • Automatic failover to standby
        • CNAME switched to standby
        • 60-120 seconds to occur
        • Causes
          • AZ outage
          • Primary failure
          • Manual failover
            • Choose Reboot from Console/CLI/API selecting optional Failover
          • Instance type change
          • Software patching
      • Synchronous replication
      • Backups done from standby
      • Upgrades
        • Primary and standby are upgraded at the same time
      • Use this in all Production and Prod-like environments
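      • For example, triggering a manual failover via reboot-with-failover (instance name is hypothetical):

import boto3

rds = boto3.client("rds")
rds.reboot_db_instance(DBInstanceIdentifier="prod-db", ForceFailover=True)
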
    • Multi-region
      • For disaster recovery and local performance
      • Asynchronous replication
      • All regions are accessible for reads
        • But only one for writes
      • Automated backups can be taken in each region
      • Each region can have a multi-AZ deployment
      • Upgrades
        • Version upgrade is independent in each region
    • Read replicas
      • Used for scaling read performance
      • Asynchronous replication
      • Same AZ, cross-AZ, cross-region supported
        • Cross-region not supported for SQL Server
      • Multiple read replicas supported
        • Up to 5 per instance for MariaDB, Oracle and Postgres
        • You could potentially replicate from a read replica to another, but the lag could get large
      • Upgrades
        • Read replica upgrades are independent from primary
      • Promotion
        • Read replica can be promoted to standalone
        • Near zero RPO, low RTO
        • Watch out for data corruption getting replicated
          • Malware for example
        • Retains the DB parameter group and backup settings
      • Multi-AZ Read Replicas
        • When you create a read replica you can configure it as a Multi-AZ instance
        • RDS will create primary and standby instances, with the standby in a separate AZ
        • The primary synchronously replicates to the standby
      • If you delete a DB instance that has read replicas
        • Read replicas in the same region are promoted to a standalone DB instance
          • Same with cross-region read replicas for MySQL and Oracle
          • You have to delete the promoted instances if you want them gone
        • For Postgres, when the source DB instance for a cross-Region read replica is deleted, the replication status of the read replica is set to terminated
          • The read replica isn’t promoted. 
          • You have to promote the read replica manually or delete it.
  • Backup and Restore
    • RPO: Recovery Point Objective
      • Time between last backup and the incident
      • Amount of maximum data loss
      • Generally lower values cost more
    • RTO: Recovery Time Objective
      • Time between failure and full recovery
      • Influenced by process, staff, tech and documentation
      • Generally lower values cost more
    • Types of backups
      • Manual snapshots
        • Scope is the entire RDS instance
          • All databases
        • Stored in AWS managed bucket
          • You do not have access to this bucket
            • But you can export a snapshot to a bucket you do manage
        • First snap is full, then incremental
        • Size of consumed data
        • Done against standby in Multi-AZ setup
        • Snapshots do not expire, even if instance is deleted
        • You can share snapshots with other accounts in the same region
      • Automatic backups
        • Snapshots that occur automatically
        • Occur during backup windows
        • No charge for the storage used by automatic backups
        • Every 5 minutes transaction logs are stored to S3
        • Retained for 0 to 35 days (7 by default)
          • Even when deleting database
          • So it’s important to take a manual snapshot at delete time
    • Restores
      • Creates a new RDS instance with a new CNAME
      • Restores of automated backup snapshots leverage the transaction log dumps to get a 5 minute RPO
        • Backup is restored and transaction logs are replayed from backup snapshot time forward
        • You cannot do this type of point-in-time restore with a manual snapshot, it only works with automatic backup snapshots
      • Restoring is slow – think about RTO
        • Better to failover to multi-AZ replica or promote a read-replica
      • The default parameter group and security group are used for the new instance
        • Plan ahead and back these up along with the data 
  • IAM database authentication
    • Works for MariaDB/MySQL and Postgres only
    • Centrally manage access to multiple databases 
    • Create database user that uses token

CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';

  • User/role must have policy attached that allows access to a specific DB+user
    • Multiple IAM users/roles can have the same policy attached
  • Then user/role calls API with DB+user to get a token

TOKEN="$(aws rds generate-db-auth-token --hostname $RDSHOST --port 3306 --region us-west-2 --username jane_doe )"

  • Each token has a lifetime of 15 minutes
  • Application uses DB username and token to authenticate

mysql --host=$RDSHOST --port=3306 --user=jane_doe --password=$TOKEN

  • For MySQL they recommend this only for temporary, personal access and only for workloads that can be easily retried
  • Forces SSL client connections
  • Windows integration authentication
    • For SQL Server only
    • Need to use AWS Managed AD
    • SQL Server joined to the trusting domain
  • Storage
    • Increasing storage
      • Usually doesn’t impact performance
      • Must increase by at least 10%
  • Storage autoscaling
    • If enabled
      • When free available space is less than 10% of allocated space 
      • For more than 5 minutes
      • And more than six hours has passed since the last autoscaling
    • More storage is added, whichever is greater:
      • 5 GB
      • 10% of allocated storage
      • Storage growth prediction for 7 hours based on FreeStorageSpace metric
    • You can set a maximum storage threshold to prevent autoscaling from automatically adding too much storage
      • Default is 1000GB
  • Enhanced monitoring
    • Metrics
      • General metadata
      • CPU utilization
      • Load average
      • Disk I/O
      • Filesystem utilization
      • Memory utilization
      • Network utilization
      • Processes
      • Swap space
  • Event Notification
    • You can subscribe to events generated by RDS using SNS
      • You can also use EventBridge to get the events
    • Using the API or CLI you can query the events for the last 14 days
      • Or using the console for the last 1 day
  • Upgrades to the database require downtime
    • Even for multi-AZ deployments
      • Primary and standby both upgraded
  • Parameter changes require reboot
  • OS upgrades/patching
    • Made to the standby
    • Standby promoted 
      • Reboot primary
    • Old primary becomes standby
  • Encryption
    • SSL/TLS in transit
      • RDS configures all instances with certificates
      • Clients must download the regional RDS CA bundle from AWS and install it in the OS certificate store and potentially the Java cacerts trust store
      • SQL server 
        • Set parameter rds.force_ssl=true to force all client connections to be SSL
          • Reboot instance to take effect
        • Append encrypt=true to connection strings
      • MySQL / MariaDB
        • mysql --ssl-ca=/path/to/CA-bundle.pem --ssl-mode=VERIFY_IDENTITY
        • You can require SSL for specific DB users
          • ALTER USER 'ssl_user'@'%' REQUIRE SSL;
      • Postgres
        • Set parameter rds.force_ssl=true to force all client connections to be SSL
          • Reboot instance to take effect
        • Add ssl_mode=verify-ca to connection strings
    • Can encrypt at rest using KMS keys
      • CMK generates data keys and they are stored on host
      • Handled at the host/EBS level
    • If database is encrypted so are
      • Snapshots
      • Backups
      • Read replicas
    • If replica is in another region it will use the key for that region
    • Cannot disable encryption once enabled
    • Cannot encrypt an existing database
      • Create a snapshot 
      • Copy it
      • Encrypt the copy
      • Create a DB from the encrypted snapshot
    • RDS MSSQL and Oracle support Transparent Data Encryption (TDE)
      • Encryption handled in DB engine
        • AWS never sees unencrypted data
      • RDS Oracle supports Cloud HSM
        • Much stronger key controls
  • Parallel Query
    • Push down query across thousands of CPUs
    • For analytical workloads requiring fresh data
    • Can speed up queries up to two orders of magnitude
    • Not all queries benefit
    • If enabled, query optimizer will automatically use it
      • Could be higher costs since it won’t use the buffer pool
        • So don’t always turn it on.  Test workloads to see if there is a benefit
  • RDS Proxy
    • Connection pool service
    • Highly available over multiple AZs
    • Can queue or throttle incoming connections so the database doesn’t get overwhelmed
    • Automatically handles failover
      • You don’t need to repoint your application
  • Endpoints
    • If the RDS instance is in another VPC you can create an endpoint in your VPC to access it
      • Or you can create one in the same VPC to provide more control over the access 
    • Fine grained access control using endpoint policy
    • Read/write endpoints and read-only endpoints
  • Invoking Lambda from the database
    • Setup
      • Create role granting RDS access to Lambda
      • Allow outbound connections to Lambda
      • Grant access to Lambda to DB user
    • Invoking
      • MySQL – call one of
        • lambda_sync(function_arn, json_payload)
        • lambda_async(function_arn, json_payload)
      • There are similar functions for Postgres
      • They recommend creating a stored procedure that calls lambda_async/sync
      • For Change Data Capture scenarios you can create a trigger that calls the stored procedure
        • The lambda can then write to a Kinesis data stream
    • Limitations
      • Invoking a lambda has a latency of tens of milliseconds, even if calling the async version
      • Kinesis Firehose is limited to 5,000 records per second 
  • Aurora
    • Lives in VPC
    • Based on a cluster
    • Can act like MySQL or Postgres
      • Five times faster than MySQL
      • Three times faster than Postgres
    • Database restart in under 60 seconds
      • Buffer cache is stored out of the database process
      • No need to refill when restarting after a crash
    • SSL is used between server and application
    • Encryption works the same as RDS
    • Single primary instance + 0 or more replicas
      • Replicas can be used for both HA and reading
      • Only the primary instance can write; the replicas can be used for reading
    • Upgrades
      • MySQL version can do automated in-place major version upgrades
      • You can enable automated minor upgrades
    • Storage
      • No local storage – uses cluster volume
        • Faster provisioning
        • Improved availability
        • Better performance
      • Automatically grows from 10GB up to 128TB in 10GB increments
      • All SSD based – high IOPS, low latency
      • Storage is billed on what’s used – high water mark
        • Storage space which is freed up is reused by new data
          • If you free up space and don’t reuse you (currently) have to create a new Aurora instance to get a lower high water mark
        • Replicas can be added and removed without storage provisioning
      • If a section of storage is corrupted it is repaired automatically using the data from the other replicas
        • No need to restore from backup, do a failover, etc
    • Costs
      • Aurora doesn’t support micro instances – no free tier
      • Beyond RDS single AZ Aurora provides better value
      • Compute
        • Hourly charge, billed per second, 10 min minimum
      • Storage
        • GB-month consumed (high water mark), IO cost per request
        • 100% of DB size in backup storage is included in storage price
    • Endpoints
      • Cluster endpoint
        • Points at writer replica and can be used for read and write
        • Same endpoint can be used after failover
      • Reader endpoints
        • Load balance across all read replicas
      • Custom endpoints
        • Can include any subset of the replicas
    • Backup and Restore
      • Work similarly to regular RDS
      • Restores create a new cluster
      • Backtracking
        • A way to rewind an Aurora database to a particular time for queries, or for restoring to a point before corruption occurred (see the boto3 sketch after this list)
        • Must configure window for how far back to support backtracking
        • Aurora saves extra data during regular operation to support
      • Fast clones
        • Creates a new database much faster than copying all the data
          • Uses copy-on-write
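
A sketch of backtracking with boto3, assuming a cluster named my-cluster that was created with a backtrack window (the name and rewind time are hypothetical):

from datetime import datetime, timedelta, timezone
import boto3

rds = boto3.client("rds")

# Rewind the cluster to 10 minutes ago (must be within the configured window)
rds.backtrack_db_cluster(
    DBClusterIdentifier="my-cluster",
    BacktrackTo=datetime.now(timezone.utc) - timedelta(minutes=10),
)
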
    • Two copies of data are contained in each AZ with minimum of three AZs. 2×3 = 6 copies.
      • The extra copies do not cost extra
      • Each AZ contains a copy of the cluster volume data
      • Can lose up to two replicas without affecting write availability
      • Can lose up to three replicas without affecting read availability
    • Aurora Replica types
      • Aurora
        • Asynchronous replication
        • Up to 15 replicas share the same underlying volume in the same region
          • Replication lag in 10s of milliseconds
        • Low impact on primary
        • Will automatically failover with no data loss
          • Flips DNS CNAME
          • Available within 30 seconds
          • Cluster endpoint and reader endpoints will automatically adjust
        • For high availability and AZ outages
      • MySQL / Postgres
        • Asynchronous replication
        • Up to 5 replicas in separate regions
          • Replication lag in seconds
        • High impact on primary
        • Manual failover with potentially minutes of data loss
        • Can define subset of data to replicate
      • Global Database
        • Physical replication done at storage layer
          • Latency under a second
          • No impact on primary performance
        • Replicate up to 5 secondary regional clusters
        • For low-latency global reads and disaster recovery from regional outages
          • Can add up to 16 read replicas in each secondary region
        • Manual failover
          • Secondary region can be promoted in less than a minute
          • Data loss: seconds
          • Will need to point application to new primary region
      • If you don’t have an Aurora replica and a failover is needed, Aurora will create a new DB instance in the same AZ
        • May not succeed if AZ is failing
      • Automated backups turned on by default
        • For point-in-time recovery from 5 minutes in the past to any time
      • You can also take snapshots
        • No performance impact
        • You can share Aurora snapshots with other AWS accounts
          • Eg. get a copy of prod data in Dev
        • You can choose to make a snapshot when deleting an Aurora DB
          • Only snapshots are retained after the DB is deleted
    • Aurora Serverless
      • Provides a relatively simple, cost-effective option for infrequent/intermittent/unpredictable workloads
        • Good for new applications
        • Development and test databases
        • Multi-tenant applications
          • Incoming load aligned with incoming revenue
      • Same resilience as Aurora
        • 6 copies across AZs
      • On-demand: Scale-from-zero, scale-to-zero
        • Config setting to pause compute capacity after X consecutive minutes of inactivity
        • Might not scale from zero fast enough – in that case you can set a minimum capacity (see the sketch after this list)
          • Took about 30 seconds when I tested it with WordPress with min=1, max=2 ACU
      • Pay per second
      • ACUs – Aurora Capacity Units
        • Allocated from a warm pool
        • Cluster has a min/max ACU
        • Cluster adjusts based on load
      • Shared proxy fleet for connections
        • Application connects to proxy which connects to an ACU
      • Can restore an Aurora snapshot to an Aurora Serverless instance
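
A sketch of creating a Serverless (v1) cluster with min/max ACUs and auto-pause via boto3; the identifier and credentials are placeholders:

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",  # placeholder
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me",               # placeholder
    ScalingConfiguration={
        "MinCapacity": 1,                         # ACUs
        "MaxCapacity": 2,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,             # pause after 5 idle minutes
    },
)
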
    • Aurora Multi-master
      • For fault-tolerance (continuous availability)
      • Scale out write performance across multiple AZs in a region
        • Maximum number of 4 instances
        • To maximize performance there should be many concurrent write operations
        • Single query performance is generally lower than standard Aurora
      • All instances are read/write
      • By default read-after-write is only guaranteed for the same instance
        • Can set Global Read-After-Write (GRAW) for full read consistency
      • Uses Paxos-like quorum system for writes
        • Once a write is accepted, the change is replicated to all instances 
          • A quorum has to acknowledge writing the data for the write to succeed
      • Lots and lots of limitations
        • Max cluster size is 128 TB
      • Recommended workload types
        • Active-Passive
          • Use one instance for all read/writes
          • If the instance stops responding, switch to another instance
            • No failover needed
        • Active-Active
          • Application needs to be structured to minimize write conflicts
            • If write conflict happens it will roll back the transaction
              • For example, using sharding or multi-tenancy
          • Application controls which instances are used for which writes
            • It manages all of the connections 

Rekognition

  • Image and video analysis using ML

Redshift

  • Columnar database
  • OLAP for data warehousing and BI applications
  • Is relational but not designed for OLTP workloads
    • Connect with JDBC/ODBC
  • Can store up to 16 PB of data
  • Sub-second response times
  • Redshift Spectrum
    • Include S3 data in your Redshift queries
      • Supports Avro, Parquet, ORC, CSV/TSV, JSON, etc.
    • Scales out to thousands of instances if needed
    • Serverful version of Athena
  • Federated query against other databases
  • Server-based (not serverless)
  • Single AZ cluster
    • Not highly available
    • Runs in separate, isolated network
    • Leader node 
      • Query input, planning and aggregation
    • Compute node
      • Performs queries
      • Divided into multiple slices
        • Number of slices is based on machine size
        • All slices work in parallel
        • Work assigned to slices by leader based on (optional) distribution key
      • Data replicated to one other node when written
      • You can scale the compute nodes up or out
  • Snapshots
    • Automatic snapshots to S3
      • Every 8 hours or 5 GB with configurable retention period 
        • Default retention 1 day, up to 35 days
      • Can configure automatic snapshots to be copied to another region
        • Separate configurable retention period
    • Can also create manual snapshots
      • No retention period
    • Sharing snapshots with other accounts
      • Only works for manual snapshots
        • You can copy an automatic snapshot to a manual one
      • Add the account that you want to share with to the snapshot settings
        • The snapshot will show up in the other account’s console/CLI/etc
      • In the destination account, restore a cluster from the snapshot
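
A sketch of that flow with boto3 – copy an automated snapshot to a manual one, then grant the other account restore access; identifiers are hypothetical:

import boto3

redshift = boto3.client("redshift")

# Automated snapshots can't be shared, so copy to a manual snapshot first
redshift.copy_cluster_snapshot(
    SourceSnapshotIdentifier="rs:my-cluster-2021-12-06-00-00-00",  # hypothetical
    TargetSnapshotIdentifier="my-manual-snapshot",
)

# Grant the destination account restore access
redshift.authorize_snapshot_access(
    SnapshotIdentifier="my-manual-snapshot",
    AccountWithRestoreAccess="123456789012",      # destination account
)
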
  • Getting data in and out
    • LOAD and UNLOAD from S3
    • COPY from DynamoDB
    • DMS can migrate data in/out
    • Kinesis Firehose can stream data in
  • Enhanced VPC routing
    • Optional feature to route all network traffic through VPC
    • Use regular VPC features
      • Security groups, NACLs, VPC endpoints, endpoint policies, internet gateways, and DNS

Route 53

  • Global service 
    • Single database
    • HA across all regions
  • DNS serving and registrar
    • Can take up to 3 days to register a name
  • Hosted zones
    • Zones managed on AWS name servers
    • Hosted on four managed name servers
    • Can be public or private 
    • Public Hosted Zones
      • Accessible from the public internet and VPCs
        • At the VPC+2 address if the resolver is enabled 
      • Each zone hosted on 4 Route 53 name servers
      • Can register a domain elsewhere and use Route53 to manage the DNS
      • Monthly cost to host the zone plus small charge per query
    • Private Hosted Zone
      • Associated with VPCs and only accessible from those VPCs
      • Use console/CLI/API to associate the zone with a VPC in your account
        • Use CLI/API to associate the zone with a VPC in a different account
      • Can have split-view (overlapping public/private zones)
        • Inside associated VPCs both public and private names can be resolved
      • Associated VPCs use the VPC+2 resolver address
      • Use VPC DHCP Option Set to set same domain name for instances in VPC
  • Alias records
    • Similar to a CNAME but can also point to the root (apex) of a domain
    • Can route to AWS resources
      • It recognizes changes in the underlying resource and responds appropriately
      • Uses TTL of underlying resource or record
    • Resource types
      • API Gateway
      • VPC interface endpoints
      • Cloudfront distribution
      • Elastic Beanstalk environment
      • S3 bucket
      • Other DNS record
    • Responds to queries if the requested record type matches the underlying record type
      • For example, create an A record alias for S3 buckets, ELB, etc
      • When responding to dig or nslookup, the alias looks like the underlying record
    • No charge for queries to alias pointing to AWS resource
      • But CNAMEs are charged
  • Routing policies
    • Simple routing
      • One record (for a name) with multiple addresses
      • Returns one of the addresses randomly
      • No health checks
    • Failover routing
      • Active-Passive failover
        • All other policies besides Simple are called Active-Active failover
      • If the active fails health check, will failover to passive
        • Could be a static S3 site
      • Can have multiple primary and/or secondary resources
    • Weighted routing
      • Send percentage of traffic to resource
        • Add weights and divide by total to get percentage
      • For simple load balancing or testing new software versions
      • Zero-weight records
        • If any non-zero records are healthy the zero records will be ignored
        • If all non-zero records are unhealthy the zero records will be used
          • Essentially works as active-passive failover
        • If all records are zero then all are considered to be active
      • If chosen record is unhealthy then it is discarded and the selection process is repeated until a healthy record is chosen
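
A sketch of a 90/10 weighted split using boto3; the zone ID, record name, and addresses are placeholders:

import boto3

route53 = boto3.client("route53")

def weighted_record(set_id, weight, ip):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,  # distinguishes records with the same name
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",    # placeholder
    ChangeBatch={"Changes": [
        weighted_record("current", 90, "203.0.113.10"),
        weighted_record("canary", 10, "203.0.113.20"),
    ]},
)
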
    • Geolocation routing
      • IP check determines the location of the user
      • Records are tagged with location
        • Default (world), continent, country, or subdivision (state)
      • Tries to find a record matching the user’s location starting with subdivision/state, then country, then continent
        • If no matching location then the record tagged Default is returned if it exists, otherwise NXDOMAIN
      • Use cases
        • Localized (language-specific) sites
        • Regional restrictions
        • Load balancing across regional endpoints
    • Multivalue answer routing
      • Supports multiple records with the same name
      • Up to 8 healthy records are returned
        • If more than 8 exist, 8 random records are returned
      • Health checks can rule out some addresses
    • Latency-based routing
      • Routes to the region with the lowest latency for the user
        • Returns record which offers the lowest estimated latency and is healthy
      • Supports one record for each name in each AWS region
      • Use when optimizing for performance and user experience
    • Geoproximity routing
      • Aims to find the record with the lowest distance to the user
      • Records can be tagged with an AWS region or lat/long
      • Bias parameter
        • Bias grows or shrinks a region compared to its usual size
        • A larger bias reduces the distance to a region
        • For example, I should have been routed to us-east based on my location but I got routed to London instead because it was heavily biased
    • Use Traffic Flow to create hierarchy of records that combine multiple routing policies
      • For example, you might create a configuration in which latency alias records reference weighted records, and the weighted records reference your resources in multiple AWS Regions. 
      • Each configuration is known as a traffic policy.
      • Visual editor to construct traffic policies
  • Health checks
    • If you are routing to AWS resources that support Alias records
      • Specify Yes for Evaluate Target Health
      • Don’t assign health checks to records
    • Can assign to any record except for Simple routing
    • Separate from, but used by records
    • Health checkers are located globally
    • If 18%+ of health checkers report as healthy, the health check is healthy
    • Checks every 30s by default (every 10s costs extra)
    • Types
      • Endpoint: TCP, HTTP/S, or HTTP/S with string matching
      • Status of other health checks (calculated)
      • State of Cloudwatch alarm
    • Can configure SNS notifications for failures
    • Can customize health checker regions
  • Interoperability
    • Route53 has two functions
      • Domain registrar
      • Domain hosting
    • Route53 can do both for a domain or either one
    • Usual steps for “both”
      • R53 accepts your money (domain registration fee)
      • R53 allocates 4 name servers (hosting)
      • R53 creates a zone file on each of the name servers (hosting)
      • R53 communicates with the registry of the TLD (domain registrar)
        • Registers the 4 name servers for the domain 
        • Pays fee to the TLD registry organization
    • R53 is only used for domain registration
      • Not common
    • R53 is only used for zone hosting
      • More common
      • Either you already have a legacy domain registered with another company or you have to use another company for registration for some other reason
  • On Premises 
    • Inbound Endpoint: for resolving AWS resources from on-premises
    • Outbound Endpoint: for resolving on-premises names from AWS

Simple Storage Service – S3

  • Public service
  • 11 9s durability
  • Files up to 5TB
  • Unlimited storage
  • Buckets per account
    • 100 by default
    • 1,000 maximum with service limit increase
    • Use prefixes to divide buckets
  • Unlimited objects per bucket
  • Global namespace
    • But storage is regional
    • https://bucket-name.s3.region.amazonaws.com/key
    • Bucket name must be unique globally
      • 3-64 chars
      • Must start with a lower-case letter or number
      • Cannot look like an IP address eg. 1.1.1.1
    • Key is object name, eg. fred.jpg
      • Key can have prefix that looks like a directory path
        • path/to/fred.jpg
  • Read after write consistency
  • Successful upload: 200 status code
  • Buckets are private by default
  • Pricing for Standard storage class
    • Storage per GB-Month
    • Per 1000 operations/requests
    • Transfer
      • Per GB out of S3
      • No data transfer charges for data transferred in from the internet
    • Free tier
      • 5 GB storage
      • 20,000 GET requests
      • 2,000 PUT requests
  • Block Public Access
    • By default new buckets and objects do not allow public (anonymous) access
    • Block Public Access settings
      • BlockPublicAcls
        • PUT calls fail with new public ACLs
      • IgnorePublicAcls
        • PUT calls with public ACLs succeed but objects are not made public
      • BlockPublicPolicy
        • Disallows new bucket policies that have public access
      • RestrictPublicBuckets
        • Blocks public and cross-account access to a bucket with public policy
  • Bucket Policies and Object ACLs
    • To allow public access you must allow public access to both the bucket and the object
      • Object ACLs to make individual objects public
      • Bucket policy to make bucket public
    • Bucket policy is a form of a Resource Policy
      • Like identity policies but attached to a resource (eg bucket)
      • Unlike an identity policy, a resource policy has a Principal
        • Can Allow/Deny different accounts
        • Can Allow/Deny anonymous principals. (Principal: “*”)
    • Bucket policies enforce, eg.
      • Grant permissions to accounts / users
      • SSL requests only
      • Require certain ACLs on PUT
        • For example, grant cross account permissions to objects but bucket owner has full control
      • Require MFA
      • Limit access to / block specific IP addresses
    • Bucket can only have one policy, but it can have multiple statements
    • Access to a bucket combines the identity policy and the bucket policy
    • ACLs are a deprecated mechanism, replaced by bucket policies
    • Identity policies
      • Policies created in IAM to control access to S3 
      • By default only the root user of the account that created the bucket can access that bucket
      • Identity policies can only affect identities in the same account
  • Object Ownership
    • By default, objects are owned by the uploading account
    • Can set Object Ownership property on bucket to Bucket Owner Preferred
      • If object is uploaded with ACL bucket-owner-full-control then bucket owner will own object
      • Can enforce with a bucket policy that denies PUT unless that ACL is present
  • Access points
    • Named endpoints attached to buckets
      • Each endpoint has distinct permissions
      • A way to customize access differently for different users based on URL
    • Multi-region Access Points
      • Global endpoint to fulfill requests for S3 buckets located in multiple regions
      • Provides a simple regional UI to buckets in multiple regions
      • Routes requests to S3 over global accelerator network
  • Versioning
    • Versioning is enabled at the bucket level
    • When disabled created objects will get null version ID
    • When versioning is enabled
      • Objects added will get a version ID – the “current” version
      • Deletes will add a delete marker
      • You can delete the delete marker
      • You can delete a version by specifying the version ID
        • If it is the current version the next most recent version becomes current
    • Once versioning is enabled it cannot be disabled, only suspended
    • When suspended
      • Objects added will have a null version ID
        • If a non-null ID object version already exists the null ID version will be added as the latest
        • If a null ID version already exists it will be replaced
      • Null version ID object versions will be deleted
        • If a non-null ID object version exists a delete marker will be added
    • Can be used as a backup tool
    • You are billed for all versions
    • A bucket can be configured to require MFA for 
      • Deleting a version
      • Changing the versioning state
      • Pass both MFA serial number and MFA code in API call
  • Storage classes
    • S3 Standard
      • 99.999999999% (11 9s)
      • >=3 AZs
      • MD5 checksums and CRCs are used to detect and fix any data corruption
      • No retrieval fee, no minimum duration, no minimum size
      • First byte latency in milliseconds
      • Minimum duration of 30 days before moving to IA
      • Frequently accessed data which is important and non-replaceable
    • S3 Standard Infrequent Access (IA)
      • Same as Standard, except
      • Cheaper than Standard
      • Minimum billable duration of 30 days
      • Minimum billable size of 128KB per object
      • Adds a retrieval fee
      • Long term critical data
    • S3 One Zone Infrequent Access
      • Same as IA, except
      • 1 AZ
      • Cheaper than IA
      • For long-lived, non-critical/replaceable, infrequently accessed data
    • S3 Glacier
      • For archival purposes (cold objects)
      • Same as Standard, except
      • 20% of the cost of Standard
      • Cannot be made publicly accessible
      • Must run a retrieval process to access objects
        • Objects copied to IA temporarily
        • Retrieval options (faster is more expensive)
          • Expedited: 1 to 5 minutes
          • Standard: 3-5 hours
          • Bulk: 5-12 hours
      • 40KB minimum billable size
      • 90 day minimum billable storage
      • Automatic server-side encryption is enabled on all objects (AES-256)
        • You have a choice of keys that can be used but cannot choose whether encryption is enabled or not.
      • Retrieval
        • Asynchronous
          • Initiate a retrieval
          • Wait for data to be ready
            • SNS or polling
          • Download data
          • Data is available for 24 hours
        • Provisioned Capacity
          • Ensures that the capacity for an Expedited retrieval is there when you need it
          • Purchase if your workload requires highly reliable and predictable access to the archive data
          • Each unit of capacity ensures that at least 3 Expedited retrievals can be performed every 5 minutes and provides up to 150 MB/s of retrieval throughput
            • Lasts for one month
        • Ranged Archive Retrievals
          • You can optionally specify a range of the archive to retrieve
          • Scenarios
            • Spread the download across the 24 hours after the data is ready
            • Download a subset of data
    • S3 Glacier Deep Archive
      • For data that you might never need to access again (frozen objects)
      • Same as Glacier, except
      • 25% of the cost of Glacier
      • 180 day minimum billable duration
      • Retrieval options (faster is more expensive)
        • Standard: 12 hours
        • Bulk: 48 hours
    • S3 Intelligent Tiering
      • Automatically moves objects to best tier
        • Frequent
        • Infrequent 
        • Archive
        • Deep Archive
      • Optionally, configuration can be set on buckets using prefixes and tags to move objects into the archive tiers
        • Archive tiers won’t be used by default
      • Charge per 1,000 objects for automatic management
        • Otherwise the cost is basically the same as the tier 
      • For long-lived data with changing or unknown usage patterns
  • Selecting storage class using REST API
    • Use the x-amz-storage-class header
  • To ensure that data is not corrupted on upload
    • Pass the MD5 hash in the Content-MD5 header
    • Alternatively compare the ETag header value in the response
  • Create a copy of an object
    • PUT with x-amz-metadata-directive: COPY
  • Using POST instead of PUT
    • To support browser-based uploads
    • Parameters are passed as POST form fields instead of headers
  • Lifecycle management
    • A set of rules on a bucket
    • Rule properties 
      • Name
      • Scope
        • All objects or subset based on tags/prefixes
      • Transitions
        • For current and/or previous versions
        • List of 
          • After X days, do transition to class C
        • Transitions can only flow “down” to colder storage classes
          • Standard
          • Standard IA
          • Intelligent Tiering
          • One-Zone IA
          • Glacier
          • Glacier Deep
        • Be careful about small objects due to minimum storage sizes
          • Could be more expensive to store small objects in Glacier than in IA
        • Object must stay in a class for 30 days before transitioning to IA
      • Expiration
        • Deletes current or previous versions after X days
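
A sketch of such a rule in boto3 – transition the logs/ prefix down to colder classes, then expire; the bucket name and day counts are arbitrary:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",                    # placeholder
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-logs",
        "Filter": {"Prefix": "logs/"},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},       # delete after a year
    }]},
)
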
  • S3 Object Lock / Glacier Vault Lock
    • Makes buckets / objects read-only
    • Governance mode: cannot overwrite or delete an object version or lock settings without special permissions
    • Compliance mode: same as Governance but even the root user cannot override it
    • Glacier Vault lock policy
      • Once set cannot be changed
      • Can set how long objects must be read only
      • Can deny deletion based on a tag (for example, legal holds)
  • Encryption
    • Metadata is not encrypted
      • Objects are encrypted, not buckets
    • HTTPS in transit
    • Client-side encryption
      • Encrypted / decrypted at client
      • You need to manage the keys and which key goes with which object
    • Server side encryption at rest
      • SSE-C: Customer provided key
        • Must track which key was used for which object
        • Hash of key stored with object, actual key discarded
          • Key hash returned with object on GET
        • S3 does the encryption using AES256
      • SSE-S3 (aka AES-256)  
        • Default encryption type
          • Convenient but not applicable for some compliance regimes
        • Master key managed by Amazon and rotated regularly
        • No way to audit usage
        • Data key created per object to encrypt that object
          • Data key is encrypted with master key and stored with object
      • SSE-KMS
        • Can use either Amazon managed key or CMK
          • If CMK used, you can control the permissions and rotation to satisfy compliance 
          • For example, even if someone has full admin access to S3, they may not have access to the CMK
          • Can also audit access to CMK
          • Can enable automatic rotation every year
        • KMS provides S3 with two copies of a Data Encryption Key 
          • One encrypted by the CMK, one not
        • The unencrypted DEK is used to encrypt the object and the encrypted DEK is stored with the object
        • Use bucket keys (one KMS key per bucket) to keep the KMS expense down
          • The bucket key will be used instead of a CMK to generate DEKs
          • KMS operations are limited.  Region specific: 5,500/10,000/30,000 requests per second depending on the region.  
    • Objects are encrypted on PUT when header is passed
      • x-amz-server-side-encryption={AES256,aws:kms}
    • Can require server-side encryption using a bucket policy
    • Default encryption can be set on a bucket so that the header doesn’t have to be passed
      • You can pass a header to override the default
      • SSE-S3 or SSE-KMS supported
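
A sketch of a PUT requesting SSE-KMS (the SDK sets the x-amz-server-side-encryption header for you); bucket and key ARN are placeholders:

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-bucket",                    # placeholder
    Key="secret.txt",
    Body=b"hello",
    ServerSideEncryption="aws:kms",        # or "AES256" for SSE-S3
    SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/placeholder",
)
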
  • Performance
    • Prefix: the directory part of the path
      • Use multiple prefixes to scale performance
      • 3500 PUT/COPY/POST/DELETE requests per second per prefix
      • 5500 GET/HEAD requests per second per prefix
    • Single PUT upload
      • Must use for uploads < 100MB
      • If there is a failure midway through the whole upload fails
      • Doesn’t use the full bandwidth
    • Use multipart uploads to parallelize – recommended for files >100MB and required for files >5GB (see the sketch after this list)
      • 10,000 max parts ranging from 5MB to 5GB
      • If a part fails just that part is restarted
      • More likely to use full bandwidth
    • Use byte-range fetches to parallelize downloads
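
A sketch of letting the SDK handle multipart automatically – boto3's transfer manager splits and parallelizes uploads above the threshold; file and bucket names are placeholders:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100MB
    max_concurrency=10,                     # parallel part uploads
)

s3.upload_file("big-file.bin", "my-bucket", "big-file.bin", Config=config)
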
  • Transfer acceleration
    • Requests are routed from edge over Amazon’s internal network
    • Bucket name must not contain periods (.) and must be DNS-compatible
    • Unique endpoint generated for the accelerated bucket
    • The greater the distance from the upload location to the bucket the greater the benefit of Transfer Acceleration
    • Compared to Cloudfront
      • Cloudfront is for caching at the edge
      • Transfer acceleration is for faster access directly to the buckets especially for uploads
      • Transfer acceleration should be used for higher throughput
        • For objects or data sets under 1GB, CloudFront PUT/POST performs better
  • Replication
    • Cross- or Same-Region Replication
      • Configuration stored on source bucket
      • Destination bucket could be in different account
      • Encrypted in transit
      • Permissions
        • Same account
          • Role gives access to both buckets
        • Different accounts
          • Role gives access to source, bucket policy on target allows role
    • Replication options
      • All objects or a subset
      • Storage class for replicas – default is same as source
      • Ownership – default is same account as source
      • Replication time control (RTC)
        • SLA is 99.99% replicated within 15 minutes
        • Also provides metrics and notifications
    • Existing objects in source bucket are not replicated automatically when enabling replication
    • Source and destination buckets must have versioning enabled
    • One way replication
    • Handles unencrypted, SSE-S3 and SSE-KMS encrypted objects
    • Source bucket owner needs permissions on objects
    • System events are not replicated
    • Delete markers are not replicated by default
      • Delete versions are not replicated at all
    • Metadata is not replicated by default
    • Cannot replicate from Glacier/Deep buckets
    • Scenarios
      • Same-region replication
        • Log aggregation
        • Syncing data between environments
        • Strict sovereignty 
      • Cross-region replication
        • Global resilience (backup to DR region)
        • Latency reduction
  • Static website hosting
    • HTTP only – for HTTPS use CloudFront
    • Use cases
      • Offloading
        • Large media files (images, etc)
      • Out-of-band pages
        • For example, status/maintenance pages
          • Will always work even if main site offline
    • Bucket configuration
      • Turn on Static Website hosting on the bucket
        • Configure index document and optional error document
      • Set permissions
        • Disable Block Public Access
        • Add bucket policy to allow all principals (*) s3:GetObject on arn:aws:s3:::BucketName/*
    • Custom domain name
      • Either use Route 53 or CNAME
        • CNAME would point to 
          • www.example.com.s3-website.Region.amazonaws.com
      • Bucket name must match domain name exactly
        • example.com
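
A sketch of that bucket configuration in boto3 (Block Public Access would also need to be disabled); the bucket name is a placeholder:

import json
import boto3

s3 = boto3.client("s3")

# Turn on static website hosting with index and error documents
s3.put_bucket_website(
    Bucket="example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Allow all principals to GET objects
s3.put_bucket_policy(
    Bucket="example.com",
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*",
        }],
    }),
)
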
  • Event Notifications
    • S3 can generate events related to objects (not buckets)
      • Created, Removed, Restored, Replicated, Reduced Redundancy Lost
    • Destinations
      • SNS, SQS, Lambda
      • Must be in same region
      • SNS and SQS standard only
      • SNS/SQS/Lambda must have policies attached that grant S3 access
    • Configured on a bucket
      • Wait 5 mins for first event: s3:TestEvent
      • Can filter on prefix and/or suffix of object key
    • Notifications are delivered in under a second
    • Free feature
      • SQS/SNS/Lambda charges apply
    • EventBridge is an alternative to S3 event notifications
      • Supports more types of events and more destination services
  • Querying
    • S3 Select / Glacier Select
      • CSV, JSON or Parquet data
        • Can use bzip2 compression for CSV and JSON
      • Subset of SQL
      • Runs in S3 service
      • Like a push-down filter
        • Much faster and cheaper to get the result
    • Athena – serverless SQL
      • Pay only for data consumed
      • Supports many file formats
        • XML, JSON, CSV/TSV, Avro, Parquet, ORC, Apache, CloudTrail, VPC Flowlogs, etc
      • Output can be sent to other services / visualization tools
      • Schema-on-read
        • Translates on the fly into a table structure
        • Define the schema before running queries
          • AWS Glue can be used for this
  • Presigned URLs
    • A way to temporarily share individual private objects in a bucket
    • When creating you must provide
      • Your own security credentials
      • HTTP method to use
      • Expiration time
        • Max is 604800 seconds = 7 days
    • Anyone with access to the URL has access to the object 
      • As the user whose credentials were used
        • With the permissions that user has at the time the URL is accessed
      • Typically create a service user for this
    • You can even create pre-signed URL for
      • Non-existent object
      • Object you don’t have access to
    • Don’t use a role for the credentials
      • The role’s credentials will expire earlier than the URL expires
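
A sketch of generating one with boto3; bucket and key are placeholders:

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "path/to/fred.jpg"},
    ExpiresIn=3600,  # seconds; max 604800 (7 days)
)
print(url)  # anyone with this URL can GET the object until it expires
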
  • Presigned cookies
    • Useful when you want to provide access to multiple objects (using a prefix)
    • Cookie is stored on user’s browser
  • Batch Operations
    • Run an S3 inventory report to get a list of target objects (or provide your own list)
    • Choose either an API action or Lambda to run against each object
    • Manages retries
    • Sends notifications and completion reports
  • Access logs
    • Content is similar to HTTP access logs
    • Enable on a source bucket to log to a target bucket
    • Best effort delivery
      • Accesses to source bucket delivered to target bucket within a few hours
    • S3 Log Delivery Group
      • Need to put ACL on target bucket giving S3 Log Delivery Group access
    • Target bucket can store logs for multiple source buckets
      • Use prefixes to segregate
    • You need to manage purging old logs
  • Multi-region Access Points
    • Single global endpoint
    • Access data set that spans multiple S3 buckets across multiple regions
    • Uses Global Accelerator under the hood
  • Cross-origin resource sharing (CORS)
    • Need to enable for access to S3 URLs from pages / Javascript with different domain
    • Even if the page is from an S3 static website, because the website hostname will be different from the object hostname even if it’s the same bucket
      • Static website base URL: http://bucket-name.s3-website.region.amazonaws.com
      • Object URL: https://bucket-name.s3.region.amazonaws.com/key
    • Bucket CORSRule
      • Request’s Origin header must match AllowedOrigin element
      • Request method (GET, PUT, etc) or Access-Control-Request-Method header for pre-flight OPTIONS request must match one of the AllowedMethod elements
      • Every header listed in the pre-flight request’s Access-Control-Request-Headers must match an AllowedHeader element

Secrets Manager

  • Store application secrets, passwords, API keys, SSH keys, etc.
  • Useable via console, CLI, API or SDKs
    • Designed for application use via SDK
  • Uses KMS for encryption
  • Rotates credentials
    • Uses lambda
    • Directly integrates with some services 
      • RDS, DocumentDB, Redshift
    • Can customize to integrate with other AWS or custom services
    • Be careful, when rotation is enabled it will rotate the secret immediately
      • Make sure to coordinate access with resource that uses the password (eg. database)
      • Make sure to have applications configured to use Secrets Manager first
  • Comparing vs Parameter Store
    • If you are trying to minimize costs use Parameter Store
    • Use Secrets Manager if you need 
      • Key rotation 
        • Especially with integration with RDS, etc
      • Ability to generate passwords using Cloud Formation
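
A sketch of an application fetching a secret at startup via the SDK; the secret name and its JSON shape are placeholders:

import json
import boto3

secretsmanager = boto3.client("secretsmanager")

resp = secretsmanager.get_secret_value(SecretId="prod/db-credentials")  # placeholder
creds = json.loads(resp["SecretString"])  # eg. {"username": ..., "password": ...}
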

Shield

  • Free DDoS protection for AWS resources
    • Integrated into Route53 and Cloudfront
  • Protects against layer 3 and layer 4 attacks (DDoS)
  • Global perimeter protection
  • Advanced version costs $3k/month but you get a dedicated 24/7 DDoS response team
    • Always on flow-based monitoring
    • Real-time notifications
    • Protects AWS bill against higher fees due to DDoS
    • Per Adrian this is required for any serious site

Simple Workflow – SWF

  • A fully managed state tracker and task coordinator.   
  • You create workflows with their associated tasks and conditional logic.
  • Regional service with per-region endpoints

Site-to-Site VPN

  • Connects a VPC to an on-premises network
  • Encryption in-transit using IPsec
  • Typically runs over internet
  • Full HA if implemented correctly
  • Quick to provision (less than one hour)
  • Can optionally enable acceleration using Global Accelerator
  • AWS target can be either
    • Virtual Private Gateway (VGW)
    • Transit Gateway (TGW)
  • Virtual Private Gateway (VGW)
    • Can be the target of a route table entry
    • Lives in AWS Public Zone in a region
    • Is attached to a VPC
    • Has two interfaces in separate AZs with public IPv4s
      • Both are active and connect to the CGW router
  • Customer Gateway (CGW)
    • Either refers to the logical configuration in AWS or the physical device on-premises
    • Has a public internet-routable IPv4 address
    • Single router is SPOF
      • “Partial HA” since it’s HA on the AWS side
    • Add another CGW router preferably in a separate building
      • And create another VPN Connection from VGW to that router
        • Another pair of physical endpoints on the AWS side
  • Site-to-Site VPN Connection
    • Connects CGW and VGW/TGW
    • Uses either static or dynamic routing
    • A pair of physical interfaces in the VGW/TGW connecting to a specific CGW on-premises
      • Each interface in a different AZ
      • Creates a pair of tunnels to CGW
    • Defines an authentication method (eg. Pre-shared key)
    • Use two VPN connections using two separate CGWs for full HA
  • Static routing
    • IP addresses and routes are hard-coded
    • Limited load balancing and multi-connection failover
  • Dynamic routing
    • Routes are configured using BGP
    • Required by Direct Connect
    • Can add static routes if desired
  • Route propagation
    • New routes learned from the VGW are dynamically added to route tables 
    • Can enable on a VPC route table where the VPC is attached to a VGW
    • More specific routes take priority
      • Eg. /24 over /16
      • If the same, static used over propagated
  • VPN Considerations
    • 1.25Gbps speed limit
      • Can scale this up using Transit Gateway and Equal Cost Multipath (ECMP)
      • Create multiple VPN Connections (tunnel pairs) from CGW to Transit Gateway
      • Use ECMP to spread traffic across all of the tunnels
      • Must use dynamic routing
    • Latency is inconsistent due to transiting public internet
    • Hourly cost, GB out cost, potential data cap from ISP
    • Can be used as a backup for Direct Connect
      • Or as a limited Direct Connect – instead of two fiber connections, use one + VPN
    • Good way to get initially connected while Direct Connect is getting set up
    • No transitive routing
      • For example, if VGW attached to VPCA, and VPCA peered with VPCB, cannot send to VPCB from customer
      • Transit Gateway fixes this
  • VPN Hub
    • Connect multiple on-prem branch offices
      • One VPG to multiple CGWs
    • Offices can communicate with each other over the hub
    • Can also include Direct Connect connections to the VPN’s VPC
  • Other VPNs not part of AWS Site-to-Site VPN
    • Software Site-to-Site VPN
      • This describes running VPN software on an EC2 instance 
        • A VPN appliance
      • A VPN on the customer side would connect to the VPN appliance via IGW
      • Or you could connect to another VPN appliance in another VPC in another region
        • As an alternative to VPC peering
      • Fully customer managed
        • Used for compliance or for legacy VPN devices that aren’t supported by AWS
    • Client VPN
      • Fully managed VPN endpoint attached to a VPC subnet
      • Scalable and highly available
      • Supports AD and SAML authentication
      • For remote access using OpenVPN-compatible clients
        • Except for federated authentication – must use AWS Client VPN software
      • TLS-based encryption

Snow Family

  • Transfer data to/from AWS using physical devices
    • Data is transferred to/from S3 standard
      • If you want it to go to another storage class set up a zero-day lifecycle policy
        • Set up the lifecycle policy before sending the snow* to AWS
  • Snowcone
    • 2 CPU and 4 GB ram
    • Up to 8 TB data
    • Good for space/power constrained
    • IoT sensor integration
  • Snowball
    • Storage only
    • Stores 50TB or 80TB data
      • 10 TB to 10 PB total storage economical range using multiple devices
    • 1 Gbps or 10 Gbps
    • Encryption using KMS
  • Snowball Edge
    • Both storage and compute
    • Larger capacity than Snowball
    • Faster network
      • 10, 10/25, 45/50/100 Gbps supported
    • Storage optimized
      • 80 TB, 24 vCPU, 32 GB RAM
    • Storage optimized with EC2 adds 1 TB SSD
    • Compute optimized
      • 100 TB, 8 GB NVME, 52 vCPU, 208 GB RAM
    • Compute optimized with GPU adds GPU
    • Ideal for remote sites where data processing is needed
  • Snowmobile
    • Data center in a single truck
    • Ideal for single location with 10 to 100 PB data
    • Run cables from truck into on-prem data center
    • Not economical for multi-site or sub-10PB
  • Turnaround is usually a week

Simple Notification Service – SNS

  • Public AWS service
    • Can be accessed from the internet 
      • Assuming proper permissions
    • Can also access from VPC private subnets
  • Pub/Sub 
    • A Publisher sends messages to a Topic
    • Topics have Subscribers which receive messages
  • If you need someone to know that an AWS event happened, use SNS
  • SNS used across AWS for notifications
    • EC2, ASG, Cloudwatch, CloudFormation, etc
  • Can send to SQS, email, SMS (texts), web, …
  • Can fan out to multiple subscribers
    • Ie. multiple SQS queues
    • Use subscription-based filtering to route different messages to different subscribers
      • Controlled by subscription policy – decides which messages match filter
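
A sketch of fanout with a subscription filter policy; ARNs and the attribute name are placeholders:

import json
import boto3

sns = boto3.client("sns")

# Only messages published with attribute event_type = order_created reach this queue
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:orders",       # placeholder
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:order-queue",  # placeholder
    Attributes={"FilterPolicy": json.dumps({"event_type": ["order_created"]})},
)
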
  • 10 million subscriptions per topic, and 100,000 topics per account
    • Can ask for increase
  • Can get delivery status for eg
    • HTTP/S, Lambda, SQS
  • HA and scalable within a region
  • Can be used across accounts
    • Using a resource policy
  • Settings
    • Subscriber type
      • Kinesis Data Firehose
      • SQS
      • Lambda
      • Email
      • HTTP/S
      • SMS
      • Platform endpoint (mobile device push)
    • Message size
      • Up to 256 KB of text
    • DLQ
    • FIFO or Standard
      • FIFO only supports SQS FIFO as subscriber
    • Encryption
      • In transit (by default) plus optional at-rest
    • Access policy
      • By default access is granted to the owner only
    • Retries 
      • HTTP/S only

Simple Queue Service – SQS

  • For decoupling and asynchronous communications
  • Send and receive messages at any volume
  • Public service
  • Unidirectional
  • Must poll for messages
    • Except for SNS fanout – that will push messages into SQS queue(s)
  • 256 KB max message size
    • Text messages only
  • Billed per request
    • Batching, long polling cheaper
  • Batching
    • Max 10 messages per batch
    • Max size of batch is 256 KB
  • Settings
    • Delivery delay
      • 0 (default) to 15 minutes
    • Encryption
      • In transit (by default) plus optional at-rest
    • Retention
      • Between 1 minute and 14 days (4 days default)
    • Receive message wait time
      • Long versus short polling (short is default but long should almost always be used)
    • Access Policy
      • Defines who can access the queue
    • DLQ
  • Scaling
    • Use horizontal scaling to increase read/write throughput
      • More clients and/or more threads per client
    • Can use queue length as a metric for an ASG
  • Standard vs. FIFO
    • Standard
      • Can duplicate messages (at-least-once delivery)
      • Not guaranteed to be in order
      • Faster and cheaper than FIFO
      • 3,000 requests per second
      • 30,000 rps with batching
    • FIFO
      • To receive messages in order
      • Optional de-duplication
      • Slower and more expensive than standard
        • 300 per second or 3,000 per second with batching
      • Message groups
        • Bundle of messages with a group ID
        • Messages are sent in order per message group
      • High-throughput FIFO setting
        • 3,000 requests per second per message group
        • 30,000 rps with batching
  • Visibility timeout
    • Once received, the message is hidden from other consumers
    • Client must delete the message from the queue before the timeout, otherwise it will reappear in the queue (see the sketch below)
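
A sketch of a long-polling consumer that deletes each message before the visibility timeout expires; the queue URL is a placeholder and process() is a stand-in handler:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

def process(body):
    print(body)  # stand-in for real work

while True:
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        # Delete before the visibility timeout expires or the message reappears
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
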
  • Dead-letter queues
    • If a message has 5 visibility timeouts it is moved to the DLQ
    • Just regular queues
      • Queue type must match source queue
      • Same retention window (max 14 days)
    • Useable for SNS
    • Can use same DLQ for multiple sources
    • Should set alarm for DLQ queue depth

Single Sign On – SSO

  • Use with either
    • External identity provider like Okta
    • AWS SSO-managed identity store
  • AWS SSO Instance
    • Regional
    • Identified by ARN and Start URL
  • Administrators create
    • Permission sets
      • Collection of IAM policies
    • Assignments
      • Principal from identity store
      • Permission set
      • Target account
      • Means principal can use those permissions in that account
  • Users interact with Role Names
    • Actually Permission Set names
  • Signing in
    • Authenticate with AWS SSO via identity provider
      • aws sso login
    • Always happens through browser
      • So if you are already logged into your IdP you don’t need to do anything
    • OAuth access token delivered to app/script/CLI
      • After browser confirmation that it’s ok
  • Access token is used to get AWS credentials for the specific Role Name and account requested
    • You can do this by configuring a profile in ~/.aws/config
[profile my-sso-profile]
sso_start_url = https://example.awsapps.com/start
# the region the AWS SSO instance is configured in
sso_region = us-east-1
sso_account_id = 123456789012
sso_role_name = MyRoleName
# the region to use for AWS API calls
region = us-east-2
  • AWS CLI and SDKs handle this automatically

Storage Gateway

  • Hybrid storage for on-prem use
    • Merge on-prem with cloud
    • Use DataSync for one-time migrations
    • Implemented as one of
      • Virtual Machine appliance
        • VMware ESXi, Hyper-V, KVM
      • Hardware appliance 
        • 1U rack mounted box
      • EC2 instance
        • In case data center goes offline
  • File Gateway
    • NFS or SMB interface on-prem
    • SMB shares can integrate with AD for permissions
    • Stores files 1:1 as S3 objects
      • Existing objects will show up as files in the share
      • When changes are made to files new versions of S3 objects are created
        • If versioning is enabled
      • Lifecycle policies can automatically control storage classes
        • And clean up old versions
      • SSE-S3 encryption
    • Asynchronous updates
      • Use CloudWatch events to determine when all changes in S3
    • Can keep cached copy of recently used files
      • Local volume is used
    • Helps with migration to AWS
  • Volume Gateway
    • iSCSI block mount
    • Types
      • Stored
        • All files stored locally on network storage volume(s) attached to VM
        • Backed up asynchronously to AWS
          • Via upload buffer
          • Creates EBS snapshots in S3
          • Can restore snapshot to local volume or EBS volume
        • 16 TB per volume x 32 volumes max = 512 TB total capacity
        • If you need low-latency access to your entire dataset
        • Ideal for cost effect backup or rapid DR
      • Cached
        • No permanent local storage unlike stored mode
          • Just cache and upload buffer
        • Should allocate 20% of the size of the existing file store size as cache
        • Only frequently accessed files’ blocks stored on appliance
        • Primary data stored on an S3-backed EBS volume in AWS
        • Up to you to take snapshots
          • Snapshots stored as EBS snapshots
          • Can be restored to gateway or to EBS volume
        • 32 TB per volume x 32 volumes = 1 PB total capacity
        • Designed for extending capacity into AWS
        • Much cheaper than local storage
  • Tape Gateway
    • Presents iSCSI tape interface to on-prem
    • Virtual tapes 100GB to 5TB
      • 1PB total storage across 1500 virtual tapes
    • Backs up to S3
      • Glacier or Glacier Deep Archive

Systems Manager

  • Free suite of tools designed to let you view, control and automate EC2 and on-prem systems
    • Automation documents
      • aka Runbooks
      • Configure EC2 instance or other AWS resources (eg S3 buckets)
      • IAC with desired state
      • Used by AWS Config
    • Run command
      • Run a command remotely on any instance that is running the agent
    • Parameter Store
    • Patch Manager
      • Manages application versions
    • Session Manager
      • Remotely connect to instances
    • Hybrid Activations
      • Control on-prem architecture
  • Create resource group using tag query
  • For a resource group view
    • Recent API activity
    • Configuration changes
    • Notifications and alerts
    • Software inventory
    • Patch status

Transfer Family

  • Moves files to/from S3 or EFS using SFTP, FTPS or FTP
  • Useful for when you have customers that use SFTP, FTPS or FTP to transfer data to you
    • Upload it to AWS instead of on-prem
  • DNS entry stays the same

Transit Gateway

  • Significantly simplifies network topology
    • No need for VPC peering – handles transitive routing
  • Regional virtual router for traffic flowing between your VPCs
  • Highly available between multiple AZs
  • Burst speeds up to 50 Gbps per AZ
  • Resource attachments
    • One or more VPCs
    • One or more Site-to-Site VPNs
      • CGW is attached to Transit Gateway; no VGW needed
      • VPN Connection is between Transit Gateway and CGW
    • Direct Connect gateway to on-prem
    • One or more Connect (to third-party virtual appliances)
    • Peering other transit gateways in other accounts/regions
  • Share between accounts using AWS RAM
  • Supports IP multicast (not supported by any other AWS service)


Trusted Advisor

  • Free auditing tool
  • Looks at your account and tells you where you could improve adoption of best practices
  • Five areas
    • Cost optimization
    • Performance
    • Security
    • Fault tolerance
    • Service limits
  • Can send alerts via SNS
  • Doesn’t fix problem, only alerts
  • To get the most useful checks you’ll need a paid support plan
  • Free checks
    • Service limits
    • Security
      • Security groups unrestricted access to specific ports
      • Permissions on EBS or RDS public snapshots
      • Open access to S3 buckets
      • Check for IAM user besides root
      • MFA on root account
  • Use Eventbridge to kick off Lambda to fix problem(s)

Virtualization

  • Emulated virtualization
    • Hypervisor included in host OS
    • Binary translation in software
      • No changes needed to guest operating system
    • Slow
  • Paravirtualization
    • Operating system changed to know about hypervisor
      • Calls hypervisor for hardware operations
    • Faster than emulation
  • Hardware-assisted virtualization
    • Guest operating system’s hardware access is trapped by the hardware and redirected to the hypervisor
    • Even better performance than paravirtualization
    • Hypervisor still needs to mediate multiple guests access to hardware devices
  • SR-IOV
    • Hardware aware of virtualization, for example network cards
    • Appears to guest OSes as dedicated devices

Virtual Private Cloud – VPC

  • A logical data center in AWS
  • An AZ can have many subnets, a subnet is in one AZ
    • Private or public
    • Public means it has route to the internet via IGW
      • Instance must have a public IPv4 address if using IPv4
      • Public IP is not assigned to any of the network interfaces
        • It’s provided by Internet Gateway via static (one-to-one) NAT
    • Private can still access the internet – via NAT
  • Quotas
    • 5 VPCs per region (can increase to up to 100)
    • 200 subnets per VPC (can increase)
    • 5 IPv4 CIDR blocks per VPC (can increase)
    • 1 IPv6 CIDR blocks per VPC
  • Default VPC
    • One per region
    • Pre-configured – cannot change
    • One subnet per AZ in region
    • Default VPC CIDR: 172.31.0.0/16
      • Default subnets get /20s from that
        • 172.31.0.0/20
        • 172.31.16.0/20
        • 172.31.32.0/20
    • Comes with
      • Internet Gateway
      • Security Group
      • NACL
    • Subnets assign public IPv4 addresses
    • You can delete the default VPC
      • But some services assume that it’s present
      • Can recreate
  • Custom VPCs
    • Regional service with access to all AZs in that region
    • No traffic in or out without explicit configuration
    • Default or dedicated tenancy
      • If default, can pick per instance whether it is dedicated or not
    • 1 primary IPv4 private CIDR
      • Size can range from /28 to /16
    • Can expand size of VPC by adding up to four secondary CIDRs
    • Optionally an IPv6 /56 CIDR can be configured
      • Either the CIDR range is assigned by AWS
      • Or use IPv6 CIDR that you own
    • Main CIDR for the VPC divided into sub-CIDRs for subnets
  • VPC router
    • Highly available
    • Has interface in every subnet
    • Routes between subnets and egress
    • VPC has a main route table that cannot be deleted
      • A subnet can use the main route table or a custom one
    • If multiple routes match a destination address the most specific route is chosen
  • Subnets
    • Lives in one specific AZ
    • IPv4 CIDR is subset of VPC CIDR
    • Optional IPv6 CIDR /64 subset of VPC’s /56
    • Subnets can communicate with other subnets in same VPC
    • Amazon reserves 5 addresses per subnet
      • 10.0.0.0 Network address
      • 10.0.0.1 Reserved for the VPC router
      • 10.0.0.2 Reserved by AWS for DNS server
        • Only VPC+2 is used, but +2 is reserved in all subnets
      • 10.0.0.3 Reserved by AWS in all subnets
      • 10.0.0.255 Broadcast address
    • Per-subnet config settings
      • Auto-assign public IPv4
      • Auto-assign IPv6
    • Subnet has one route table
      • It’s either the main VPC route table or a custom one
      • Routes are prioritized by specificity
        • A /32 address would be the most specific if it matched
        • 0.0.0.0 (default) is least specific (::/0 for IPv6)
      • Every route table gets a Local entry for communication within the VPC
        • Cannot be deleted
      • Default route can egress the subnet via NAT, Internet Gateway or Virtual Private Gateway
  • DNS
    • VPC has two settings
      • enableDnsHostnames
        • Whether instances with public IPs get public DNS hostnames
      • enableDnsSupport
        • If true then DNS server accessible at VPC base + 2 (eg. 172.31.0.2)
          • And 169.254.169.253
      • If both are true (the default) then
        • VPC DNS server will resolve AWS DNS names
          • Inside the network both public and private DNS names are resolved to private address
        • Instances get private DNS hostnames
          • And a public DNS hostname if it has a public IPv4 address
          • No hostnames provided for IPv6 addresses
    • Hosted zones
      • Can associate with multiple VPCs
      • Names of hosts in VPCs
      • Public hosted zones are resolvable in the internet
      • Private hosted zones are resolvable in VPCs
  • DHCP options supported
    • domain-name-servers
      • Default is AmazonProvidedDNS in your VPC
    • domain-name
      • Can be a domain in a R53 private hosted zone
    • ntp-servers
    • netbios-name-servers
    • netbios-node-type
  • Can connect VPC to
    • Internet via Internet Gateway
    • Corporate data center via Virtual Private Gateway
    • Corporate data center using Direct Connect
    • Other AWS services using VPC endpoints
    • Other VPCs using VPC Peering
    • Transit Gateway
  • Internet Gateway
    • Horizontally scaled, highly available, redundant
    • Does not accrue any charges directly
      • For example, to access S3
      • But data out to the internet is charged
    • Lives in AWS Public Zone
    • Egress-only Internet Gateway for IPv6 only
      • Because IPv6 addresses are globally unique they are public by default
      • Stateful devices that allow responses
    • To create a public subnet
      • Create IGW
      • Attach IGW to VPC
      • Create custom route table
      • Associate route table with subnet
      • Have default route targeted to IGW
      • Configure subnet to allocate public IPv4 and optionally IPv6 addresses
    • IGW keeps track of the mapping between public IPv4 addresses and the private address the EC2 instance has
      • The EC2 instance does not know about its public address – it is not assigned to any of the OS-level interfaces
      • The IGW effectively does SNAT
  • NAT Gateway
    • Redundant within AZ
    • Throughput scales from 5 Gbps up to 45 Gbps
    • No need to patch
    • Cannot use a security group with it
      • NACLs on the subnet only
    • Runs in a public subnet
      • With appropriate route table
        • Routing 0.0.0.0/0 to one of
          • Internet Gateway (NAT GW needs Elastic IP)
          • Transit Gateway
          • Virtual Private Gateway
    • Private subnet route tables should default to NAT GW
    • Automatically assigned a public IP address
      • Uses PAT (Port Address Translation): Many private IPs to one public IP
        • Uses ports to multiplex onto the single public IP
    • If you have resources in multiple AZs and they share a NAT Gateway
      • If the NAT Gateway’s AZ is down, resources in other AZs lose internet access
      • Need to put NAT Gateway in each AZ where internet access is needed
    • Pricing
      • Hourly charge
      • Plus per-GB data processing charge
      • Plus regular data transfer out cost if data goes to the internet
  • NAT Instance
    • The name for a NAT that you run on an EC2 instance
      • Not recommended – less throughput and availability and you have to manage
    • Must disable source/destination checks
      • EC2 -> Instance -> Networking
    • In the main route table add an entry for 0.0.0.0/0 -> NAT instance
    • Can also use as a bastion server
    • Can support port forwarding whereas NAT Gateway cannot
  • Security Groups
    • Controls access to a single resource (eg. EC2 instance)
      • Specifically the network interface of the resource
    • Stateful firewall
      • Connection tracking
      • Responses to outbound requests are allowed without special rules
        • And vice versa
    • Only “allow” rules
      • There is an implicit unlisted deny
    • All rules evaluated and the most permissive rule applied
    • Resources and security groups are supported as sources and destinations
      • Using a security group as a source or destination means the private IP addresses of all of the network interfaces that use that security group
  • Network ACLs
    • Controls access at a subnet boundary
      • When traffic enters the subnet it hits the NACL first and then a Security Group on a resource
      • Similarly when traffic leaves a subnet it hits the NACL
    • Considered optional
    • Stateless firewall
      • Responses to outbound requests are subject to inbound rules
        • And vice versa
    • VPC Default NACL allows all outbound and inbound traffic
    • Custom NACLs by default deny all inbound and outbound traffic
      • That is the last * rule on the NACL which cannot be deleted
    • Each subnet has exactly one NACL
      • Uses default NACL by default
    • A rule can refer to a CIDR only, not a logical/AWS resource
    • NACLs can only be assigned to subnets, not resources
      • A NACL can be associated with multiple subnets
    • NACLs can block IP addresses (not Security Groups)
    • Numbered lists of rules (inbound list and outbound list)
      • Processed in numerical order
      • Each rule can allow or deny traffic
      • First rule matched is used
  • NACLs with Security Groups
    • Security Groups are easier to manage so for small operations you could just use those
    • If your security posture requires granular controls use both
      • Use NACLs as the first line of defense then Security Groups to handle the remaining traffic
      • Can use Separation of Duties
        • One team is responsible for NACLs and the other team is responsible for Security Groups
  • Endpoints
    • Use case: connect to AWS services without leaving internal network
    • Created per service, per region
    • Two types
      • Interface
        • Elastic network interface with private IP address in a subnet
          • Not HA by default
            • Should be placed one per AZ to ensure HA
        • Supports most AWS services and third-party services, but not DynamoDB
          • Use a Gateway endpoint for DynamoDB
        • Use Security Groups to control network access
        • TCP IPv4 only
        • Can attach endpoint policy controlling access to service
        • Uses AWS PrivateLink
        • Endpoint provides new endpoint-specific DNS names
          • Regional DNS name
          • Zonal DNS name
        • You can also enable private DNS for the endpoint
          • Private DNS hosted zone overrides default DNS name for service
            • For example: sns.us-east-1.amazonaws.com
              • Normally a public Internet address
            • No application code changes needed
            • The VPC also needs to have DNS enabled
      • Gateway
        • Adds route table prefix list representing the service
          • Added to all AZs in a region by default
            • Highly available by default
          • Target of the route is the gateway endpoint
        • Can only access services in the same region
        • Can only be accessed from within the VPC
        • Supports S3 and DynamoDB only
        • Endpoint policy
          • Can limit access to a subset of buckets, etc
        • S3 buckets can be set to private only
          • Allow access only from a gateway endpoint
        • No cost
  • VPC Peering
    • Connect VPCs via direct network route using private IP addresses
      • One peering connects two VPCs only
    • Encrypted link
    • No single points of failure
    • Instances behave as if they were on the same private network
    • Can peer with other accounts
    • Can peer across regions
      • Traffic stays within AWS network
    • Cannot use overlapping CIDRs
    • Can reference peer SGs in the same region
    • Hub and spoke model 
      • One VPC in the center that peers with others
      • No transitive peering
        • If A peers with B and B peers with C, A cannot reach C via peering, must peer with C explicitly
        • Use Transit Gateway for transitive peering
    • After accepting peering
      • Add route to peer in your route table(s)
        • Target is logical peer gateway object
      • Update security groups and/or NACLs to allow peer
      • If desired enable DNS resolution of public names in peer VPC to private addresses
  • VPC Sharing
    • Share subnets across accounts in same Organization
      • Except default VPC subnets
    • Resource share must be created in RAM (Resource Access Manager)
      • Then share subnet from VPC
    • Other accounts can create resources in the shared subnets
    • The same AZ name (eg. us-east-1a) may refer to a different physical location in each account
      • Use AZ IDs to uniquely identify an AZ location
  • AWS PrivateLink
    • Handles the case where you need to talk to many outside VPCs
    • Doesn’t require VPC peering
    • Requires ENI on client VPC and a Network Load Balancer on the service VPC
  • VPC Flow Logs
    • Captures packet metadata only, not payload
      • Source, Destination, ports, protocol, etc
    • Can apply to either
      • All interfaces in VPC
      • Interfaces in a subnet
      • A single interface
    • Flow logs are not real time
    • Destination either S3 or CloudWatch Logs
    • If you see two records like
      • Internet source A, AWS dest B, ACCEPT
      • AWS source B, Internet dest A, REJECT
      • This could be caused by a security group allowing the inbound and response, but the NACL blocking the response
    • Some traffic not captured
      • 169.254.169.254, NTP, DHCP, Amazon DNS, AWS Windows license server
  • Sizing
    • Considerations
      • Size
        • How many subnets/AZs will you need
        • How many IPs total
      • What existing networks need to be planned around
        • Overlap with other network CIDRs is bad
      • Standard Cantrill structure
        • 3 AZs + 1 spare = 4 AZs
        • 3 tiers + 1 spare = 4 tiers
        • 4 x 4 = 16 subnets
        • If VPC size is /16, each of 16 subnets is /20
          • If VPC size is /n, each of 16 subnets is /(n+4) (see the quick check below)
      • Reserve 2+ networks per region per account
        • 4 accounts per region (Dev, Test, Prod + spare)
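
A quick sanity check of that formula (a throwaway Scala snippet; the helper name is made up and assumes the subnet count is a power of two):

// Splitting a network into 2^k equal subnets adds k bits to the prefix
def subnetPrefix(vpcPrefix: Int, numSubnets: Int): Int =
  vpcPrefix + (math.log(numSubnets) / math.log(2)).round.toInt

subnetPrefix(16, 16)   // = 20: a /16 VPC split 16 ways yields /20 subnets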

Web Application Firewall – WAF

  • Operates at layer 7
    • SQL injection, cross-site scripting, etc
    • HTTP/S filtering
  • Integrated into ALB, API Gateway or CloudFront
    • Uses Web Access Control Lists (web ACLs)
  • Global perimeter protection
  • Returns 403 status if request denied by WAF
  • Can block certain countries, IP addresses, etc.
  • Check for header values, known malicious scripts
  • Set rate limits

X-Ray

  • Distributed tracing
  • X-Ray agent can assume a role to publish data into a different account

Distributed Systems links

A few years ago, while researching Zookeeper for a project I was working on, I realized that there was a whole field of Computer Science, Distributed Systems, that I was totally unfamiliar with. That started a journey of discovery that’s been very rewarding.  In response to a question on the Akka mailing list I put together a list of links to Distributed Systems resources.  I’ve been meaning to translate that email to a blog post for a while.

To start off I would definitely recommend checking out a talk that Jonas Boner from Typesafe gave at Strange Loop called The Road to Akka Cluster and Beyond (slides).

Implementation-oriented books that I would recommend for developers are:

These are all filled with practical advice for building real-world distributed systems.

One thing I found is that there is a big gap between academic and industry knowledge right now.  This is discussed in a post on Henry Robinson’s excellent Paper Trail blog where he provides a guide to digging deeper both on the academic side and by reading research papers written by industry leaders like Google, Yahoo, etc.   Definitely read the links in the “First Steps” section.  The gap is also the topic of a post on Marc Brooker’s blog and a post on Murat’s blog.  Besides papers he links to some other good people to follow like Aphyr and Peter Bailis.  Two blogs that review Distributed Systems papers are the Morning Paper and MetaData.  I also recommend following Brave New Geek, Ben Stopford and Kellabyte, and the Hacking, Distributed, High Scalability and Highly Scalable blogs.

Papers We Love is a collective of meetups across the world where people present their takes on research papers that they find fascinating.  Their web site has videos taken at these meetups.  They also hold yearly conferences.

YouTubers are also getting into the act – for example, Vivek Haldar has a series of videos called Read a Paper where he summarizes papers in around ten minutes.

Many times the conferences where the papers are presented also publish videos, slide decks and posters that are much easier to consume for a working developer.  If you have a paper that you are really interested in, be sure to check out the web site of the conference where the paper was published.  Usenix in particular is really good at this.  In addition, in the last few years a number of research projects have been creating web sites to promote the research, where you can find code, videos and more.  For example, check out the site for Hermes, a replication protocol.

Working to fill the gap between academia and industry:

Essential ACM Queue articles

Notable blog posts

Other reading lists

Online Courses

I recommend getting familiar with the CAP Theorem.  You’re going to run into it all over the place.

Zookeeper is a Consensus (or Coordination) system.  Consensus is a major topic in theoretical and practical distributed systems and is what got me started digging into distributed systems originally.  To start getting familiar with Consensus I recommend:

On the academic textbook side, I have these on my stack to read:

This is just the tip of the iceberg.  Besides consensus, other distributed systems topics that I’ve found interesting include distributed databases, group membership, gossip protocols (used in Akka, Cassandra and Consul), time and clocks, and peer-to-peer systems.

My first computer

I was looking at this old issue of Byte Magazine online talking about Smalltalk.

Lo and behold there was an ad for my first computer, an MTI TRS-80 Model III.

[Image: Byte Magazine Volume 06 Number 08, the Smalltalk issue]

It was $1,998 in 1981 dollars, which would be $5,136 today.  That was a lot of money for my family and a ton of money to be spent on a 14-year-old.  But well worth it!

Thanks Mom and Dad!

Adventures in Clustering – part 2

Embedding a Zookeeper Server

To minimize the number of moving parts in the message delivery system I wanted to embed the Zookeeper server in the application, rather than running a separate ensemble of Zookeeper servers.

The embedded Zookeeper server is encapsulated in an EmbeddedZookeeper trait that is mixed into the ZookeeperService class from part 1.   Here is EmbeddedZookeeper:
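
(A reconstructed sketch follows. The ZooKeeper classes ZooKeeperServerMain, QuorumPeerMain, ServerConfig and QuorumPeerConfig are real; the surrounding names are illustrative.)

import org.apache.zookeeper.server.{ServerConfig, ZooKeeperServerMain}
import org.apache.zookeeper.server.quorum.{QuorumPeerConfig, QuorumPeerMain}

// Common interface so standalone and replicated servers can be
// started and stopped uniformly
trait StoppableServer {
  def run(): Unit
  def stop(): Unit
}

// ZooKeeperServerMain.shutdown() is protected, so subclass to expose it
class StandaloneServer(config: ServerConfig)
    extends ZooKeeperServerMain with StoppableServer {
  def run() { runFromConfig(config) }   // blocks until the server stops
  def stop() { shutdown() }
}

// QuorumPeerMain keeps the running QuorumPeer in a protected field
class ReplicatedServer(config: QuorumPeerConfig)
    extends QuorumPeerMain with StoppableServer {
  def run() { runFromConfig(config) }   // blocks until the server stops
  def stop() { if (quorumPeer != null) quorumPeer.shutdown() }
}

trait EmbeddedZookeeper { self: ClusterService with Logging =>
  def clientPort: Int                   // implemented by ZookeeperService

  protected var zkServer: Option[StoppableServer] = None
  // configureServer() and startServer() are sketched below
}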

There are two kinds of Zookeeper servers, standalone and replicated.  Standalone (aka Single) servers, which are typically used for local development and testing, use ZooKeeperServerMain to start up. Replicated (aka Clustered or Multi-server) servers, which are replicated for high availability, use QuorumPeerMain.  I extend both of these classes to add a stop() method in a common interface.

A couple other things to mention about EmbeddedZookeeper: it is self-typed to ClusterService and Logging to get access to the nodeId and log methods.  Also there is an abstract clientPort method that is implemented by ZookeeperService.

ZookeeperService Initialization

The embedded server is configured and started as part of the initialize() method in ZookeeperService:
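
(Sketched; genServerNames() and the field names are reconstructions based on the description below.)

def initialize() {
  val serverNames = genServerNames(zookeeperServers)   // configured host list
  val startFn = configureServer(serverNames)
  startServer(startFn)

  // ... Curator client startup and leader election continue as in part 1
}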

In initialize() a list of Zookeeper server hostnames is passed to genServerNames() which generates a ServerNames (see below).   Then the Zookeeper server is configured and started.

Server Configuration

ServerNames contains hosts, a collection of cluster node IDs and serversKey and serversVal, which corresponds to the server.x=[hostname]:nnnnn[:nnnnn] configuration settings for clustered servers (each server needs to know about all of the other servers).  If there are multiple elements in ServerNames.hosts then useReplicated will be true which will tell configureServer() to configure a replicated server.

In configureServer() the first thing that happens is the Zookeeper server data directory is removed and recreated. I found that the Zookeeper server data directory could get corrupted if the server wasn’t shut down cleanly. Instead of trying to maintain the data directory across restarts I decided to just recreate the data directory each time the application started up. The downside is that the server’s database has to be populated on each startup (it synchronizes the data from other servers). In this particular use case it’s ok because there is very little data being stored in Zookeeper (just the Leader Election znode and children). The upside is that no manual intervention is needed to address corrupt data directories.

Replicated Zookeeper servers require a myid file that identifies the ordinal position of the server in the ensemble. To avoid having to create this file manually I create it here as part of server startup.

Finally the appropriate Zookeeper server object is instantiated and configured with a set of Properties and a server startup function is returned for use by startServer(). A reference to the server object is saved in zkServer for shutdown later.
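
(Putting those pieces together, a sketch of configureServer(). The property keys are standard ZooKeeper settings, while dataDir, writeMyIdFile() and ServerNames stand in for the helpers described in this post.)

import java.util.Properties
import org.apache.commons.io.FileUtils

def configureServer(serverNames: ServerNames): () => Unit = {
  // Recreate the data directory to sidestep corruption problems
  FileUtils.deleteDirectory(dataDir)            // dataDir: assumed field
  dataDir.mkdirs()

  val props = new Properties()
  props.setProperty("tickTime", "2000")
  props.setProperty("dataDir", dataDir.getPath)
  props.setProperty("clientPort", clientPort.toString)

  if (serverNames.useReplicated) {
    // server.x=[hostname]:nnnnn[:nnnnn] entries plus quorum settings
    props.setProperty(serverNames.serversKey, serverNames.serversVal)
    props.setProperty("initLimit", "5")
    props.setProperty("syncLimit", "2")
    writeMyIdFile(dataDir)                      // assumed helper, see above

    val quorumConfig = new QuorumPeerConfig()
    quorumConfig.parseProperties(props)
    val server = new ReplicatedServer(quorumConfig)
    zkServer = Some(server)
    () => server.run()
  } else {
    val quorumConfig = new QuorumPeerConfig()
    quorumConfig.parseProperties(props)
    val serverConfig = new ServerConfig()
    serverConfig.readFrom(quorumConfig)
    val server = new StandaloneServer(serverConfig)
    zkServer = Some(server)
    () => server.run()
  }
}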

Server Startup

To start the server a new thread is created and in that thread the server startup function returned by configureServer() is run. The main application thread waits on a semaphore for the server to start.
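
(A sketch of the idea; the real code would likely signal only after confirming the server is actually serving requests.)

import java.util.concurrent.Semaphore

def startServer(startFn: () => Unit) {
  val started = new Semaphore(0)
  val thread = new Thread(new Runnable {
    def run() {
      started.release()   // signal the main application thread
      startFn()           // blocks for the lifetime of the server
    }
  }, "embedded-zookeeper")
  thread.setDaemon(true)
  thread.start()
  started.acquire()       // main application thread waits here
}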

Notes

One of the limitations of running Zookeeper in replicated mode is that a quorum of servers has to be running in order for the ensemble to respond to requests. A quorum is N/2 + 1 of the servers. In practice this means that there should be an odd number of servers and there needs to be at least three servers in the cluster. If there are three servers in the cluster then two of the servers have to be up and running for the clustering functionality to work. Depending on how you deploy your application it might not be possible to always keep at least N/2 + 1 servers running. If that is the case then embedded Zookeeper won’t be an option for you.

Also, one of the things that I noticed when running embedded Zookeeper is that during deployment as application JVMs are bounced you will get a lot of error noise in the logs coming from Zookeeper server classes. This is expected behavior but it might cause concern with your operations team.

Coming Up

In the next post I will come back to the ClusterService and fix a problem where multiple nodes might think they are the leader at the same time.

Adventures in Clustering – part 1

Last year I added clustering support to a system I had previously developed for a client. The requirements were to implement automated failover to eliminate a single point of failure and to distribute certain kinds of work among members of the cluster.

The application I was changing is a message delivery system, but my client has other applications with the same needs so the solution needed to be general enough to apply elsewhere.

Automating Failover with Leader Election

The message delivery system runs a number of scheduled jobs. For example, one of the scheduled jobs checks the database to see which new messages need to be delivered, delivers them, and updates message statuses after the delivery is complete. The message delivery system runs on JVMs on multiple machines for availability, but only one JVM should run the scheduled job, otherwise duplicate messages will be sent.

Before clustering support was added one of the JVMs was designated as the Delivery Server using a Java system property and only the Delivery Server ran the scheduled jobs. If the Delivery Server went down, one of the other JVMs had to be manually converted into the Delivery Server. This was suboptimal because it depended on humans to be available to both notice the problem and perform the failover, and the failover process was error-prone.

There are a number of ways to solve the problem for the specific use case I just described. But there were other use cases where manual failover was also a problem, both in the message delivery system and in other applications the client has. I didn’t want to have a use case-specific solution for each problem.

To solve the problem generally I decided to use Leader Election. With Leader Election cluster members decide among themselves which member is the leader and the leader is responsible for certain tasks. The message delivery system already had the concept of a leader – the Delivery Server. I just needed to automate the process of choosing that leader.

The ClusterService

To support the Leader Election and work distribution features, I introduced the concept of a cluster service. When the service is initialized it starts the leader election process for that cluster member. At any time it can be queried to see if the current node is the leader, and who the other members of the cluster are. Here is the ClusterService interface:
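
(A reconstructed sketch; the member names follow the description below.)

trait ClusterService {
  def enabled: Boolean               // clustering can be switched off
  def nodeId: String                 // this cluster member's ID
  def initialize(): Unit             // starts the leader election process
  def clusterStatus: ClusterStatus
}

case class ClusterStatus(current: String,
                         leader: String,
                         participants: Seq[String]) {
  def isLeader: Boolean = current == leader
}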

In ClusterStatus, current, leader and participants are node IDs.

Scheduled Jobs

Previously only the Delivery Server ran scheduled jobs.  With Leader Election all of the cluster members run the scheduled jobs, but those jobs were changed to return immediately if the current cluster member is not the leader.   For example:
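
(A sketch of the shape of such a job; distribute() stands in for the method that does the real work.)

// Scheduled to run every 30 seconds on every cluster member
def distributeEvents() {
  val status = clusterService.clusterStatus
  if (status.isLeader) {
    distribute()   // only the leader does the actual work
  }                // everyone else returns immediately
}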

The distributeEvents job runs every 30 seconds on all cluster members. It gets the cluster status from the ClusterService and if the current node is the leader it calls distribute() to do the actual work.

Work Distribution

In the message delivery system the ClusterService is also used for work distribution. The leader distributes certain pieces of work to all of the nodes in the cluster. The ClusterService is queried for the cluster members. Each cluster member node ID is mapped to a remote Akka actor. For example:
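
(Sketched, assuming node IDs are hostnames; the actor system name, path and port are invented, and actorSystem and workItems are assumed to be in scope.)

import akka.actor.ActorRef

// Map a cluster member's node ID to its remote actor (Akka 2.x actorFor)
def workerFor(nodeId: String): ActorRef =
  actorSystem.actorFor("akka://delivery@" + nodeId + ":2552/user/worker")

// The leader spreads work items across all current members round-robin
val members = clusterService.clusterStatus.participants.toIndexedSeq
workItems.zipWithIndex.foreach { case (item, i) =>
  workerFor(members(i % members.size)) ! item
}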

I will cover the work distribution system in detail in a separate post.

Zookeeper and Curator

I wanted to leverage a third-party clustering system as developing core clustering functionality is non-trivial. My first choice was Akka Cluster but when I was adding clustering support to the message delivery system Akka Cluster had not been released (it was just a high level design doc at that time). Zookeeper, on the other hand, had been on the scene for a while. The Zookeeper client has a reputation of being difficult to work with so I decided to use Curator, a library that abstracts over the Zookeeper Java client.

ZookeeperService

ZookeeperService is the Curator-based implementation of the ClusterService:
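
(Skeleton only; the constructor parameters are guesses, and the Curator imports use the current org.apache.curator package names.)

import org.apache.curator.framework.CuratorFramework
import org.apache.curator.framework.recipes.leader.LeaderLatch

class ZookeeperService(val nodeId: String,
                       val enabled: Boolean,
                       zookeeperServers: Seq[String],
                       val clientPort: Int)
    extends ClusterService with EmbeddedZookeeper with Logging {

  private var client: CuratorFramework = _
  private var leaderLatch: LeaderLatch = _

  // initialize(), selectLeader(), watchLeaderChildren() and
  // clusterStatus are sketched below
}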

ZookeeperService also has an embedded Zookeeper server which will be covered in part 2.

Leader Election using Curator

Curator has two ways of doing Leader Election: the LeaderSelector and LeaderLatch.

I first implemented leader election using LeaderSelector but I was unsatisfied with how complicated the resulting code was. After some discussion with Jordan Zimmerman, Curator’s developer, I reimplemented leader election using LeaderLatch. Here’s what it looks like:
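
(A sketch; the latch path is made up.)

def selectLeader() {
  // Each participant registers under the latch path with its node ID
  leaderLatch = new LeaderLatch(client, "/delivery/leader", nodeId)
  leaderLatch.start()
  watchLeaderChildren()   // optional, see below
}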

The call to watchLeaderChildren() is optional. It’s only needed if you want to be notified when the leader changes or if a cluster member falls out of the cluster. In the message delivery system that wasn’t strictly necessary because it always checks who the leader is before doing something that only the leader should do. But it’s a nice thing to have for monitoring purposes:
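
(Sketched; the latch path matches the one above.)

import org.apache.curator.framework.api.CuratorWatcher
import org.apache.zookeeper.WatchedEvent

def watchLeaderChildren() {
  client.getChildren.usingWatcher(new CuratorWatcher {
    def process(event: WatchedEvent) {
      log.info("cluster membership changed: " + clusterStatus)
      watchLeaderChildren()   // watches fire once, so set it again
    }
  }).forPath("/delivery/leader")
}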

In watchLeaderChildren a watch is set on the children of the LeaderLatch znode. Each child represents a cluster member and the first znode in the list of children is the leader.  If the set of children changes the watch is fired and the process() method is called. In process() the cluster status is queried and the watch is set again (Zookeeper watches are disabled after they fire).

Cluster Status using Curator

ZookeeperService.clusterStatus looks like:
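
(Roughly; Participant.getId returns the node ID each member registered with.)

import scala.collection.JavaConverters._

def clusterStatus: ClusterStatus = {
  val participants = leaderLatch.getParticipants.asScala.map(_.getId).toSeq
  ClusterStatus(current = nodeId,
                leader = leaderLatch.getLeader.getId,
                participants = participants)
}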

It is a straightforward query of the LeaderLatch to populate the ClusterStatus object.

Application Startup

ClusterService.initialize is called at application startup if ClusterService.enabled is true. Here is the ZookeeperService implementation:
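
(A sketch matching the description below; the retry settings are arbitrary.)

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.ExponentialBackoffRetry

def initialize() {
  val connectString = zookeeperServers.map(_ + ":" + clientPort).mkString(",")
  client = CuratorFrameworkFactory.newClient(
    connectString, new ExponentialBackoffRetry(1000, 10))
  client.start()

  // Block until the client is connected, then start leader election
  client.getZookeeperClient.blockUntilConnectedOrTimedOut()
  selectLeader()
}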

CuratorFramework is the high level Curator API.  It wraps a Curator Client which wraps the Zookeeper client. After the CuratorFramework is created the startup code blocks until the client has connected to the server and is ready to start working. Then selectLeader() is called to start the leader election process. Once the leader election process is started, Curator handles everything else under the hood (for example if the leader dies, a new node joins the cluster, or if one of the Zookeeper servers goes down).

Coming Up

In the next posts in this series I will embed the Zookeeper server into the application, handle a corner case when leadership changes while the leader is working, discuss work distribution in detail and I will talk about a port of the clustering functionality to Akka Cluster. Stay tuned!

Happy Holidays!

An Auto-Updating Caching System – part 2

In the previous post we imagined that we needed to build a caching system in front of a slow backend system. The cache needed to meet the following requirements:

  • The data in the backend system is constantly being updated so the caches need to be updated every N minutes.
  • Requests to the backend system need to be throttled.

Akka actors looked like a good fit for the requirements. Each actor would handle a query for the backend system and cache the results. In part 1 I talked about the CacheSystem which created CacheActors and provided helper methods for working with the caches. In this post I will cover the CacheActor class hierarchy.

Here is the base CacheActor class:

abstract class CacheActor[V](cacheSystem: CacheSystem) extends Actor with Logging {
  implicit val execContext: ExecutionContext = context.dispatcher

  def findValueReceive: Receive = {
    case FindValue(params) => findValueForSender(params, sender)
  }

  def findValueForSender(params: Params, sender: ActorRef) {
    val key = params.cacheKey
    val elem = cache.get(key)

    if (elem != null) {
      sender ! elem.getObjectValue.asInstanceOf[V]
    } else {
      Future { findObject(params) }.onComplete {
        case Right(result) => result match {
          case Some(value) => sender ! value
          case None => sender ! Status.Failure(new Exception("findObject returned None for key=" + key + " cache=" + cache.getName))
        }
        case Left(ex) => sender ! Status.Failure(ex)
      }
    }
  }

  def findObject(params: Params): Option[V] = {
    cacheSystem.findObjectForCache(params.cacheKey, cache, 
                                   finder(params))
  }

  // Provided by subclasses
  val cache: Cache
  def finder(params: Params): () => V
}

object CacheActor {
  case class FindValue(params: Params)

  trait Params {
    def cacheKey: String
  }
}

Part 1 showed an example of a CachingBusinessService sending a FindValue message to a Service1CacheActor using the ? (ask) method.  findValueReceive handles FindValue by either returning a value from the cache or making a call to the backend (via CacheSystem.findObjectForCache) to get the value.

Concrete CacheActors are responsible for implementing finder which returns a function to query the backend system. The returned function is ultimately executed by CacheSystem.findObjectForCache.

Part 1 also showed CacheSystem sending UpdateCacheForNow messages to periodically update cache values. UpdateCacheForNow is handled by a subclass of CacheActor, DateCacheActor:

abstract class DateCacheActor[V](cacheSystem: CacheSystem) 
    extends CacheActor[V](cacheSystem) {

  override def receive = findValueReceive orElse  {
    case UpdateCacheForNow => updateCacheForNow()

    case UpdateCacheForPreviousBusinessDay => updateCacheForPreviousBusinessDay()
  }

  def updateCacheForNow() {
    val activeBusinessDay: Range[Date] = DateUtil.calcActiveBusinessDay
    val start = activeBusinessDay.getStart
    val now = new Date

    // If today is a business day and now is within the business day, 
    // retrieve data from the backend and put in the cache
    if (now.getTime >= start.getTime && 
        now.getTime <= activeBusinessDay.getEnd.getTime) {
      updateCacheForDate(now)
    }
  }

  def updateCacheForPreviousBusinessDay() {
    updateCacheForDate(DateUtil.calcActiveBusinessDay.getStart)
  }

  def updateCacheForDate(date: Date) {
    import DateCacheActor._    // Use separate thread pool
    Future { findObject(new DateParams(date)) }
  }
}

object DateCacheActor {
  // Update cache for the current time
  case object UpdateCacheForNow  

  // Update cache for previous business day   
  case object UpdateCacheForPreviousBusinessDay

  // updateCacheForDate() uses a separate thread pool to prevent scheduled tasks 
  // from interfering with user requests
  val FUTURE_POOL_SIZE = 5
  val FUTURE_QUEUE_SIZE = 20000

  private lazy val ucfdThreadPoolExecutor = 
    new ThreadPoolExecutor(FUTURE_POOL_SIZE, FUTURE_POOL_SIZE, 1, TimeUnit.MINUTES, 
                           new ArrayBlockingQueue[Runnable](FUTURE_QUEUE_SIZE, true))
  implicit lazy val ucfdExecutionContext: ExecutionContext = 
    ExecutionContext.fromExecutor(ucfdThreadPoolExecutor)
}

During non-business hours UpdateCacheForNow messages are ignored and values from the previous business day are returned from the cache.  If the app is started during non-business hours an UpdateCacheForPreviousBusinessDay message is scheduled to populate cache values for the previous business day.

A separate thread pool is used to perform the backend system queries for the scheduled UpdateCacheFor* tasks.   We don’t want them to interfere with user requests which are handled using the regular actor thread pool.

Here is what a concrete DateCacheActor would look like, using the Service1CacheActor from part 1 as an example:

class Service1CacheActor(val cache: Cache, cacheSystem: CacheSystem, 
                         bizService: BusinessService) 
    extends DateCacheActor[JList[Service1Result]](cacheSystem) {

  override def receive = super.receive

  override def updateCacheForDate(date: Date) {
    import DateCacheActor._
    Future { findObject(new Service1Params(date, true)) }
    Future { findObject(new Service1Params(date, false)) }
  }

  def finder(params: Params) = { () =>
    params match {
      case p: Service1Params => bizService.service1(p.date, p.useFoo)
      case _ => throw new IllegalArgumentException("...") 
    }
  }
}

class Service1Params(date: Date, val useFoo: Boolean) extends DateParams(date) {
  override def cacheKey = super.cacheKey + ":" + useFoo
}

Service1CacheActor’s implementation of updateCacheForDate finds and caches the results of the true and false variations of the BusinessService.service1 backend system call.

If we wanted to cache another one of BusinessService’s methods using the auto-updating caching system we would (a sketch follows the list):

  1. Subclass DateCacheActor, implement finder and potentially override updateCacheForDate.
  2. Subclass DateParams, providing the parameters to the backend query, and override the cacheKey method.
  3. Call createCacheActor again in CachingBusinessService to create the new DateCacheActor from #1, and write a cached version of the backend query method, sending FindValue to the new actor and waiting for the response.
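
For instance, caching a hypothetical BusinessService.service2(date, limit) method would look roughly like this (all of the Service2 names are invented for illustration):

class Service2Params(date: Date, val limit: Int) extends DateParams(date) {
  override def cacheKey = super.cacheKey + ":" + limit
}

class Service2CacheActor(val cache: Cache, cacheSystem: CacheSystem,
                         bizService: BusinessService)
    extends DateCacheActor[JList[Service2Result]](cacheSystem) {

  override def receive = super.receive

  // Refresh the variant(s) we care about on the update schedule
  override def updateCacheForDate(date: Date) {
    import DateCacheActor._
    Future { findObject(new Service2Params(date, limit = 10)) }
  }

  def finder(params: Params) = { () =>
    params match {
      case p: Service2Params => bizService.service2(p.date, p.limit)
      case _ => throw new IllegalArgumentException("unexpected params")
    }
  }
}

// In CachingBusinessService:
val service2CacheActor =
  cacheSystem.createCacheActor("service2", DATE_CACHE_SIZE, 0 seconds,
                               new Service2CacheActor(_, _, bizService))

def service2(date: Date, limit: Int): JList[Service2Result] = {
  val future = service2CacheActor ? FindValue(new Service2Params(date, limit))
  Await.result(future, timeout.duration).asInstanceOf[JList[Service2Result]]
}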

An Auto-Updating Caching System – part 1

Imagine you needed to build a caching system in front of a slow backend system with the following requirements:

  • The data in the backend system is constantly being updated so the caches need to be updated every N minutes.
  • Requests to the backend system need to be throttled.

Here’s a possible solution taking advantage of Akka actors and Scala’s support for functions as first class objects.   The first piece of the puzzle is a CacheSystem which creates and queries EhCache caches:

class CacheSystem(name: String, updateIntervalMin: Int, cacheManager: CacheManager) {
  var caches = List.empty[Cache]
  val actorSystem = ActorSystem("cache_" + name)

  val DEFAULT_TTL_SEC = 86400   // 1 day

  def addCache(name: String, size: Int, ttlSeconds: Int = DEFAULT_TTL_SEC): Cache = {
    val config = new CacheConfiguration(name, size)
    config.setTimeToIdleSeconds(ttlSeconds)
    val cache = new Cache(config)
    cacheManager.addCache(cache)
    caches = cache :: caches
    cache
  }

  def createCacheActor(cacheName: String, cacheSize: Int, scheduleDelay: Duration, 
                       actorCreator: (Cache, CacheSystem) => Actor, 
                       ttlSeconds: Int = DEFAULT_TTL_SEC): ActorRef = {

    val cache = addCache(cacheName, cacheSize, ttlSeconds)
    val actor = actorSystem.actorOf(Props(actorCreator(cache, this)), 
                                    name = cacheName + "CacheActor")

    actorSystem.scheduler.schedule(scheduleDelay, updateIntervalMin minutes, 
                                   actor, UpdateCacheForNow)   
    if (!DateUtil.isNowInActiveBusinessDay) {
      actorSystem.scheduler.scheduleOnce(scheduleDelay, actor, 
                                         UpdateCacheForPreviousBusinessDay)
    }

    actor
  }

  def findCachedObject[T](key: String, cache: Cache, finder: () => T): Option[T] = {
    val element = cache.get(key)

    if (element == null) {
      findObjectForCache(key, cache, finder)
    } else {
      Some(element.getObjectValue.asInstanceOf[T])
    }
  }

  def findObjectForCache[T](key: String, cache: Cache, finder: () => T): Option[T] = {
    val domainObj = finder()
    if (domainObj != null) {
      val element = new Element(key, domainObj)
      cache.put(element)
      Some(domainObj)
    } else {
      None
    }
  }

  def clearAllCaches() {
    caches.foreach(_.removeAll())
  }
}

The createCacheActor method creates a cache and an actor, and schedules tasks to periodically update the cache. I decided to use actors for this because the actor system’s thread pool is a good way to meet the throttling requirement. In addition using the Akka API it is easy to have scheduled tasks send messages to actors.  createCacheActor takes a function of  (Cache, CacheSystem) => Actor to create the actor.   It then fills in those parameters to create the actor using the Akka actorOf method.

The findCachedObject and findObjectForCache methods take a finder function of () => T that handles the lookup of objects from the backend system.

Here is an example of the CacheSystem being used by the business logic layer:

class CachingBusinessService(bizService: BusinessService) extends BusinessService {
  implicit val timeout = Timeout(60 seconds)

  val service1CacheActor = 
    cacheSystem.createCacheActor("service1", DATE_CACHE_SIZE, 0 seconds, 
                                 new Service1CacheActor(_, _, bizService))
  // ... more actors created here

  def service1(date: Date, useFoo: Boolean): JList[Service1Result] = {
    val future = 
      service1CacheActor ? FindValue(new Service1Params(date, useFoo))
    Await.result(future, timeout.duration).asInstanceOf[JList[Service1Result]]
  }

  // ... more service methods
}

The CachingBusinessService is a caching implementation of the BusinessService interface. It creates CacheActors to service the requests.   To create the Service1CacheActor it passes a curried constructor to createCacheActor.

The caching implementation of service1 sends a FindValue message to the service1CacheActor, using the ? (ask) method which returns an Akka Future.  Then it waits for the result of the future and returns it to the caller.

Using Await.result should raise a red flag. You don’t want to block if you don’t have to (especially inside of an actor). However in this case the BusinessService is being called as part of a REST API served by a non-async HTTP server. Before the caching layer was introduced it would block waiting for the backend to respond.

Here’s the code for the FindValue message and the Params that it contains.  Params are the parameters for the backend query. Each unique Params object corresponds to a cache entry so each Params subclass is responsible for generating the appropriate cache key.

object CacheActor {
  case class FindValue(params: Params)

  trait Params {
    def cacheKey: String
  }
}

In the next post I’ll describe the CacheActor class hierarchy.

Using Groovy Closures as Scala Functions

I have a Scala trait for persistence and transaction management (which I will blog about in more detail later). The trait looks like:

trait DomainManager {
  def get[E](id: Long)(implicit m: ClassManifest[E]): Option[E]

  def find[E](namedQuery: String, params: Map[String, Any] = null): Option[E]

  //...more methods

  def withTransaction[R](f: (TransactionStatus) => R,
            readOnly: Boolean = false,
            propagationBehavior: PropagationBehavior = PropagationRequired): R
}

Let’s take a look at withTransaction() specifically. It is called like:

val result = domainManager.withTransaction { txStatus =>
  // Access database here
}

If your application is written in Scala or Java it is sometimes handy to have certain pieces of it written in Groovy, to be able to easily change a class and reload it without restarting the application. Your Groovy code will be able to call any of your Java classes, but what about your Scala classes? For example, what if we want to call withTransaction() on DomainManager? How do we deal with the f parameter? And what about the default parameter values?

Groovy and Scala both have the concept of functions as first class objects. In Groovy they are called closures and they are implemented by the class groovy.lang.Closure. In Scala they are called functions and they are implemented by the traits scala.Function0, scala.Function1, ... scala.FunctionN, where N is the number of parameters to the function.

In withTransaction() the type of f is Function1[TransactionStatus, R].  It is a function that takes one parameter of type TransactionStatus and returns a generic R.   

Through Groovy magic closures can be coerced to arbitrary interfaces. For example, I wrapped the withTransaction() method in Groovy like this:

    def withTransaction(Closure closure) {
        domainManager.withTransaction(closure as Function1<TransactionStatus, Object>,
             false, PropagationRequired)
    }

Here the closure parameter is coerced (using the Groovy as coercion operator) to a Scala Function1[TransactionStatus, _] which is the proper type after the generic parameter R is erased.

You cannot use Scala’s default parameter values in Groovy or Java so the last two parameters (readOnly and propagationBehavior) need to be passed explicitly.

Now I can call my Groovy withTransaction() with a closure like:

  def result = withTransaction { txStatus ->
    // Access database here
  }

The same technique can be used to call Scala collection methods like List.map() with Groovy closures.

Templating XML data with Velocity

Velocity is an easy-to-use templating system for the JVM. It’s commonly used to code templates for web pages and email. To use Velocity you pass it a template (a string) and a context, which is a map of Javabeans and collections of Javabeans. The template is coded using the Velocity template language. Here is an example of a template (taken from the Velocity User Guide):

Hello $customer.Name!
<table>
#foreach( $mud in $mudsOnSpecial )
   #if ( $customer.hasPurchased($mud) )
      <tr>
        <td>
          $flogger.promo( $mud )
        </td>
      </tr>
   #end
#end
</table>

What if some of your data is not in Javabean form, but is free form XML (free form meaning you don’t have any control over what the structure is going to be)?

Static languages like Scala and Java are pretty limited for dealing with free form XML. You can parse it into a DOM-like tree or parse it using a SAX-like streaming parser. Then to make the data available to Velocity you would write a Velocity-compatible adapter for the chosen XML API.

Groovy has really nice XML handling capabilities. You can parse XML and then use the results using regular Groovy code, not ugly DOM walking. For example, given this XML:

    <records>
      <car name='HSV Maloo' make='Holden' year='2006'>
        <country>Australia</country>
        <record type='speed'>Production Pickup Truck with speed of 271kph</record>
      </car>
      <car name='P50' make='Peel' year='1962'>
        <country>Isle of Man</country>
        <record type='size'>Smallest Street-Legal Car at 99cm wide and 59 kg</record>
      </car>
      <car name='Royale' make='Bugatti' year='1931'>
        <country>France</country>
        <record type='price'>Most Valuable Car at $15 million</record>
      </car>
    </records>

You can parse and use it like this in Groovy:

def records = new XmlSlurper().parseText(xml)

def allRecords = records.car
assert 3 == allRecords.size()
def allNodes = records.depthFirst().collect{ it }
assert 10 == allNodes.size()
def firstRecord = records.car[0]
assert 'car' == firstRecord.name()
assert 'Holden' == firstRecord.@make.text()
assert 'Australia' == firstRecord.country.text()
def carsWith_e_InMake = 
  records.car.findAll{ it.@make.text().contains('e') }

Using the Groovy API you can write a Velocity adapter for free form XML that exposes most of the power of the native Groovy language features. The Groovy API is just another set of classes that you can use in any JVM application. You can use the API without using the Groovy language itself.

Here is the adapter code in Scala. It wraps Groovy GNodes and GNodeLists with objects that are compatible with Velocity:

import groovy.util.{XmlParser, Node => GNode, NodeList => GNodeList}

object GNodeWrapper {
  def xmlToGNode(xml: String) = 
    GNodeWrapper(new XmlParser().parseText(xml))

  def wrapGNodes(n: Any): AnyRef = n match {
    case list: GNodeList => GNodeListWrapper(list)
    case node: GNode => GNodeWrapper(node)
    case x @ _ => x.asInstanceOf[AnyRef]
  }
}
import GNodeWrapper._

case class GNodeWrapper(node: GNode) {
  def get(key: String) = {
    val gnode = node.get(key)
    gnode match {
      case list: GNodeList if list.size == 1 =>
        val n = list.get(0).asInstanceOf[GNode]
        if (n.children.size == 1) {
          n.children.get(0) match {
            case _: GNode => wrapGNodes(n)
            case x @ _ => x
          }
        } else {
          wrapGNodes(n)
        }
      case x @ _ => wrapGNodes(x)
    }
  }

  override def toString: String = node.text
}

case class GNodeListWrapper(nodeList: GNodeList) {
  def get(key: String) = wrapGNodes(nodeList.getAt(key))

  def get(index: Int) = wrapGNodes(nodeList.get(index))

  def size = nodeList.size

  def isEmpty = nodeList.isEmpty

  def iterator = GNodeListIterator(nodeList.iterator)

  override def toString: String = {
    if (nodeList.size == 0) ""
    else
      nodeList.get(0) match {
        case node: GNode => node.text
        case x @ _ => x.toString
      }
  }
}

case class GNodeListIterator(iter: java.util.Iterator[_]) 
      extends java.util.Iterator[AnyRef] {
  def hasNext = iter.hasNext

  def next = wrapGNodes(iter.next)

  def remove() = iter.remove()
}

That’s not much code, especially compared to what the Java/DOM equivalent would be.

Using the adapter looks like:

  // xml is a string containing the sample XML from above
  val contextData =
      Map("title" -> "test title",
          "content" -> "test content",
          "meta" -> GNodeWrapper.xmlToGNode(xml))

  // Renders template and contextData to stringWriter
  velocityEngine.evaluate(new VelocityContext(contextData.asJava), 
                            stringWriter, "example", template)
  

So a template that looks like this:

title=$title
content=$content
country=$meta.records.car[0].country
year=$meta.records.car[2].get('@year')
numcars=$meta.records.car.size()
names=#foreach($c in $meta.records.car)$c.get('@name') #end

Would render like this:

title=test title
content=test content
country=Australia 
year=1931
numcars=3
names=HSV Maloo P50 Royale

The same technique could be used to render free form XML in another templating system like JSP.