554 tweets, 54 min read
Preparing for AWS exams?
Read this thread:
The user can configure SQS to decouple the call between the EC2 application and S3, so the application does not block while waiting for S3 to provide the data.
1/n
Auto Scaling enables you to follow the demand curve for your applications closely, reducing the need to manually provision Amazon EC2 capacity in advance.
2/n
For example, you can set a condition to add new Amazon EC2 instances in increments when the average utilisation of your Amazon EC2 fleet is high, and similarly, you can set a condition to remove instances in the same increments when CPU utilisation is low.
3/n
If you have predictable load changes, you can set a schedule through Auto Scaling to plan your scaling activities.
4/n
You can use Amazon CloudWatch to send alarms to trigger scaling activities and Elastic Load Balancing to help distribute traffic to your instances within Auto Scaling groups. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilisation.
5/n
Amazon S3 provides four different access control mechanisms: AWS Identity and Access Management (IAM) policies, Access Control Lists (ACLs), bucket policies, and query string authentication.
6/n
Amazon S3 bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket.
7/n
With Query string authentication, you have the ability to share Amazon S3 objects through URLs that are valid for a specified period of time.
8/n
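The idea behind query string authentication can be sketched with a toy signed URL. This is an illustrative sketch only: real S3 pre-signed URLs use AWS Signature Version 4, and the secret key, bucket, and object names below are made up.

```python
import hmac, hashlib, time
from urllib.parse import urlencode

# Made-up secret for illustration; real signing uses your AWS secret key.
SECRET = b"example-secret-key"

def presign(bucket, key, expires_in, now=None):
    """Build a URL that embeds an expiry time and a signature over it."""
    now = int(time.time()) if now is None else now
    expires = now + expires_in
    to_sign = f"GET\n/{bucket}/{key}\n{expires}".encode()
    sig = hmac.new(SECRET, to_sign, hashlib.sha256).hexdigest()
    qs = urlencode({"Expires": expires, "Signature": sig})
    return f"https://{bucket}.s3.amazonaws.com/{key}?{qs}"

def is_valid(bucket, key, expires, signature, now=None):
    """Server-side check: recompute the signature and test expiry."""
    now = int(time.time()) if now is None else now
    to_sign = f"GET\n/{bucket}/{key}\n{expires}".encode()
    expected = hmac.new(SECRET, to_sign, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and now < expires
```

Anyone holding the URL can fetch the object until the embedded expiry passes, after which validation fails.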
The user can get notifications via SNS if notifications were configured when creating the Auto Scaling group.
9/n
In Elastic Load Balancing, a health check configuration uses information such as protocol, ping port, ping path (URL), response timeout period, and health check interval to determine the health state of the instances registered with the load balancer.
10/n
Currently, HTTP on port 80 is the default health check. Security groups act as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.
11/n
Security groups are stateful: (Return traffic is automatically allowed, regardless of any rules)
12/n
Network access control lists (ACLs)—Act as a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level. Network ACLs are stateless: (Return traffic must be explicitly allowed by rules).
13/n
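The stateful/stateless distinction above can be modelled in a few lines. This is an assumed, simplified sketch (ports only, no CIDRs or protocols): a stateful filter remembers outbound flows and automatically admits the return traffic; a stateless filter evaluates every packet against its rules.

```python
class StatefulFirewall:          # behaves like a security group
    def __init__(self, inbound_allowed_ports):
        self.inbound = set(inbound_allowed_ports)
        self.tracked = set()     # connections this host initiated

    def send(self, dst_port):
        self.tracked.add(dst_port)

    def accept(self, src_port):
        # Return traffic for a tracked connection is allowed regardless of rules.
        return src_port in self.tracked or src_port in self.inbound


class StatelessFirewall:         # behaves like a network ACL
    def __init__(self, inbound_allowed_ports):
        self.inbound = set(inbound_allowed_ports)

    def send(self, dst_port):
        pass                     # nothing is remembered

    def accept(self, src_port):
        # Every packet, including return traffic, must match a rule.
        return src_port in self.inbound
```

With both filters allowing only port 80 inbound, an outbound call to port 443 gets its reply through the stateful filter but is dropped by the stateless one.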
Amazon Glacier supports various vault operations.
14/n
A vault inventory refers to the list of archives in a vault.
15/n
Downloading a vault inventory is an asynchronous operation. 
16/n
Given the asynchronous nature of the job, you can use Amazon Simple Notification Service (Amazon SNS) notifications to notify you when the job completes
17/n
Amazon Glacier prepares an inventory for each vault periodically, every 24 hours. If there have been no archive additions or deletions to the vault since the last inventory, the inventory date is not updated.
18/n
Within Amazon EC2, when using a Linux instance, the device name /dev/sda1 is reserved for the root device. Another device name, /dev/xvda, is also reserved for certain Linux root devices.
19/n
Each Amazon EBS Snapshot has a createVolumePermission attribute that you can set to one or more AWS Account IDs to share the snapshot with those AWS Accounts.
20/n
To allow several AWS Accounts to use a particular EBS snapshot, you can use the snapshot's createVolumePermission attribute to include a list of the accounts that can use it.
21/n
A Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific instance.
22/n
This ensures that all requests from the user during the session are sent to the same instance.
23/n
EMR monitoring choices: Hadoop web interfaces.
24/n
Every cluster publishes a set of web interfaces on the master node that contain information about the cluster. You can access these web pages by using an SSH tunnel to connect to them on the master node.
25/n
CloudWatch Metrics: Every cluster reports metrics to CloudWatch. CloudWatch is a web service that tracks metrics, and which you can use to set alarms on those metrics.
26/n
Ganglia: Ganglia is a cluster monitoring tool. To have this available, you have to install Ganglia on the cluster when you launch it. After you've done so, you can monitor the cluster as it runs by using SSH tunnel to connect to the Ganglia UI running on master node.
27/n
The Multi AZ feature allows the user to achieve High Availability. For Multi AZ, Amazon RDS automatically provisions and maintains a synchronous “standby” replica in a different Availability Zone.
28/n
By default, all accounts are limited to 5 Elastic IP addresses per region. If you need more than 5 Elastic IP addresses, AWS asks that you apply for your limit to be raised.
29/n
When the user account has reached the maximum number of EC2 instances, it will not be allowed to launch an instance. AWS will throw an ‘InstanceLimitExceeded’ error.
30/n
For all other reasons, such as "AMI is missing part", "Corrupt snapshot" or "Volume limit has been reached", it will launch an EC2 instance and then terminate it.
31/n
A VPC security group controls access to DB instances and EC2 instances inside a VPC. Amazon RDS uses VPC security groups only for DB instances launched by recently created AWS accounts.
32/n
CloudFormation: If any of the services fails to launch, CloudFormation will rollback all the changes and terminate or delete all the created services.
33/n
When modifying EBS snapshot permissions with AWS Console, one of the options is to make the snapshot public or not. However, snapshots with AWS Marketplace product codes CANNOT be made public.
34/n
Amazon EBS replication is stored within the same availability zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability.
35/n
For customers who have architected complex transactional databases using EBS, it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed.
36/n
AWS Import/Export supports: import to Amazon S3, import to Amazon EBS, import to Amazon Glacier, and export from Amazon S3 (only).
37/n
When you enable connection draining, you can specify a maximum time for the load balancer to keep connections alive before reporting the instance as deregistered. The maximum timeout value can be set between 1 and 3,600 seconds (the default is 300 seconds).
38/n
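The draining window above can be sketched as a clamp plus a liveness check. A hedged sketch with invented helper names; the 1–3,600 second range and 300-second default come from the tweet above.

```python
def draining_timeout(requested=None):
    """Clamp a requested draining timeout to the allowed 1-3,600s range."""
    if requested is None:
        return 300                   # the default
    return max(1, min(3600, requested))

def connection_open(started_draining_at, now, timeout):
    """True while an in-flight connection may stay alive during draining."""
    return (now - started_draining_at) < timeout
```

When the timeout elapses, the load balancer forcibly closes any connections still open to the deregistering instance.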
Amazon SNS makes it simple and cost-effective to push to mobile devices, such as iPhone, iPad, Android, Kindle Fire, and internet-connected smart devices, as well as pushing to other distributed services.
39/n
In relation to AWS CloudHSM, High-availability (HA) recovery is hands-off resumption by failed HA group members. 
40/n
Prior to the introduction of this function, the HA feature provided redundancy and performance, but required that a failed/lost group member be manually reinstated.
41/n
AWS generates a separate unique encryption key for each Amazon Glacier archive and encrypts the archive using AES-256. The archive's encryption key is itself encrypted using AES-256 with a master key that is stored in a secure location.
42/n
Instances that you launch into a default subnet receive both a public IP address and a private IP address.
43/n
Instances that you launch into a non-default subnet in a default VPC don't receive a public IP address or a DNS hostname. You can change your subnet's default public IP addressing behaviour.
44/n
When you create or modify your DB Instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous “standby” replica in a different Availability Zone. 
45/n
Updates to your DB Instance are synchronously replicated across Availability Zones to the standby in order to keep both in sync and protect your latest database updates against DB Instance failure. 
46/n
To determine your instance’s public IP address from within the instance, you can use instance metadata at http://169.254.169.254/latest/meta-data/
47/n
You can't attach an EBS volume to multiple EC2 instances; that would be equivalent to using a single hard drive with many computers at the same time.
48/n
RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations of these RAID modes consume some of the IOPS available to your volumes.
49/n
New Amazon SES users start in the Amazon SES sandbox, which is a test environment that has a sending quota of 1,000 emails per 24-hour period, at a maximum rate of 1 email per second.
50/n
SES Sending limits are based on recipients rather than on messages. 
51/n
Every Amazon SES sender has a unique set of sending limits, which are calculated by Amazon SES on an ongoing basis.
52/n
The Elastic Load Balancer connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the instances are deregistering or become unhealthy, while ensuring that in-flight requests continue to be served.
53/n
Max connection draining time is 1 hour (3600 seconds).
54/n
Resource-based permissions are supported by Amazon S3, Amazon SNS, Amazon SQS, Amazon Glacier.
55/n
Amazon DynamoDB integrates with AWS Identity and Access Management (IAM).
56/n
You can use AWS IAM to grant access to Amazon DynamoDB resources and API actions. To do this, you first write an AWS IAM policy, which is a document that explicitly lists the permissions you want to grant. You then attach that policy to an AWS IAM user or role.
57/n
Every CloudFront web distribution must be associated either with the default CloudFront certificate or with a custom SSL certificate.
58/n
Before you can delete an SSL certificate, you need to either rotate SSL certificates (replace the current custom SSL certificate with another custom SSL certificate) or revert from using a custom SSL certificate to using the default CloudFront certificate.
59/n
You can't use IAM to control access to CloudWatch data for specific resources.
60/n
FGAC can benefit any application that tracks information in a DynamoDB table, where the end user (or application client acting on behalf of an end user) wants to read or modify the table directly, without a middle-tier service.
61/n
The core components of DynamoDB are: a "Table", a collection of Items; "Items", each with a Key and one or more Attributes; and "Attributes", each with a Name and a Value.
62/n
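The three building blocks above can be modelled in memory. This is an illustrative sketch, not the DynamoDB API; the class, method, and attribute names are invented.

```python
class Table:
    """A table holds items, each addressed by its key attribute."""
    def __init__(self, key_name):
        self.key_name = key_name
        self.items = {}

    def put_item(self, item):
        # Every item must carry the table's key attribute;
        # the remaining attributes are free-form name/value pairs.
        key = item[self.key_name]
        self.items[key] = item

    def get_item(self, key):
        return self.items.get(key)

# Example: a table keyed on "UserId" with two extra attributes.
users = Table(key_name="UserId")
users.put_item({"UserId": "u1", "Name": "Alice", "Age": 30})
```

Retrieving by key returns the whole item; items the table has never seen return nothing, mirroring a key-value lookup.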
The AWS CloudHSM service defines a resource known as a high-availability (HA) partition group, which is a virtual partition that represents a group of partitions, typically distributed between several physical HSMs for high-availability. 
63/n
You are billed per-second with a one-minute minimum for On-Demand, Spot and Reserved instances as long as your EC2 instance is in a running state, provided the instance is Linux (with some exceptions).
64/n
For a Windows operating system, the instance is billed per-hour.
65/n
Virtual tape shelf is backed by Amazon Glacier whereas virtual tape library is backed by Amazon S3.
66/n
Amazon CloudFront billing is mainly affected by Data Transfer Out, Edge Location Traffic Distribution, Invalidation Requests, HTTP/HTTPS Requests, Dedicated IP, SSL Certificates.
67/n
You can create an Auto Scaling group directly from an EC2 instance. When you use this feature, Auto Scaling automatically creates a launch configuration for you as well.
68/n
In AWS CloudHSM, you can perform a remote backup/restore of a Luna SA partition if you have purchased a Luna Backup HSM. 
69/n
A VPC can span several Availability Zones. In contrast, a subnet must reside within a single Availability Zone.
70/n
You grant AWS Lambda permission to access a DynamoDB Stream using an IAM role known as the “execution role”. 
71/n
You can assign tags only to resources that already exist. You can't terminate, stop, or delete a resource based solely on its tags; you must specify the resource identifier.
72/n
The different cluster states of an Amazon EMR cluster are listed below.
73/n
STARTING – The cluster provisions, starts, and configures EC2 instances.
74/n
BOOTSTRAPPING – Bootstrap actions are being executed on the cluster.
75/n
RUNNING – A step for the cluster is currently being run.
76/n
WAITING – The cluster is currently active, but has no steps to run.
77/n
TERMINATING - The cluster is in the process of shutting down.
78/n
TERMINATED - The cluster was shut down without error.
79/n
TERMINATED_WITH_ERRORS - The cluster was shut down with errors.
When you create a snapshot of a Throughput Optimised HDD (st1) or Cold HDD (sc1) volume, performance may drop as far as the volume's baseline value while the snapshot is in progress.
80/n
Bucket names must be globally unique, regardless of the AWS region in which you create the bucket, and they must be DNS-compliant.
81/n
Bucket names must be at least 3 and no more than 63 characters long. Bucket names can contain lowercase letters, numbers, periods, and/or hyphens. Each label must start and end with a lowercase letter or a number.
82/n
Bucket names must not be formatted as an IP address (e.g., 192.168.1.1).
83/n
The URL of any S3 object follows this template: https://s3-<region>.amazonaws.com/<bucket-name>/<object-path><object-name>
84/n
DDOS attack: The attack surface is composed of the different Internet entry points that allow access to your application.
85/n
The strategy to minimise the attack surface area is to
86/n
Reduce the number of necessary Internet entry points,
87/n
Eliminate non-critical Internet entry points,
88/n
Separate end user traffic from management traffic,
89/n
Obfuscate necessary Internet entry points to the level that untrusted end users cannot access them,
90/n
and Decouple Internet entry points to minimise the effects of attacks.
91/n
This strategy can be accomplished with Amazon VPC.
92/n
Amazon RDS read replicas provide enhanced performance and durability for Amazon RDS instances.
93/n
This replication feature makes it easy to scale out elastically beyond the capacity constraints of a single Amazon RDS instance for read-heavy database workloads.
94/n
You can create one or more replicas of a given source Amazon RDS instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput.
95/n
An AWS instance profile is a container for an AWS Identity and Access Management (IAM) role that you can use to pass role information to an Amazon EC2 instance when the instance starts.
96/n
The IAM role should have a policy attached that only allows access to the AWS Cloud services necessary to perform its function.
97/n
To create an Availability Zone-independent architecture, create a NAT gateway in each Availability Zone and configure your routing to ensure that resources use the NAT gateway in the same Availability Zone.
98/n
DynamoDB: You can create a maximum of 5 global secondary indexes per table.
99/n
Elastic Load Balancing supports the Server Order Preference option for negotiating connections between a client and a load balancer.
100/n
During the SSL connection negotiation process, the client and the load balancer present a list of ciphers and protocols that they each support, in order of preference.
101/n
By default, the first cipher on the client’s list that matches any one of the load balancer’s ciphers is selected for the SSL connection.
102/n
If the load balancer is configured to support Server Order Preference, then the load balancer selects the first cipher in its list that is in the client’s list of ciphers.
103/n
This ensures that the load balancer determines which cipher is used for SSL connection. If you do not enable Server Order Preference, the order of ciphers presented by the client is used to negotiate connections between the client and the load balancer.
104/n
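The two negotiation modes described above can be sketched as a single function. The cipher names below are illustrative, and the function is an assumed simplification of the TLS handshake.

```python
def negotiate(client_ciphers, server_ciphers, server_order_preference=False):
    """Return the cipher chosen for the connection, or None if no overlap.

    Without Server Order Preference, the client's list is walked in order;
    with it, the load balancer's own list is walked in order instead.
    """
    if server_order_preference:
        chooser, other = server_ciphers, set(client_ciphers)
    else:
        chooser, other = client_ciphers, set(server_ciphers)
    for cipher in chooser:           # first cipher also present on the other side wins
        if cipher in other:
            return cipher
    return None
```

With the same two lists, flipping the flag changes which side's preference order decides the outcome.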
Amazon WorkSpaces uses PCoIP, which delivers an interactive video (pixel) stream to the client, so the actual data never leaves the cloud.
105/n
Architect for high availability: Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.
106/n
Amazon DynamoDB does not have a server-side feature to encrypt items within a table.
107/n
You need to use a solution outside of DynamoDB such as a client-side library to encrypt items before storing them, or a key management service like AWS Key Management Service to manage keys that are used to encrypt items before storing them in DynamoDB.
108/n
Amazon EC2 roles must be assigned a policy. 
109/n
Integration of Role with Active Directory involves integration between Active Directory and IAM via SAML.
110/n
DynamoDB: You can have multiple local secondary indexes, and they must be created at the same time the table is created. You can create multiple global secondary indexes associated with a table at any time.
111/n
The Auto Scaling cool-down period is a configurable setting for your Auto Scaling group that helps ensure that Auto Scaling doesn’t launch or terminate additional instances before the previous scaling activity takes effect.
112/n
After the Auto Scaling group dynamically scales using a simple scaling policy, Auto Scaling waits for the cool-down period to complete before resuming scaling activities.
113/n
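The cool-down rule above can be sketched as a guard on scaling activity. A hedged sketch with invented field names; times are plain numbers rather than real clocks.

```python
class AutoScalingGroup:
    def __init__(self, cooldown=300):
        self.cooldown = cooldown
        self.last_scaled_at = None
        self.capacity = 1

    def scale(self, delta, now):
        """Apply a simple scaling policy, unless still inside the cool-down."""
        if self.last_scaled_at is not None and now - self.last_scaled_at < self.cooldown:
            return False             # trigger ignored; previous activity still settling
        self.capacity += delta
        self.last_scaled_at = now
        return True
```

A trigger that fires 100 seconds after a scale-out is ignored; the same trigger after the full 300-second cool-down takes effect.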
Amazon DynamoDB supports Query operations that require an input value and Scan operations that do not require an input value when retrieving data from a table. #DynamoDB
114/n
Data is copied asynchronously from the source database to the Read Replica. #RDS
115/n
The supported notification protocols for SNS are HTTP, HTTPS, Amazon SQS, Email, Short Message Service (SMS), and AWS Lambda. #SNS
116/n
Delay queues make messages unavailable upon arrival to the queue and visibility timeouts make messages unavailable after being retrieved from the queue. #SQS
117/n
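The two SQS timers above can be contrasted with a toy queue. This is a local model for illustration: times are plain numbers, and the method names are invented, not the SQS API.

```python
class Queue:
    def __init__(self, delay_seconds=0, visibility_timeout=30):
        self.delay = delay_seconds
        self.visibility = visibility_timeout
        self.messages = []           # each entry: [body, visible_at]

    def send(self, body, now):
        # Delay queues hide a message on arrival.
        self.messages.append([body, now + self.delay])

    def receive(self, now):
        for msg in self.messages:
            if msg[1] <= now:
                # Visibility timeout hides a message after retrieval.
                msg[1] = now + self.visibility
                return msg[0]
        return None
```

A message sent at t=0 into a queue with a 10-second delay is invisible at t=5, retrievable at t=10, hidden again until t=40 by the visibility timeout, then visible once more if not deleted.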
You must provide the receipt handle for the message in order to delete it from a queue. #SQS
118/n
VPC subnets do not span availability zones; each subnet resides in exactly one availability zone. #VPC
119/n
The bursty nature of the IO (~3000 IOPS) makes the General Purpose SSD the more cost-effective choice. #IO #EBS
120/n
The instance type defines the virtual hardware (CPU, Memory) allocated to the instance. #EC2
121/n
Identity federation is based on temporary security tokens. Access cannot be granted directly to external identities, nor can they be added to IAM groups. #IAM
122/n
Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service are the three possible destinations for Amazon Kinesis Firehose data. #KinesisFirehose
123/n
Amazon EMR is an excellent choice to analyze data at rest, such as logs stored on Amazon S3. #AmazonEMR
124/n
Amazon Kinesis Streams can analyze streams of data in real time. #KinesisStreams
125/n
Gateway Stored Volumes replicate all data to the cloud asynchronously while storing it all locally to reduce latency. #StorageGateway
126/n
Gateway Cached Volumes provide an iSCSI interface to block storage, storing all data in Amazon S3 for durability and retaining only recently accessed data in the local cache. #StorageGateway
127/n
Amazon CloudFront can use any HTTP/S source as an origin, whether on AWS or on-premises. #Cloudfront
128/n
AWS Config tracks the configuration of your AWS infrastructure and does not monitor its health. #AWSConfig
129/n
Amazon Glacier objects are retrieved online (not shipped) and are available in three to five hours. #Glacier
130/n
You must enable versioning before you can enable cross-region replication, and Amazon S3 must have IAM permissions to perform the replication. #S3
131/n
Lifecycle rules migrate data from one storage class to another, not from one bucket to another. #S3
132/n
Amazon CloudWatch metric data is kept for 2 weeks. #CloudWatch
133/n
Query is the most efficient operation to find a single item in a large table. #DynamoDB
134/n
Amazon SES can also be used to receive messages and deliver them to an Amazon S3 bucket, call custom code via an AWS Lambda function, or publish notifications to Amazon SNS. #SES
135/n
Resources aren’t replicated across regions unless organizations choose to do so. #General
136/n
In Amazon S3, you GET an object or PUT an object, operating on the whole object at once, instead of incrementally updating portions of the object as you would with a file. #S3
137/n
You can’t “mount” a bucket, “open” an object, install an operating system on Amazon S3, or run a database on S3. #S3
138/n
Amazon Elastic File System (AWS EFS) provides network-attached shared file storage (NAS storage) using the NFS v4 protocol.
139/n
S3 bucket objects - User metadata is optional, and it can only be specified at the time an object is created.
140/n
The combination of bucket, key, and optional version ID uniquely identifies an Amazon S3 object.
141/n
For PUTs to existing objects (object overwrite to an existing key) and for object DELETEs, Amazon S3 provides eventual consistency.
142/n
Amazon S3 bucket policies are the recommended access control mechanism for Amazon S3 and provide much finer-grained control.
144/n
Versioning is turned on at the bucket level. Once enabled, versioning cannot be removed from a bucket; it can only be suspended.
145/n
MFA Delete can only be enabled by the root account.
146/n
Multipart upload is a three-step process: initiation, uploading the parts, and completion (or abort).
147/n
Cross-region replication is a feature of Amazon S3 that allows you to asynchronously replicate all new objects in the source bucket in one AWS region to a target bucket in another region.
148/n
To enable cross-region replication, versioning must be turned on for both source and destination buckets, and you must use an IAM policy to give Amazon S3 permission to replicate objects on your behalf.
149/n
In Amazon Glacier, data is stored in archives. An archive can contain up to 40TB of data, and you can have an unlimited number of archives.
150/n
Glacier Vaults are containers for archives. Each AWS account can have up to 1,000 vaults.
151/n
Amazon Glacier supports 40TB archives versus 5TB objects in Amazon S3.
152/n
Archives in Amazon Glacier are identified by system-generated archive IDs, while Amazon S3 lets you use “friendly” key names.
153/n
Amazon Glacier archives are automatically encrypted, while encryption at rest is optional in Amazon S3.
154/n
Versioning and MFA Delete can be used to protect against accidental deletion in S3 buckets.
155/n
Multipart upload can be used to upload large objects, and Range GETs can be used to download portions of an Amazon S3 object or Amazon Glacier archive.
156/n
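Splitting an object into the byte ranges used for Range GETs (and, analogously, multipart upload parts) can be sketched as follows. This is an assumed helper for illustration; the `Range: bytes=start-end` header uses inclusive offsets.

```python
def byte_ranges(object_size, part_size):
    """Return (start, end) inclusive byte ranges covering the object."""
    ranges = []
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1
        ranges.append((start, end))
    return ranges
```

A 10-byte object fetched in 4-byte parts yields the ranges (0,3), (4,7), and (8,9), each of which could be requested and downloaded independently.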
Amazon S3 event notifications can be used to send an Amazon SQS or Amazon SNS message or to trigger an AWS Lambda function when an object is created or deleted.
157/n
Amazon Glacier vaults can be locked for compliance purposes.
158/n
S3 - Use ACLs, Amazon S3 bucket policies, and AWS IAM policies for access control.
159/n
CloudFront - Use pre-signed URLs for time-limited download access.
160/n
RRS offers lower durability at lower cost for easily replicated data.
161/n
Lifecycle configuration rules define actions to transition objects from one storage class to another based on time.
162/n
Data is stored in encrypted archives that can be as large as 40TB.
163/n
Enhanced networking is available only for instances launched in an Amazon Virtual Private Cloud.
164/n
The Amazon Machine Image (AMI) defines the initial software that will be on an instance when it is launched.
165/n
Security groups allow you to control traffic based on port, protocol, and source.
166/n
EC2-Classic security groups control incoming instance traffic only.
167/n
VPC security groups control both outgoing and incoming instance traffic.
168/n
Every instance must have at least one security group but can have more.
169/n
A security group is default deny; that is, it does not allow any traffic that is not explicitly allowed by a security group rule.
170/n
A security group is a stateful firewall; that is, an outgoing message is remembered so that the response is allowed through the security group without an explicit inbound rule being required.
171/n
Security groups are applied at the instance level, as opposed to a traditional on-premises firewall that protects at the perimeter.
172/n
UserData is stored with the instance and is not encrypted, so it is important to not include any secrets such as passwords or keys in the UserData.
173/n
Outside of an Amazon VPC (called EC2-Classic), the association of security groups cannot be changed after launch.
174/n
In order to prevent termination via the AWS Management Console, CLI, or API, termination protection can be enabled for an instance.
175/n
While enabled, calls to terminate the instance will fail until termination protection is disabled. It does not prevent termination triggered by an OS shutdown command, termination from an Auto Scaling group, or termination of a Spot Instance due to Spot price changes.
176/n
Amazon VPC is the networking layer for Amazon Elastic Compute Cloud (Amazon EC2).
177/n
Default Amazon VPCs contain one public subnet in every Availability Zone within the region, with a netmask of /20.
178/n
Each route table contains a default route called the local route, which enables communication within the Amazon VPC, and this route cannot be modified or removed.
179/n
An Amazon VPC may have multiple peering connections, and peering is a one-to-one relationship between Amazon VPCs, meaning two Amazon VPCs cannot have two peering agreements between them.
180/n
SG: You can specify allow rules, but not deny rules. This is an important difference between security groups and ACLs.
181/n
Instances associated with the same security group can’t talk to each other unless you add rules allowing it (with the exception being the default security group).
182/n
You can change the security groups with which an instance is associated after launch, and the changes will take effect immediately.
183/n
A network access control list (ACL) is another layer of security that acts as a stateless firewall on a subnet level.
184/n
A network ACL is a numbered list of rules that AWS evaluates in order, starting with the lowest numbered rule
185/n
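Ordered network ACL evaluation can be sketched in one function: rules are tried from the lowest rule number upward, the first match decides, and the implicit final rule (shown as `*` in the console) denies everything else. A simplified model using ports only; real ACL rules also carry protocols and CIDR ranges.

```python
def evaluate_acl(rules, port):
    """rules: list of (rule_number, port_or_None_for_all, 'ALLOW'|'DENY')."""
    for number, rule_port, action in sorted(rules, key=lambda r: r[0]):
        if rule_port is None or rule_port == port:
            return action            # lowest-numbered matching rule wins
    return "DENY"                    # the implicit catch-all rule
```

Note that a low-numbered DENY on port 22 beats a higher-numbered allow-everything rule, even though the allow rule also matches.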
Every subnet must be associated with a network ACL.
186/n
For common use cases, AWS recommends that you use a NAT gateway instead of a NAT instance.
187/n
The VPG is the AWS end of the VPN tunnel. The CGW is a hardware or software application on the customer’s side of the VPN tunnel.
188/n
You must initiate the VPN tunnel from the CGW to the VPG.
189/n
A public subnet is one in which the associated route table directs the subnet’s traffic to the Amazon VPC’s IGW.
190/n
A private subnet is one in which the associated route table does not direct the subnet’s traffic to the Amazon VPC’s IGW.
191/n
A VPN-only subnet is one in which the associated route table directs the subnet’s traffic to the Amazon VPC’s VPG and does not have a route to the IGW.
192/n
An IGW provides a target in your Amazon VPC route tables for Internet-routable traffic, and it performs network address translation for instances that have been assigned public IP addresses.
193/n
In order for you to assign your own domain name to your instances, you create a custom DHCP option set and assign it to your Amazon VPC.
194/n
An Amazon VPC endpoint enables you to create a private connection between your Amazon VPC and another AWS service without requiring access over the Internet or through a NAT instance, VPN connection, or AWS Direct Connect. 
195/n
Endpoints support services within the region only.
196/n
You can create an Amazon VPC peering connection between your own Amazon VPCs or with an Amazon VPC in another AWS account within a single region.
197/n
A NAT instance is a customer-managed instance.
198/n
A NAT gateway is an AWS-managed service 
199/n
Transitive peering is not supported, and peering is only available between Amazon VPCs within the same region.
200/n
The VPN connection must be initiated from the CGW side, and the connection consists of two IPSec tunnels.
201/n
Amazon CloudWatch is a service that monitors AWS Cloud resources and applications running on AWS.
202/n
It collects and tracks metrics, collects and monitors log files, and sets alarms. Amazon CloudWatch has a basic level of monitoring for no cost and a more detailed level of monitoring for an additional cost.
203/n
Elastic Load Balancing supports routing and load balancing of Hypertext Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure (HTTPS), Transmission Control Protocol (TCP), and Secure Sockets Layer (SSL) traffic to Amazon EC2 instances.
204/n
Because Elastic Load Balancing is a managed service, it scales in and out automatically to meet the demands of increased application traffic, and it is highly available within a region.
205/n
Elastic Load Balancing also supports integrated certificate management and SSL termination.
206/n
Elastic Load Balancing in Amazon VPC supports IPv4 addresses only.
207/n
Elastic Load Balancing in EC2-Classic supports both IPv4 and IPv6 addresses.
208/n
You can use internal load balancers to route traffic to your Amazon EC2 instances in VPCs with private subnets.
209/n
In order to use SSL, you must install an SSL certificate on the load balancer that it uses to terminate the connection and then decrypt requests from clients before sending requests to the back-end Amazon EC2 instances.
210/n
You can optionally choose to enable authentication on your back-end instances.
211/n
Elastic Load Balancing does not support Server Name Indication (SNI) on your load balancer.
212/n
If you want to host multiple websites on a fleet of Amazon EC2 instances behind Elastic Load Balancing with a single SSL certificate, you will need to add a Subject Alternative Name (SAN) for each website to the certificate 
213/n
Every load balancer must have one or more listeners configured.
214/n
Every listener is configured with a protocol and a port (client to load balancer) for a front-end connection and a protocol and a port for the back-end (load balancer to Amazon EC2 instance) connection.
215/n
Elastic Load Balancing supports the following protocols: HTTP, HTTPS, TCP, and SSL.
216/n
Elastic Load Balancing supports protocols operating at two different Open System Interconnection (OSI) layers. Layer 4 & 7.
217/n
Elastic Load Balancing allows you to configure many aspects of the load balancer, including idle connection timeout, cross-zone load balancing, connection draining, proxy protocol, sticky sessions, and health checks.
218/n
For each request that a client makes through a load balancer, the load balancer maintains two connections. One connection is with the client and the other connection is to the back-end instance.
219/n
By default, Elastic Load Balancing sets the idle timeout to 60 seconds for both connections.
220/n
Keep-alive, when enabled, allows the load balancer to reuse connections to your back-end instance, which reduces CPU utilization.
221/n
To ensure that the load balancer is responsible for closing the connections to your back-end instance, make sure that the value you set for the keep-alive time is greater than the idle timeout setting on your load balancer.
222/n
You should enable connection draining to ensure that the load balancer stops sending requests to instances that are deregistering or unhealthy, while keeping the existing connections open.
223/n
This enables the load balancer to complete in-flight requests made to these instances.
224/n
Draining timeout between 1 and 3,600 seconds (the default is 300 seconds). When the maximum time limit is reached, the load balancer forcibly closes connections to the deregistering instance.
225/n
If you enable Proxy Protocol, a human-readable header is added to the request header with connection information such as the source IP address, destination IP address, and port numbers. The header is then sent to the back-end instance as part of the request.
226/n
Sticky session feature (also known as session affinity) enables the load balancer to bind a user’s session to a specific instance.
227/n
Elastic Load Balancing creates a cookie named AWSELB that is used to map the session to the instance.
228/n
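The sticky-session binding can be sketched as a toy model (not the real ELB implementation): the load balancer sets an AWSELB cookie on first contact, then routes requests carrying that cookie back to the same instance.

```python
import random

# Toy model of ELB sticky sessions: an AWSELB cookie issued on the first
# request binds the session to one back-end instance. Illustrative only.

INSTANCES = ["i-aaa", "i-bbb", "i-ccc"]

def route(cookies):
    """Return (instance, cookies), binding the session on first request."""
    if "AWSELB" not in cookies:
        cookies = dict(cookies, AWSELB=random.choice(INSTANCES))
    return cookies["AWSELB"], cookies

instance, cookies = route({})   # first request: cookie assigned
again, _ = route(cookies)       # later requests stick to the same instance
assert instance == again
```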
A health check is a ping, a connection attempt, or a page that is checked periodically.
229/n
Amazon CloudWatch supports multiple types of actions such as sending a notification to an Amazon Simple Notification Service (Amazon SNS) topic or executing an Auto Scaling policy.
230/n
Amazon CloudWatch supports an API that allows programs and scripts to PUT metrics into Amazon CloudWatch as name-value pairs that can then be used to create events and trigger alarms in the same manner as the default Amazon CloudWatch metrics.
231/n
A CloudWatch Logs agent is available that provides an automated way to send log data to CloudWatch Logs for Amazon EC2 instances running Amazon Linux or Ubuntu. 
232/n
You can use the Amazon CloudWatch Logs agent installer on an existing Amazon EC2 instance to install and configure the CloudWatch Logs agent.
233/n
Each AWS account is limited to 5,000 alarms, and metrics data is retained for two weeks by default.
234/n
Auto Scaling types:
235/n
Maintain Current Instance Levels 
236/n
Manual Scaling 
237/n
Scheduled scaling means that scaling actions are performed automatically as a function of time and date.
238/n
Dynamic scaling lets you define parameters that control the Auto Scaling process in a scaling policy.
239/n
Auto Scaling has several components that need to be configured to work properly: a launch configuration, an Auto Scaling group, and an optional scaling policy.
240/n
A launch configuration is the template that Auto Scaling uses to create new instances, and it is composed of the configuration name, Amazon Machine Image (AMI), Amazon EC2 instance type, security group, and instance key pair.
241/n
The default limit for launch configurations is 100 per region. If you exceed this limit, the call to create-launch-configuration will fail.
242/n
Use the aws autoscaling describe-account-limits CLI command (the DescribeAccountLimits API call) to find out your account's Auto Scaling limits.
243/n
Auto Scaling may cause you to reach limits of other services, such as the default number of Amazon EC2 instances you can currently launch within a region, which is 20.
244/n
A launch configuration can reference On-Demand Instances or Spot Instances, but not both (e.g., --spot-price "0.15").
245/n
Auto Scaling protects instances from termination during scale-in events.
246/n
This means that when Auto Scaling receives a CloudWatch trigger to scale in, it terminates only those instances in the Auto Scaling group that do not have instance protection enabled.
247/n
However, instance protection does not prevent Spot Instance termination triggered when the market price exceeds your bid price.
248/n
The AWS Trusted Advisor provides best practices (or checks) in four categories: Cost Optimization, Security, Fault Tolerance, and Performance Improvement.
249/n
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your Amazon Elastic Compute Cloud (Amazon EC2) instances.
250/n
You can use the stop or terminate actions to help you save money when you no longer need an instance to be running.
251/n
Since AWS is a public cloud, any application hosted on EC2 is exposed to attackers.
252/n
It therefore becomes extremely important for a user to set up a proper security mechanism on the EC2 instances.
253/n
A few of the security measures are listed below:
254/n
Always keep the OS updated with the latest patch
255/n
Always create separate users within the OS if they need to connect to the EC2 instances; create keys for them and disable their passwords
256/n
Create a procedure by which the admin can revoke a user's access once their work on the EC2 instance is completed
257/n
Lock down unnecessary ports
258/n
Audit any proprietary applications that the user may be running on the EC2 instance
259/n
Provide temporary escalated privileges, such as sudo for users who need to perform occasional privileged tasks
260/n
A recommended best practice is to scale out quickly and scale in slowly, so you can respond to bursts or spikes while avoiding terminating Amazon EC2 instances so quickly that you only have to launch more if the burst is sustained.
261/n
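This practice can be sketched as a simple policy function. The thresholds, step sizes, and limits below are illustrative assumptions, not AWS defaults:

```python
# Sketch of "scale out quickly, scale in slowly": add capacity in larger
# steps on high CPU, remove it one instance at a time on low CPU.
# Thresholds and step sizes are illustrative, not AWS defaults.

def desired_capacity(current, avg_cpu, minimum=2, maximum=10):
    if avg_cpu > 70:            # burst: scale out quickly (+2)
        return min(current + 2, maximum)
    if avg_cpu < 30:            # quiet: scale in slowly (-1)
        return max(current - 1, minimum)
    return current              # within the band: no change

assert desired_capacity(4, 85.0) == 6   # +2 on a spike
assert desired_capacity(4, 20.0) == 3   # -1 when load drops
assert desired_capacity(4, 50.0) == 4   # steady state
```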
IAM is not an identity store/authorization system for your applications.
262/n
If you are working with a mobile app, consider Amazon Cognito for identity management for mobile applications.
263/n
A principal is an IAM entity that is allowed to interact with AWS resources. A principal can be permanent or temporary, and it can represent a human or an application.
264/n
There are three types of principals: root users, IAM users, and roles / temporary security tokens.
265/n
Roles are used to grant specific privileges to specific actors for a set duration of time.
266/n
When one of these actors assumes a role, AWS provides the actor with a temporary security token from the AWS Security Token Service (STS) that the actor can use to access AWS Cloud services.
267/n
The range of a temporary security token lifetime is 15 minutes to 36 hours.
268/n
Amazon EC2 Roles: Granting permissions to applications running on an Amazon EC2 instance. Cross-Account Access: Granting permissions to users from other AWS accounts, whether you control those accounts or not.
269/n
Federation: Granting permissions to users authenticated by a trusted external system.
270/n
IAM can integrate with two different types of outside identity providers: web identity providers (e.g., Facebook) and internal/enterprise providers (e.g., Active Directory).
271/n
A policy is a JSON document that fully defines a set of permissions to access and manipulate AWS resources.
272/n
The security risk of any credential increases with the age of the credential. To this end, it is a security best practice to rotate access keys associated with your IAM users. IAM facilitates this process by allowing two active access keys at a time.
273/n
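The rotation flow enabled by the two-active-keys allowance can be sketched as follows. The AccessKeys class is a stand-in for IAM state, not a real API:

```python
# Sketch of IAM access-key rotation, which works because IAM allows two
# active access keys per user at once. Stand-in for IAM state, not boto3.
import secrets

class AccessKeys:
    def __init__(self):
        self.keys = {}                 # key id -> "Active" / "Inactive"

    def create(self):
        if len(self.keys) >= 2:
            raise RuntimeError("IAM allows at most two access keys per user")
        key_id = "AKIA" + secrets.token_hex(8).upper()
        self.keys[key_id] = "Active"
        return key_id

    def deactivate(self, key_id):
        self.keys[key_id] = "Inactive"

    def delete(self, key_id):
        del self.keys[key_id]

user = AccessKeys()
old = user.create()
new = user.create()        # both keys active while applications migrate
user.deactivate(old)       # verify nothing still signs with the old key
user.delete(old)           # then remove it, completing the rotation
assert list(user.keys) == [new]
```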
If an AssumeRole call includes a role and a policy, the policy cannot expand the privileges of the role.
274/n
Common use cases for IAM roles include federating identities from external IdPs, assigning privileges to an Amazon EC2 instance where they can be assumed by applications running on the instance, and cross-account access.
275/n
The three principals that can authenticate and interact with AWS resources are the root user, IAM users, and roles.
276/n
Amazon RDS does not provide shell access to Database (DB) Instances, and it restricts access to certain system procedures and tables that require advanced privileges.
277/n
Existing DB Instances can be changed or resized using the ModifyDBInstance API.
278/n
A DB parameter group acts as a container for engine configuration values that can be applied to one or more DB Instances.
279/n
A DB option group acts as a container for engine features, which is empty by default.
280/n
Amazon RDS MySQL supports Multi-AZ deployments for high availability and read replicas for horizontal scaling.
281/n
AWS offers two licensing models: License Included and Bring Your Own License (BYOL).
282/n
When you first create an Amazon Aurora instance, you create a DB cluster.
283/n
A DB cluster has one or more instances and includes a cluster volume that manages the data for those instances. Each DB cluster can have up to 15 Amazon Aurora Replicas in addition to the primary instance.
284/n
Amazon RDS supports three storage types: Magnetic, General Purpose (Solid State Drive [SSD]), and Provisioned IOPS (SSD).
285/n
RPO is defined as the maximum period of data loss that is acceptable 
286/n
RTO is defined as the maximum amount of downtime that is permitted to recover from backup and to resume processing.
287/n
When you delete a DB Instance, all automated backup snapshots are deleted and cannot be recovered. Manual snapshots, however, are not deleted.
288/n
Automated backups are kept for a configurable number of days, called the backup retention period.
289/n
You cannot restore from a DB snapshot to an existing DB Instance; a new DB Instance is created when you restore.
290/n
Multi-AZ deployments are available for all types of Amazon RDS database engines.
291/n
Amazon RDS automatically replicates the data from the master database or primary instance to the slave database or secondary instance using synchronous replication.
292/n
Amazon RDS will automatically fail over to the standby instance without user intervention. The DNS name remains the same, but the Amazon RDS service changes the CNAME to point to the standby.
293/n
Failover between the primary and the secondary instance is fast, and the time automatic failover takes to complete is typically one to two minutes.
294/n
Each database instance can scale from 5GB up to 6TB in provisioned storage depending on the storage type and engine.
295/n
Read replicas are currently supported in Amazon RDS for MySQL, PostgreSQL, MariaDB, and Amazon Aurora.
296/n
Updates made to the source DB Instance are asynchronously copied to the read replica. You can reduce the load on your source DB Instance by routing read queries from your applications to the read replica.
297/n
Before you can deploy into an Amazon VPC, you must first create a DB subnet group that predefines which subnets are available for Amazon RDS deployments.
298/n
The key component of an Amazon Redshift data warehouse is a cluster.
299/n
A cluster is composed of a leader node and one or more compute nodes. 
300/n
The Dense Compute node types support clusters up to 326TB using fast SSDs, while the Dense Storage nodes support clusters up to 2PB using large magnetic disks. The number of slices per node depends on the node size of the cluster and typically varies between 2 and 16.
301/n
Whenever you perform a resize operation, Amazon Redshift will create a new cluster and migrate data from the old cluster to the new one. During a resize operation, the database will become read-only until the operation is finished.
302/n
The data distribution style that you select for your database has a big impact on query performance, storage requirements, data loading, and maintenance. When creating a table, you can choose between one of three distribution styles: EVEN, KEY, or ALL.
303/n
EVEN distribution is the default option and results in the data being distributed across the slices in a uniform fashion regardless of the data.
304/n
KEY distribution: With KEY distribution, the rows are distributed according to the values in one column. The leader node will store matching values close together and increase query performance for joins.
305/n
With ALL, a full copy of the entire table is distributed to every node. This is useful for lookup tables and other tables that are not updated frequently.
306/n
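The three distribution styles can be illustrated with a toy model that spreads rows across slices. Slice counts and sample rows are made up; real Redshift hashing differs:

```python
# Toy model of Redshift distribution styles: EVEN (round-robin), KEY
# (hash of one column, co-locating matching values), ALL (full copy on
# every slice). Illustrative only; not Redshift's actual hash function.
from zlib import crc32

rows = [{"id": i, "region": r} for i, r in enumerate(["us", "eu", "us", "ap"])]

def distribute(rows, slices, style, key=None):
    placed = {s: [] for s in range(slices)}
    for n, row in enumerate(rows):
        if style == "EVEN":
            placed[n % slices].append(row)
        elif style == "KEY":
            placed[crc32(str(row[key]).encode()) % slices].append(row)
        elif style == "ALL":
            for s in placed:
                placed[s].append(row)
    return placed

by_key = distribute(rows, 2, "KEY", key="region")
everywhere = distribute(rows, 2, "ALL")

# ALL: every slice holds the full table.
assert all(len(v) == len(rows) for v in everywhere.values())
# KEY: both "us" rows land on the same slice, helping joins.
us_slices = {s for s, v in by_key.items() for row in v if row["region"] == "us"}
assert len(us_slices) == 1
```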
The sort keys for a table can be either compound or interleaved.
307/n
A compound sort key is more efficient when query predicates use a prefix, which is a subset of the sort key columns in order.
308/n
An interleaved sort key gives equal weight to each column in the sort key, so query predicates can use any subset of the columns that make up the sort key, in any order.
309/n
Amazon Redshift supports standard SQL commands like INSERT and UPDATE to create and modify records in a table.
310/n
A COPY command can load data into a table in the most efficient manner, and it supports multiple types of input data sources.
311/n
Data can also be exported out of Amazon Redshift using the UNLOAD command. This command can be used to generate delimited text files and store them in Amazon S3.
312/n
For large Amazon Redshift clusters supporting many users, you can configure Workload Management (WLM) to queue and prioritize queries. WLM allows you to define multiple queues and set the concurrency level for each queue.
313/n
Amazon DynamoDB supports two types of primary keys: a partition key alone (a simple primary key) or a partition key combined with a sort key (a composite primary key). This configuration cannot be changed after a table has been created.
314/n
It is possible for two items to have the same partition key value, but those two items must have different sort key values.
315/n
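The composite-key uniqueness rule can be sketched with a plain dictionary keyed by the (partition key, sort key) pair; the key values below are illustrative:

```python
# Sketch of DynamoDB's composite primary key rule: items may share a
# partition key, but each (partition key, sort key) pair is unique, and
# writing the same pair replaces the item. Key values are illustrative.

table = {}   # (partition_key, sort_key) -> item attributes

def put_item(pk, sk, attrs):
    table[(pk, sk)] = attrs          # same pair overwrites, as in DynamoDB

put_item("user#1", "order#100", {"total": 25})
put_item("user#1", "order#101", {"total": 40})   # same pk, different sk: OK
put_item("user#1", "order#100", {"total": 30})   # same pk + sk: replaced

assert len(table) == 2
assert table[("user#1", "order#100")]["total"] == 30
```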
You can create or delete a global secondary index on a table at any time. #DynamoDB
316/n
You can only create a local secondary index when you create a table. #DynamoDB
317/n
Amazon DynamoDB Streams makes it easy to get a list of item modifications for the last 24-hour period. 
318/n
Stream records are organized into groups, also referred to as shards. Shards live for a maximum of 24 hours and, with fluctuating load levels, could be split one or more times before they are eventually closed.
319/n
To build an application that reads from a shard, it is recommended to use the Amazon DynamoDB Streams Kinesis Adapter. Amazon SQS ensures delivery of each message at least once and supports multiple readers and writers interacting with the same queue.
320/n
Although most of the time each message will be delivered to your application exactly once, you should design your system to be idempotent (that is, it must not be adversely affected if it processes the same message more than once).
321/n
Delay queues allow you to postpone the delivery of new messages in a queue for a specific number of seconds.
322/n
To create a delay queue, use CreateQueue and set the DelaySeconds attribute to any value between 0 and 900 (15 minutes). The default value for DelaySeconds is 0.
323/n
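The delay-queue behaviour can be sketched with a fake clock; this mimics the semantics (messages invisible until DelaySeconds elapses), not the SQS API:

```python
# Toy delay queue: messages stay invisible until DelaySeconds (0-900,
# default 0) elapses. A fake clock stands in for real time.
import heapq

class DelayQueue:
    def __init__(self, delay_seconds=0):
        if not 0 <= delay_seconds <= 900:
            raise ValueError("DelaySeconds must be between 0 and 900")
        self.delay = delay_seconds
        self.heap = []                 # (visible_at, message)

    def send(self, message, now):
        heapq.heappush(self.heap, (now + self.delay, message))

    def receive(self, now):
        if self.heap and self.heap[0][0] <= now:
            return heapq.heappop(self.heap)[1]
        return None                    # nothing visible yet

q = DelayQueue(delay_seconds=60)
q.send("hello", now=0)
assert q.receive(now=30) is None       # still delayed
assert q.receive(now=60) == "hello"    # visible once the delay elapses
```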
When a message is in the queue but is neither delayed nor in a visibility timeout, it is considered to be “in flight.”
324/n
You can have up to 120,000 messages in flight at any given time. 
325/n
Amazon SQS supports up to 12 hours’ maximum visibility timeout.
326/n
Amazon SQS uses three identifiers that you need to be familiar with: queue URLs, message IDs, and receipt handles.
327/n
To delete a message, you need the message’s receipt handle instead of the message ID.
328/n
The maximum length of a message ID is 100 characters. The maximum length of a receipt handle is 1,024 characters. Each message can have up to 10 attributes.
329/n
If there is no message in the queue, then the call will wait up to WaitTimeSeconds for a message to appear before returning.
330/n
Long polling drastically reduces the amount of load on your client.
331/n
A dead letter queue is a queue that other (source) queues can target for messages that, for some reason, could not be processed successfully
332/n
You can create a dead letter queue from the Amazon SQS API and the Amazon SQS console.
333/n
Amazon SQS Access Control allows you to assign policies to queues that grant specific interactions to other accounts without that account having to assume IAM roles from your account.
334/n
Amazon SQS does not return success to a SendMessage API call until the message is durably stored in Amazon SQS.
335/n
In Amazon SWF, a task represents a logical unit of work that is performed by a component of your application.
336/n
When using Amazon SWF, you implement workers to perform tasks. These workers can run either on cloud infrastructure, such as Amazon EC2, or on your own premises.
337/n
Using Amazon SWF, you can implement distributed, asynchronous applications as workflows.
338/n
Workflows coordinate and manage the execution of activities that can be run asynchronously across multiple computing devices and that can feature both sequential and parallel processing.
339/n
Domains provide a way of scoping Amazon SWF resources within your AWS account. You must specify a domain for all the components of a workflow. It is possible to have more than one workflow in a domain;
340/n
Workflows in different domains cannot interact with one another.
341/n
Amazon SWF consists of a number of different types of programmatic features known as actors.
342/n
Actors communicate with Amazon SWF through its API.
343/n
A workflow starter is any application that can initiate workflow executions. An activity worker is a single computer process (or thread) that performs the activity tasks in your workflow. The logic that coordinates the tasks in a workflow is called the decider.
344/n
Amazon SWF provides activity workers and deciders with work assignments, given as one of three types of tasks: activity tasks, AWS Lambda tasks, and decision tasks.
345/n
An AWS Lambda task is similar to an activity task, but executes an AWS Lambda function instead of a traditional Amazon SWF activity.
346/n
The decision task contains the current workflow history.
347/n
Amazon SWF schedules a decision task when the workflow starts and whenever the state of the workflow changes, such as when an activity task completes.
348/n
Scheduling a task creates the task list if it doesn’t already exist.
349/n
Deciders and activity workers communicate with Amazon SWF using long polling.
350/n
A registered workflow type is identified by its domain, name, and version.
351/n
Workflow types are specified in the call to RegisterWorkflowType.
352/n
Activity types are specified in the call to RegisterActivityType.
353/n
Amazon SNS is a web service for mobile and enterprise messaging that enables you to set up, operate, and send notifications.
354/n
Amazon SNS follows the publish-subscribe (pub-sub) messaging paradigm.
355/n
A fanout scenario is when an Amazon SNS message is sent to a topic and then replicated and pushed to multiple Amazon SQS queues, HTTP endpoints, or email addresses (see Figure 8.5). This allows for parallel asynchronous processing.
356/n
Push email and text messaging are two ways to transmit messages to individuals or groups via email and/or SMS.
357/n
Visibility timeout is a period of time during which Amazon SQS prevents other components from receiving and processing a message because another component is already processing it.
358/n
By default, the message visibility timeout is set to 30 seconds, and the maximum that it can be is 12 hours.
359/n
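The visibility-timeout mechanics can be sketched with another toy model and a fake clock (illustrative API, not boto3): a received message is hidden from other consumers for the timeout, and reappears if it is not deleted in time.

```python
# Toy model of the SQS visibility timeout: receiving a message hides it
# for the timeout (default 30s, max 12h); if it is not deleted before
# the timeout expires, it becomes visible again. Fake clock, toy API.

class Queue:
    def __init__(self):
        self.messages = {}             # body -> invisible_until timestamp

    def send(self, body):
        self.messages[body] = 0

    def receive(self, now, visibility_timeout=30):
        for body, invisible_until in self.messages.items():
            if invisible_until <= now:
                self.messages[body] = now + visibility_timeout
                return body
        return None

    def delete(self, body):
        self.messages.pop(body, None)

q = Queue()
q.send("job-1")
assert q.receive(now=0) == "job-1"     # consumer A takes the message
assert q.receive(now=10) is None       # hidden from consumer B meanwhile
assert q.receive(now=30) == "job-1"    # not deleted in time: it reappears
```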
Long polling allows your Amazon SQS client to poll an Amazon SQS queue. If nothing is there, ReceiveMessage waits between 1 and 20 seconds.
360/n
You can use the following protocols with Amazon SNS: HTTP, HTTPS, SMS, email, email-JSON, Amazon SQS, and AWS Lambda.
361/n
Amazon Route 53 is an authoritative DNS system. An authoritative DNS system provides an update mechanism that developers use to manage their public DNS names.
362/n
Name servers can be authoritative, meaning that they give answers to queries about domains under their control.
363/n
A zone file is a simple text file that contains the mappings between domain names and IP addresses. This is how a DNS server finally identifies which IP address should be contacted when a user requests a certain domain name.
364/n
A Start of Authority (SOA) record is mandatory in all zone files.
365/n
The A record is used to map a host to an IPv4 IP address, while AAAA records are used to map a host to an IPv6 address.
366/n
The MX record should point to a host defined by an A or AAAA record and not one defined by a CNAME.
367/n
Pointer (PTR) record is essentially the reverse of an A record. PTR records map an IP address to a DNS name, and they are mainly used to check if the server name is associated with the IP address from where the connection was initiated.
368/n
Use an alias record, not a CNAME, for the zone apex of your hosted zone; CNAME records are not allowed at the zone apex in Amazon Route 53.
369/n
Routing policy options are simple, weighted, latency-based, failover, and geolocation.
370/n
Note that you can’t create failover resource record sets for private hosted zones.
371/n
Geolocation routing: If you don’t create a default resource record set, Amazon Route 53 returns a “no answer” response for queries from those locations.
372/n
You cannot create two geolocation resource record sets that specify the same geographic location.
373/n
Memcached is a simple-to-use in-memory key/value store that can be used to store arbitrary types of data.
374/n
Redis is a flexible in-memory data structure store that can be used as a cache, database, or even as a message broker.
375/n
Redis clusters can support up to five read replicas to offload read requests.
376/n
Some of the key actions an administrator can perform include CreateCacheCluster, ModifyCacheCluster, or DeleteCacheCluster. Redis clusters also support CreateReplicationGroup and CreateSnapshot actions, among others.
377/n
Use Memcached when you need a simple, in-memory object store that can be easily partitioned and scaled horizontally.
378/n
Use Redis when you need to back up and restore your data, need many clones or read replicas, or are looking for advanced functionality like sort and rank or leaderboards that Redis natively supports.
379/n
Amazon CloudFront is optimized to work with other AWS cloud services as the origin server, including Amazon S3 buckets, Amazon S3 static websites, Amazon Elastic Compute Cloud (Amazon EC2), and Elastic Load Balancing.
380/n
By default, objects expire from CloudFront cache after 24 hours.
381/n
Cache behaviors are applied in order; if a request does not match the first path pattern, it drops down to the next path pattern. Normally the last path pattern specified is * to match all files.
382/n
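The ordered matching of cache behaviors can be sketched with fnmatch-style patterns; the patterns and origin names below are made up for illustration:

```python
# Sketch of CloudFront cache-behavior matching: path patterns are tried
# in order, first match wins, and the last pattern is typically "*".
# Patterns and origin names are illustrative.
from fnmatch import fnmatch

behaviors = [
    ("images/*.jpg", "s3-images-origin"),
    ("api/*",        "elb-api-origin"),
    ("*",            "default-origin"),    # catch-all, listed last
]

def pick_origin(path):
    for pattern, origin in behaviors:
        if fnmatch(path, pattern):         # first match wins
            return origin

assert pick_origin("images/cat.jpg") == "s3-images-origin"
assert pick_origin("api/v1/users") == "elb-api-origin"
assert pick_origin("index.html") == "default-origin"
```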
Signed URLs: Use URLs that are valid only between certain times and optionally from certain IP addresses.
383/n
Signed Cookies Require authentication via public and private key pairs.
384/n
Origin Access Identities (OAI) Restrict access to an Amazon S3 bucket only to a special Amazon CloudFront user associated with your distribution. This is the easiest way to ensure that content in a bucket is only accessed by Amazon CloudFront.
385/n
If all or most requests come from a single location and/or arrive via VPN, don't use CloudFront.
386/n
AWS Storage Gateway is a service connecting an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and AWS storage infrastructure.
387/n
Gateway-Cached volumes allow you to expand your local storage capacity into Amazon S3. While each volume is limited to a maximum size of 32TB, a single gateway can support up to 32 volumes for a maximum storage of 1 PB. 
388/n
Gateway-Stored volumes allow you to store your data on your on-premises storage and asynchronously back up that data to Amazon S3 (16TB × 32 volumes = 512TB max).
389/n
When your tape software ejects a tape, it is archived on a Virtual Tape Shelf (VTS) and stored in Amazon Glacier.
390/n
You’re allowed 1 VTS per AWS region, but multiple gateways in the same region can share a VTS.
391/n
Simple AD is a Microsoft Active Directory-compatible directory from AWS Directory Service that is powered by Samba 4. Note that you cannot set up trust relationships between Simple AD and other Active Directory domains.
392/n
AD Connector is a proxy service for connecting your on-premises Microsoft Active Directory to the AWS cloud without requiring complex directory synchronization or the cost and complexity of hosting a federation infrastructure. 
393/n
You can also use AD Connector to enable MFA by integrating it with your existing Remote Authentication Dial-Up Service (RADIUS)-based MFA infrastructure to provide an additional layer of security when users access AWS applications.
394/n
Microsoft AD (AWS Directory Service for Microsoft Active Directory) is your best choice if you have more than 5,000 users and need a trust relationship set up between an AWS-hosted directory and your on-premises directories.
395/n
In most cases, Simple AD is the least expensive option and your best choice if you have 5,000 or fewer users and don’t need the more advanced Microsoft Active Directory features.
396/n
CMKs can never leave AWS KMS unencrypted, but data keys can leave the service unencrypted.
397/n
All AWS KMS cryptographic operations accept an optional key/value map of additional contextual information called an encryption context.
398/n
The specified context must be the same for both the encrypt and decrypt operations or decryption will not succeed.
399/n
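A toy illustration of this requirement: bind the context to the ciphertext with an HMAC tag so that decryption fails unless the same context is supplied. This only models the context check — it is not real AWS KMS and performs no actual encryption.

```python
# Toy model of an encryption context: extra key/value pairs are bound to
# the "ciphertext" via an HMAC tag, and decryption is refused unless the
# identical context is presented. Not KMS; no real encryption happens.
import hmac, hashlib, json

KEY = b"data-key"   # stands in for a KMS data key

def encrypt(plaintext, context):
    canon = json.dumps(context, sort_keys=True).encode()
    tag = hmac.new(KEY, canon + plaintext, hashlib.sha256).hexdigest()
    return {"ciphertext": plaintext, "tag": tag}   # plaintext left as-is

def decrypt(blob, context):
    canon = json.dumps(context, sort_keys=True).encode()
    expected = hmac.new(KEY, canon + blob["ciphertext"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["tag"]):
        raise ValueError("context mismatch: decryption refused")
    return blob["ciphertext"]

blob = encrypt(b"secret", {"purpose": "db-backup"})
assert decrypt(blob, {"purpose": "db-backup"}) == b"secret"
try:
    decrypt(blob, {"purpose": "other"})      # wrong context
except ValueError:
    pass                                     # refused, as expected
```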
Symmetric encryption algorithms require that the same key be used for both encrypting and decrypting the data.
400/n
When you create a trail that applies to all AWS regions, AWS CloudTrail creates the same trail in each region
401/n
Amazon Kinesis Firehose receives stream data and stores it in Amazon S3, Amazon Redshift, or Amazon Elasticsearch.
402/n
Amazon Kinesis Streams enable you to collect and process large streams of data records in real time.
403/n
When an Amazon EMR cluster is shut down, instance storage is lost and the data does not persist.
404/n
HDFS can also make use of Amazon EBS storage, trading in the cost effectiveness of instance storage for the ability to shut down a cluster without losing data.
405/n
AWS Data Pipeline is best for regular batch processes 
406/n
Use Amazon Kinesis for data streams.
407/n
AWS Snowball uses Amazon-provided shippable storage appliances shipped through UPS.
408/n
The AWS Snowball is its own shipping container, and the shipping label is an E Ink display that automatically shows the correct address when the AWS Snowball is ready to ship. You can drop it off with UPS, no box required.
409/n
AWS Import/ Export Disk transfers data directly onto and off of storage devices you own using the Amazon high-speed internal network.
410/n
AWS Import/ Export Disk has an upper limit of 16TB.
411/n
AWS OpsWorks is a configuration management service that helps you configure and operate applications using Chef.
412/n
The stack is the core AWS OpsWorks component. It is basically a container for AWS resources— Amazon EC2 instances, Amazon RDS database instances, and so on— that have a common purpose and make sense to be logically managed together.
413/n
You can use AWS OpsWorks or IAM to manage user permissions. Note that the two options are not mutually exclusive; it is sometimes desirable to use both.
414/n
You define the elements of a stack by adding one or more layers. A layer represents a set of resources that serve a particular purpose, such as load balancing, web applications, or hosting a database server.
415/n
Layers depend on Chef recipes to handle tasks such as installing packages on instances, deploying applications, and running scripts.
416/n
When you deploy an app, AWS OpsWorks triggers a Deploy event, which runs the Deploy recipes on the stack’s instances.
417/n
AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS.
418/n
A template is a text file whose format complies with the JSON standard. AWS CloudFormation uses these templates as blueprints for building your AWS resources.
419/n
If stack creation fails, AWS CloudFormation rolls back your changes by deleting the resources that it created.
420/n
You can use template parameters to tune the settings and thresholds in each region separately and still be sure that the application is deployed consistently across the regions.
421/n
To update a stack, create a change set by submitting a modified version of the original stack template, different input parameter values, or both.
422/n
AWS CloudFormation compares the modified template with the original template and generates a change set. The change set lists the proposed changes. After reviewing the changes, you can execute the change set to update your stack.
423/n
If you want to delete a stack but still retain some resources in that stack, you can use a deletion policy to retain those resources. If a resource has no deletion policy, AWS CloudFormation deletes the resource by default.
424/n
AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS.
425/n
Developers can simply upload their application code, and the service automatically handles all of the details, such as resource provisioning, load balancing, Auto Scaling, and monitoring.
426/n
An application version refers to a specific, labeled iteration of deployable code for a web application.
427/n
An environment is an application version that is deployed onto AWS resources.
428/n
Each environment runs only a single application version at a time;
429/n
An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave.
430/n
When an environment’s configuration settings are updated, AWS Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources depending on the type of change.
431/n
When an AWS Elastic Beanstalk environment is launched, the environment tier, platform, and environment type are specified.
432/n
An environment tier whose web application processes web requests is known as a web server tier.
433/n
An environment tier whose application runs background jobs is known as a worker tier. AWS Trusted Advisor draws upon best practices learned from the aggregated operational history of serving over a million AWS customers.
434/n
AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.
435/n
AWS Config will generate configuration items when the configuration of a resource changes, and it maintains historical records of the configuration items of your resources from the time you start the configuration recorder.
436/n
AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic.
437/n
It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance.
438/n
Attacks such as Address Resolution Protocol (ARP) cache poisoning do not work within Amazon EC2 and Amazon VPC.
439/n
The AWS IAM API enables you to rotate the access keys of your AWS account and also for IAM user accounts.
440/n
AWS passwords can be up to 128 characters long and contain special characters, giving you the ability to create very strong passwords.
441/n
Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit, but it also helps protect against potential replay attacks.
442/n
A request must reach AWS within 15 minutes of the timestamp in the request. Otherwise, AWS denies the request.
443/n
Version 4 provides an additional measure of protection over previous versions by requiring that you sign the message using a key that is derived from your secret access key instead of using the secret access key itself.
444/n
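The Signature Version 4 key derivation is published in the AWS documentation and uses only chained HMAC-SHA256, so it can be written with the standard library (the credentials below are fake examples):

```python
# The SigV4 signing-key derivation: the secret access key never signs
# requests directly; a scoped key is derived from it by chained
# HMAC-SHA256 over date, region, service, and "aws4_request".
import hmac, hashlib

def sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key, date_stamp, region, service):
    k_date    = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region  = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Fake example credentials; the derivation is deterministic and scoped.
key = derive_signing_key("wJalrEXAMPLEKEY", "20230901", "us-east-1", "iam")
assert len(key) == 32                                    # HMAC-SHA256 output
assert key == derive_signing_key("wJalrEXAMPLEKEY",
                                 "20230901", "us-east-1", "iam")
assert key != derive_signing_key("wJalrEXAMPLEKEY",
                                 "20230902", "us-east-1", "iam")  # new day, new key
```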
When you create an IAM role using the AWS Management Console, the console creates an instance profile automatically and gives it the same name as the role to which it corresponds.
445/n
When you use the AWS CLI, API, or an AWS SDK to create a role, you create the role and instance profile as separate actions, and you might give them different names.
446/n
To launch an instance with an IAM role, you specify the name of its instance profile.
447/n
Amazon CloudFront key pairs can be created only by the root account and cannot be created by IAM users.
448/n
For IAM users, you must create the X.509 certificate (signing certificate) by using third-party software.
449/n
You will also need an X.509 certificate to create a customized Linux AMI for Amazon EC2 instances.
450/n
The certificate is only required to create an instance store-backed AMI (as opposed to an Amazon EBS-backed AMI).
451/n
CloudTrail File Integrity: This feature is built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing.
452/n
This makes it computationally infeasible to modify, delete, or forge AWS CloudTrail log files without detection.
453/n
Amazon EC2 currently uses a highly customized version of the Xen hypervisor, taking advantage of paravirtualization (in the case of Linux guests).
454/n
Host Operating System Administrators with a business need to access the management plane are required to use MFA to gain access to purpose-built administration hosts.
455/n
Amazon EC2 provides a mandatory inbound firewall that is configured in a default deny-all mode; Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic.
456/n
Amazon EBS replication is stored within the same Availability Zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability.
457/n
It is recommended that RDS backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed.
458/n
SG: The default group enables inbound communication from other members of the same group and outbound communication to any destination.
459/n
With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources.
460/n
With bucket policies, you can grant users within your AWS account or other AWS accounts access to your Amazon S3 resources.
461/n
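A bucket policy granting another AWS account read access looks like the following sketch. The account ID and bucket name are placeholders, not real resources:

```python
import json

# Hypothetical bucket policy granting another AWS account (placeholder
# account ID and bucket name) read access to all objects in a bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}
policy_json = json.dumps(policy)
print(json.loads(policy_json)["Statement"][0]["Effect"])  # Allow
```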
Amazon Glacier stores files as archives within vaults.
462/n
You can store an unlimited number of archives in a single vault and can create up to 1,000 vaults per region. Each archive can contain up to 40 TB of data.
463/n
All network traffic entering or exiting your Amazon VPC via your IPsec VPN connection can be inspected by your on-premises security infrastructure, including network firewalls and intrusion detection systems.
464/n
If you require your MySQL data to be encrypted while at rest in the database, your application must manage the encryption and decryption of data.
465/n
When an Amazon RDS DB Instance deletion API (DeleteDBInstance) is run, the DB Instance is marked for deletion.
466/n
To increase performance, Amazon Redshift uses techniques such as columnar storage, data compression, and zone maps to reduce the amount of I/ O needed to perform queries.
467/n
It also has a Massively Parallel Processing (MPP) architecture, parallelizing and distributing SQL operations to take advantage of all available resources.
468/n
In Amazon Redshift, you grant database user permissions on a per-cluster basis instead of on a per-table basis. However, users can see data only in the table rows that were generated by their own activities; rows generated by other users are not visible to them.
469/n
Amazon Redshift stores your snapshots for a user-defined period, which can be from 1 to 35 days.
470/n
Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key.
471/n
Forward Secrecy uses session keys that are ephemeral and not stored anywhere, which prevents the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised. 
472/n
Using the Amazon ElastiCache service, you create a Cache Cluster, which is a collection of one or more Cache Nodes, each running an instance of the Memcached service.
473/n
To allow network access to your Cache Cluster, create a Cache Security Group and use the Authorize Cache Security Group Ingress API or CLI command to authorize the desired Amazon EC2 security group (which in turn specifies the Amazon EC2 instances allowed).
474/n
IP-range based access control is currently not enabled for Cache Clusters.
475/n
All clients to a Cache Cluster must be within the Amazon EC2 network, and authorized via Cache Security Groups.
476/n
When launching job flows on your behalf, Amazon EMR sets up two Amazon EC2 security groups: one for the master nodes and another for the slaves.
477/n
Amazon Kinesis is a managed service designed to handle real-time streaming of big data.
478/n
Federated users are users (or applications) who do not have AWS accounts. With roles, you can give them access to your AWS resources for a limited amount of time.
479/n
To begin using Amazon Cognito, you create an identity pool through the Amazon Cognito console. The identity pool is a store of user identity information that is specific to your AWS account.
480/n
By default, Amazon Cognito creates a new role with limited permissions; end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics.
481/n
Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations within the same Availability Zone as part of normal operation of that service and at no additional charge.
482/n
Tags: The maximum number of tags per resource is 50.
483/n
The maximum key length is 127 Unicode characters.
484/n
The maximum value length is 255 Unicode characters.
485/n
The tag keys and values are case sensitive
486/n
Allowed characters are letters, spaces, and numbers representable in UTF-8, plus the following special characters: + - = . _ : /
487/n
Do not use leading or trailing spaces
488/n
Do not use the aws: prefix in your tag names or values because it is reserved for AWS use
489/n
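The tag rules above can be encoded in a small validator. A sketch for a single key/value pair; the 50-tags-per-resource cap would be enforced at the resource level, and the function name is hypothetical:

```python
import re

# Validator sketch for the EC2 tag rules listed above. The character class
# encodes the allowed characters: letters, digits, spaces and + - = . _ : /
ALLOWED = re.compile(r"^[\w .:/=+\-]*$", re.UNICODE)

def validate_tag(key: str, value: str) -> bool:
    if not key or len(key) > 127 or len(value) > 255:
        return False          # key <= 127 chars, value <= 255 chars
    if key != key.strip() or value != value.strip():
        return False          # no leading or trailing spaces
    if key.lower().startswith("aws:") or value.lower().startswith("aws:"):
        return False          # aws: prefix is reserved for AWS use
    return bool(ALLOWED.match(key)) and bool(ALLOWED.match(value))

print(validate_tag("Environment", "prod"))  # True
print(validate_tag("aws:team", "infra"))    # False: reserved prefix
```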
A single Amazon SQS message queue can contain an unlimited number of messages. However, there is a 120,000 limit for the number of inflight messages for a standard queue and 20,000 for a FIFO queue.
490/n
AWS does not allow creating more than 5,000 IAM users per AWS account.
491/n
Identity federation enables users from an existing directory to access resources within your AWS account, making it easier to manage your users by maintaining their identities in a single place.
492/n
If the user wants to temporarily stop access to S3, the best solution is to disable the user's access keys.
493/n
Provisioned IOPS uses optimized EBS volumes and an optimized configuration stack. It provides additional, dedicated capacity for the EBS I/O.
494/n
A user can always create a new EBS volume of a higher size than the original snapshot size. The user cannot create a volume of a lower size.
495/n
When the new volume is created, the size reported inside the instance will still be the original size. The user needs to extend the file system with resize2fs or other OS-specific commands.
496/n
For EBS volumes attached to Amazon EC2 instances running Windows, device names in the xvd[f-z] range are recommended.
497/n
The EBS snapshots are a point in time backup of the EBS volume. It is an incremental snapshot, but is always specific to the region and never specific to a single AZ.
498/n
An EBS volume provides persistent data storage. The user can attach a volume to any instance provided they are both in the same AZ. Even if they are in the same region but in a different AZ, it will not be able to attach the volume to that instance.
499/n
AWS Glacier has four resources. Vaults and archives are core data model concepts. A job is required to initiate the download of an archive. A notification configuration is required to notify the user when an archive is available for download.
500/n
For the paravirtual virtualization type, /dev/sda1 is the reserved root device name for Linux instances; for HVM virtualization, /dev/sda1 or /dev/xvda is reserved for the root device.
501/n
S3 buckets can be in one of the three states: unversioned (the default), versioning-enabled or versioning-suspended. 
502/n
The maximum number of AWS CloudFormation stacks that you can create is 200.
503/n
Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies.
504/n
Access policies such as ACLs and a resource policy can be attached to a bucket. An object can only have an ACL, not an object policy. The user can also attach access policies to the IAM users in the account; these are called user policies.
505/n
The following are valid CLI commands for EC2 instances:
506/n
ec2-accept-vpc-peering-connection, ec2-allocate-address, ec2-assign-private-ip-addresses, ec2-associate-address, ec2-associate-dhcp-options, ec2-associate-route-table, ec2-attach-internet-gateway, ec2-attach-network-interface
507/n
When creating an EBS the user cannot specify the subnet or VPC. However, the user must create the EBS in the same zone as the instance so that it can attach the EBS volume to the running instance. 
508/n
You can use AWS IAM to grant access to Amazon DynamoDB resources and API actions. To do this, you first write an AWS IAM policy, which is a document that explicitly lists the permissions you want to grant. You then attach that policy to an AWS IAM user or role.
509/n
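Such an IAM policy document is plain JSON that explicitly lists the granted actions. A sketch with a placeholder account ID and table name:

```python
import json

# Hypothetical IAM policy of the kind described above: it explicitly lists
# the DynamoDB actions being granted on a single (placeholder) table, and
# would then be attached to an IAM user or role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/ExampleTable",
    }],
}
document = json.dumps(policy, indent=2)
print("dynamodb:Query" in document)  # True
```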
When the user makes any changes to the RDS security group the rule status will be authorizing for some time until the changes are applied to all instances that the group is connected with. Once the changes are propagated the rule status will change to authorized.
510/n
Amazon EC2 supports two types of block devices: instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance) and EBS volumes (remote storage devices).
511/n
The SSD, HDD and Magnetic choices are all options for the type of storage offered via EBS volumes. They are not types of block devices.
512/n
To host a static website, the user needs to configure an Amazon S3 bucket for website hosting and then upload the website contents to the bucket. The user can configure the index and error documents, as well as conditional routing based on the object name.
513/n
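The website configuration is a small document naming the index and error documents plus any routing rules. A sketch shaped like the structure S3 website hosting accepts (bucket contents and prefixes are placeholders):

```python
# Sketch of an S3 static-website configuration: index and error documents,
# plus a routing rule that redirects based on an object-name prefix.
# All names and prefixes here are hypothetical.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
    "RoutingRules": [{
        "Condition": {"KeyPrefixEquals": "docs/"},
        "Redirect": {"ReplaceKeyPrefixWith": "documents/"},
    }],
}
print(website_config["IndexDocument"]["Suffix"])  # index.html
```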
Sticky session: The key to manage the sticky session is determining how long the load balancer should route the user's request to the same application instance.
514/n
If the application has its own session cookie, then the user can set the Elastic Load Balancing to create the session cookie to follow the duration specified by the application's session cookie.
515/n
If the user’s application does not have its own session cookie, then he can set the Elastic Load Balancing to create a session cookie by specifying his own stickiness duration.
516/n
With regard to RDS, the user can manage the configuration of a DB engine by using a DB parameter group. A DB parameter group contains engine configuration values that can be applied to one or more DB instances of the same instance type.
517/n
The X-Forwarded-Port request header helps the user identify the port used by the client while sending a request to ELB.
518/n
For AWS Linux, the ec2-net-utils package can configure additional network interfaces that the user can attach while the instance is running, refreshes secondary IP addresses during DHCP lease renewal, and updates the related routing rules. 
519/n
In DynamoDB, you can increase the throughput you have provisioned for your table using the UpdateTable API or the AWS Management Console.
520/n
If you wish to exceed throughput rates of 10,000 write capacity units or 10,000 read capacity units, you must first contact AWS. The exception to this general rule is the US East (N. Virginia) Region, which allows 40,000 read and write capacity units per table.
521/n
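The per-table caps above can be encoded in a toy check. The function name is hypothetical and the limits are the ones stated in the text:

```python
# Toy check encoding the per-table provisioned-throughput caps described
# above: 10,000 capacity units in most regions, 40,000 in us-east-1,
# beyond which you must contact AWS first.
def needs_limit_increase(region: str, capacity_units: int) -> bool:
    cap = 40_000 if region == "us-east-1" else 10_000
    return capacity_units > cap

print(needs_limit_increase("eu-west-1", 12_000))  # True
print(needs_limit_increase("us-east-1", 12_000))  # False
```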
Simple Workflow service: Before designing a workflow or any activity, you must register at least one domain. Then, when designing an Amazon SWF workflow, you precisely define each of the required activities.
522/n
You then register each activity with Amazon SWF as an activity type. When you register the activity, you provide information such as a name and version, and some timeout values based on how long you expect the activity to take. 
523/n
AWS Elastic Beanstalk is best suited for those groups who want to deploy and manage their applications within minutes in the AWS cloud.
524/n
As a bonus, you don’t even need experience with cloud computing to get started. The current version of AWS Elastic Beanstalk uses the Amazon Linux AMI or the Windows Server 2012 R2 AMI.
525/n
Amazon RDS provides two different methods for backing up and restoring the Amazon DB instances. A brief I/O performance degradation, typically lasting a few seconds, occurs during both automated backups and DB snapshot operations on Single-AZ DB instances.
526/n
The instances that reside in the private subnets of your VPC are not reachable from the Internet, meaning it is not possible to SSH into them. To interact with them, you can use a bastion host located in a public subnet that acts as a proxy for them.
527/n
You can also connect if you have Direct Connect or VPN.
528/n
If an EBS volume stays in the detaching state, the user can force the detachment by clicking Force Detach.
529/n
Forcing the detachment can lead to either data loss or a corrupted file system. The user should use this option only as a last resort to detach a volume from a failed instance or if he is detaching a volume with the intention of deleting it.
530/n
Queue names are limited to 80 characters. Alphanumeric characters plus hyphens (-) and underscores (_) are allowed. Queue names must be unique within an AWS account. After you delete a queue, you can reuse the queue name.
531/n
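The naming rules above translate into a one-line regex. A validator sketch (the function name is hypothetical):

```python
import re

# Validator sketch for the SQS queue-name rules above: 1 to 80 characters,
# alphanumerics plus hyphens and underscores only.
QUEUE_NAME = re.compile(r"^[A-Za-z0-9_-]{1,80}$")

def is_valid_queue_name(name: str) -> bool:
    return bool(QUEUE_NAME.match(name))

print(is_valid_queue_name("orders-dead-letter_2"))  # True
print(is_valid_queue_name("orders queue"))          # False: space not allowed
```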
The Statement element is the main element of an IAM policy and is required. Elements such as Condition, Version, and Id are not required.
532/n
When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you’ll need to upload a source bundle.
533/n
Your source bundle must meet the following requirements:
534/n
Consist of a single .zip file or .war file
535/n
Not exceed 512 MB
536/n
Not include a parent folder or top-level directory (subdirectories are fine)
537/n
Elastic Load Balancing provides Secure Sockets Layer (SSL) negotiation configurations, known as security policies, to negotiate connections between the clients and the load balancer.
538/n
When you use HTTPS/SSL for your front-end connections, you can use either a predefined security policy or a custom security policy.
539/n
Elastic Load Balancing supports both the Internet Protocol version 6 (IPv6) and the Internet Protocol version 4 (IPv4).
540/n
IPv6 support is currently not available for the load balancers in all the regions or in Amazon VPC. Communication between the load balancer and its back-end instances uses only IPv4.
541/n
ELB: Each listener can have one or more rules for routing requests.
542/n
When you create a listener, you must create a default rule for that listener. When creating additional rules for your listener, there can only be one condition and one action.
543/n
When the specified condition is met, the specified action is taken. If no conditions for any of the additional listener rules are met, then the default action for the default rule is carried out.
544/n
Elastic Load Balancing supports the following versions of the SSL protocol: TLS 1.2, TLS 1.1, TLS 1.0, SSL 3.0, and SSL 2.0.
545/n
By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3).
546/n
To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files.
547/n
AWS Config can enforce rules that check the compliance of your resources against specific controls:
548/n
Predefined and custom rules can be configured within AWS Config, allowing you to check resource compliance against these rules.
549/n
A trail that applies to all regions exists in each region and is counted as one trail in each region.  
550/n
aws-cli: aws cloudformation describe-stack-events - Returns all stack-related events for a specified stack.
551/n
You can assign tags only to resources that already exist. You can’t terminate, stop, or delete a resource based solely on its tags; you must specify the resource identifier. #EC2
552/n
AWS Cloudwatch can be accessed from the Amazon CloudWatch Console, CloudWatch API, AWS CLI and AWS SDKs. #Cloudwatch
553/n