AWS

http://jayendrapatil.com/category/admin/

EC2 Introduction :
1. EC2 virtual servers are also known as instances.
2. Instance types : chosen based on the CPU, memory, storage, and networking capacity of the instance.
3. If you want to log in to EC2 instances remotely, you have to use key pairs (AWS stores the public key, and you store the private key in a secure place).
4. Instance Store (temporary) : a temporary data store in EC2 that is deleted when you stop or terminate your instance.
5. Persistent storage : Storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes
6. CloudWatch : To monitor basic statistics for your instances and Amazon EBS volumes, you can use Amazon CloudWatch.
7. CloudTrail : To monitor the calls made to the Amazon EC2 API for your account, including calls made by the AWS Management Console, command line tools, and other services, use AWS CloudTrail.
8. Although you can set up or install your own database (e.g. Oracle) on an EC2 instance, Amazon RDS offers the advantage of handling your database management tasks, such as patching the software, backing up, and storing the backups.
9. On-Demand instances : Pay for the instances that you use by the hour, with no long-term commitments or up-front payments.
10. Reserved Instances : Make a low, one-time, up-front payment for an instance, reserve it for a one- or three-year term, and pay a significantly lower hourly rate for these instances.
11. Spot instances (instance bidding, e.g. a stock bid) : Specify the maximum hourly price that you are willing to pay to run a particular instance type. The Spot price fluctuates based on supply and demand, but you never pay more than the maximum price you specified. If the Spot price moves higher than your maximum price, Amazon EC2 shuts down your Spot instances.
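The three pricing models above can be compared with a quick back-of-the-envelope calculation. The prices below are made-up placeholders, not real AWS rates:

```python
# Rough cost comparison of the three EC2 pricing models.
# All prices here are hypothetical placeholders, NOT real AWS rates.
HOURS_PER_YEAR = 8760

def on_demand_cost(hourly_rate, hours):
    """On-Demand: no commitment, pay only for the hours used."""
    return hourly_rate * hours

def reserved_cost(upfront, discounted_hourly, hours):
    """Reserved: one-time up-front payment plus a lower hourly rate."""
    return upfront + discounted_hourly * hours

def spot_runs(max_price, spot_prices):
    """Spot: instance runs only while the Spot price <= your max price."""
    return [p <= max_price for p in spot_prices]

# An instance running 24x7 for a year:
od = on_demand_cost(0.10, HOURS_PER_YEAR)
ri = reserved_cost(300, 0.04, HOURS_PER_YEAR)
print(round(od, 2), round(ri, 2))   # 876.0 650.4 -> Reserved wins for steady load

# Spot: AWS shuts the instance down whenever the market price exceeds your bid.
print(spot_runs(0.05, [0.03, 0.04, 0.06, 0.02]))   # [True, True, False, True]
```

For a steady 24x7 load the Reserved Instance is cheaper despite the up-front cost; for interruptible batch work, Spot is cheapest but can be stopped at any time.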

AMI : Amazon Machine Image
1. An AMI includes the following:
- A template for the root volume for the instance (for example, an operating system, an application server, and applications)
- Launch permissions that control which AWS accounts can use the AMI to launch instances (an AMI may or may not be usable by everybody). Only accounts that have been granted permission can use your AMI; you can also make an AMI public (usable by anybody).
- A block device mapping that specifies the volumes to attach to the instance when it's launched.
2. AMI Across regions : You can copy an AMI to the same region or to different regions. 
3. When you are finished launching an instance from an AMI, you can deregister the AMI.
4. Custom AMI : You can customize the instance that you launch from a public AMI and then save that configuration as a custom AMI for your own use. 
5. Instance Type and AMI : You can launch different types of instances from a single AMI.
6. You can use sudo to run commands that require root privileges.
7. The root device for your instance contains the image used to boot the instance. 
7A. EC2 instances support two types of block-level storage:
- Elastic Block Store (EBS) : somewhat like network-attached storage.
- Instance store : physically attached to the EC2 instance's host computer.
7B. EC2 instances can be launched using either an Elastic Block Store (EBS) or an instance store volume as the root volume, plus additional volumes.
7C. EC2 instances can be launched by choosing between AMIs backed by Amazon EC2 instance store and AMIs backed by Amazon EBS. However, AWS recommends use of AMIs backed by Amazon EBS, because they launch faster and use persistent storage.
7D. Instance Store : Also known as Ephemeral storage. 
7E. Instance store volumes access storage from disks that are physically attached to the host computer.
7F. When an instance store-backed instance is launched, the image that is used to boot the instance is copied to the root volume (typically sda1).
7G. Instance store provides temporary block-level storage for instances.
8. Your instance may include local storage volumes, known as instance store volumes, which you can configure at launch time with block device mapping.
8A. Key points for Instance store backed Instance
- Boot time is slower than for EBS-backed instances, usually less than 5 min
- Can be selected as the root volume and attached as additional volumes
- Instance store-backed instances can have a maximum root volume size of 10 GiB
- Instance store volumes can be attached as additional volumes only when the instance is being launched, and cannot be attached once the instance is up and running
- Instance store-backed instances cannot be stopped; one of the main reasons is that, when stopped and started, AWS does not guarantee the instance would be launched on the same host
- Data on an instance store volume is LOST in the following scenarios:
-- Failure of an underlying drive
-- Stopping an EBS-backed instance where instance store volumes are additional volumes
-- Termination of the Instance
- Data on Instance store volume is NOT LOST when the instance is rebooted
- Instance store backed Instances cannot be upgraded
- When you launch an Amazon EC2 instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available.
9. An EBS volume behaves like a raw, unformatted, external block device that you can attach to a single instance; it is not physically attached to the instance's host computer (more like network-attached storage).
9A. Key points for EBS backed Instance
- Boot time is very fast, usually less than a minute
- Can be selected as Root Volume and attached as additional volumes
- EBS backed Instances can be of maximum 16TiB volume size depending upon the OS
- EBS volume can be attached as additional volumes when the Instance is launched and even when the Instance is up and running
- Data on an EBS volume is LOST only if it is the root volume of an EBS-backed instance and the Delete On Termination flag is enabled (this is the default behavior for root volumes)
- Data on an EBS volume is NOT LOST in the following scenarios:
-- Reboot of the instance
-- Stopping an EBS-backed instance
-- Termination of the instance, for the additional EBS volumes; additional EBS volumes are detached with their data intact
- When an EBS-backed instance is in a stopped state, various instance- and volume-related tasks can be done; e.g. you can modify the properties of the instance, change the size of your instance or update the kernel it is using, or attach your root volume to a different running instance for debugging or any other purpose
- EBS volumes are tied to the single AZ in which they are created.
- EBS volumes are automatically replicated within that zone to prevent data loss due to failure of any single hardware component
- EBS backed Instances can be upgraded for instance type, Kernel, RAM disk and user data
- With an Amazon EBS-backed AMI, parts are lazily loaded and only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available.
- However, the performance of an instance that uses an Amazon EBS volume for its root device is slower for a short time while the remaining parts are retrieved from the snapshot and loaded into the volume.
10. Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege: only open up permissions that you require.
11. Consider creating a bastion security group that allows external logins, and keep the remainder of your instances in a group that does not allow external logins.
12. Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.
13. When an instance is in a stopped state, you can attach or detach Amazon EBS volumes. You can also create an AMI from the instance, and you can change the kernel, RAM disk, and instance type.
14. Instances with Amazon EBS volumes for the root device default to stop, and instances with instance store root devices are always terminated as the result of an instance shutdown.
15. All AMIs are categorized as either backed by Amazon EBS, which means that the root device for an instance launched from the AMI is an Amazon EBS volume, or backed by instance store, which means that the root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.
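The data-loss rules in points 8A and 9A above can be condensed into a small decision function. This is only a summary of the notes, not an AWS API:

```python
# Does data survive a given lifecycle event? Summarizes the notes above.
# volume: "instance-store" or "ebs"; event: "reboot", "stop", "terminate".
def data_survives(volume, event, delete_on_termination=True):
    if volume == "instance-store":
        # Instance store data survives only a reboot.
        return event == "reboot"
    if volume == "ebs":
        if event == "terminate":
            # The root EBS volume is deleted by default on termination;
            # volumes with DeleteOnTermination=false are detached intact.
            return not delete_on_termination
        return True  # reboot and stop preserve EBS data
    raise ValueError("unknown volume type")

print(data_survives("instance-store", "stop"))        # False
print(data_survives("ebs", "stop"))                   # True
print(data_survives("ebs", "terminate"))              # False (default flag)
print(data_survives("ebs", "terminate", False))       # True
```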

Regions and AZs : 
1. Resources aren't replicated across regions unless you do so specifically.
2. Each region is completely independent. 
3. Each Availability Zone is isolated, but the Availability Zones in a region are connected through low-latency links
4. Amazon EC2 resources are either global, tied to a region, or tied to an Availability Zone.
5. When you view your resources, you'll only see the resources tied to the region you've specified. This is because regions are isolated from each other, and we don't replicate resources across regions automatically.
6. When you launch an instance, you must select an AMI that's in the same region. If the AMI is in another region, you can copy the AMI to the region you're using.
7. All communication between regions is across the public Internet. Therefore, you should use the appropriate encryption methods to protect your data. Data transfer between regions is charged at the Internet data transfer rate for both the sending and the receiving instance. 
8. You can also use Elastic IP addresses to mask the failure of an instance in one Availability Zone by rapidly remapping the address to an instance in another Availability Zone.
9. An Availability Zone is represented by a region code followed by a letter identifier; for example, us-east-1a. 
10. To ensure that resources are distributed across the Availability Zones for a region, AWS independently maps Availability Zones to identifiers for each account. For example, your Availability Zone us-east-1a might not be the same location as us-east-1a for another account. There's no way for you to coordinate Availability Zones between accounts.
11. As Availability Zones grow over time, our ability to expand them can become constrained. If this happens, we might restrict you from launching an instance in a constrained Availability Zone unless you already have an instance in that Availability Zone. Eventually, we might also remove the constrained Availability Zone from the list of Availability Zones for new customers. Therefore, your account might have a different number of available Availability Zones in a region than another account.
12. An AWS account provides multiple regions so that you can launch Amazon EC2 instances in locations that meet your requirements. For example, you might want to launch instances in Europe to be closer to your European customers or to meet legal requirements.
13. When you launch an instance, you can optionally specify an Availability Zone in the region that you are using.
14. If you need to, you can migrate an instance from one Availability Zone to another. For example, if you are trying to modify the instance type of your instance and an instance of the new type can't be launched in the current Availability Zone, you could migrate the instance to an Availability Zone where it can be. The migration process involves creating an AMI from the original instance, launching an instance in the new Availability Zone, and updating the configuration of the new instance.
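Point 9 above describes the AZ naming scheme (a region code followed by a letter suffix); a tiny parser illustrates it. The function name is my own, not an AWS API:

```python
# Split an Availability Zone name like "us-east-1a" into (region, az_letter).
# An AZ name is its region code followed by a single letter identifier.
def split_az(az_name):
    region, letter = az_name[:-1], az_name[-1]
    if not letter.isalpha():
        raise ValueError(f"{az_name!r} does not end in an AZ letter")
    return region, letter

print(split_az("us-east-1a"))   # ('us-east-1', 'a')
print(split_az("eu-west-2c"))   # ('eu-west-2', 'c')
```

Remember that the letter mapping is per-account: your "us-east-1a" is not necessarily another account's "us-east-1a".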

Auto Scaling and ELB: 
1. When you have Auto Scaling enabled, it will automatically increase the number of EC2 instances as the number of requests to service increases, and decrease the number of EC2 instances when the number of requests reduces.
2. If you have a higher load on your servers, use the ELB service, which will automatically distribute the incoming web traffic (also known as load) among all the running EC2 instances in the Auto Scaling group.
3. Load balancers also help us monitor the incoming traffic. All traffic comes via the ELB only (a single point of contact) for all incoming traffic to the instances in an Auto Scaling group.
5. You can attach more than one ELB to your Auto Scaling group. Once you attach an ELB to an Auto Scaling group, it automatically registers the instances from the group and balances the incoming traffic across the instances.
7. ELB uses the IP address of the instances to register them and routes requests to the primary IP address of the primary interface (eth0) of the instance.
8. You can also detach the ELB from the Auto Scaling group; once detached, it will be in the Removing state until the instances in the group are deregistered.
9. If connection draining is enabled, ELB waits for in-flight requests to complete before deregistering the instances.
10. Impact of deregistering an ELB on instances: they remain in their current state; e.g. a running instance will remain in the running state.
11. Suppose you have suspended the load balancer and a few more instances are added to the Auto Scaling group; instances launched during the suspension period are not added to the load balancer, and after resumption you have to register them manually.
12. Once you create an Auto Scaling group, you can attach an ELB to it, and all the instances running in the group will be automatically attached to the ELB; you can attach more than one ELB to a group.
13. Auto Scaling can span AZs within a single region. Similarly, if you have attached an ELB to that Auto Scaling group, it will also distribute the traffic across those AZs.
14. Both the ELB and the Auto Scaling group check the health status of each instance separately. The Auto Scaling group checks instance health using the EC2 instance status checks; if an instance is in a failed state, it will be marked unhealthy. The ELB also performs its own health checks on the instances (e.g. using ping). However, Auto Scaling does not depend on the status checks done by the ELB by default.
15. ELB does its own health check to make sure that traffic is routed to healthy instances.
16. Once an ELB is registered with a Scaling group, it can be configured to use the results of the ELB health check in addition to the EC2 instance status checks to determine the health of the EC2 instances in your Auto Scaling group.
17. You can also use CloudWatch monitoring metrics to monitor the EC2 instances and the Auto Scaling group.
18. Once you have registered the ELB with the Auto Scaling group, the group can be configured to use ELB metrics, such as request latency or request count, to scale the application automatically.
19. Using Auto Scaling, we can make sure that the minimum number of required instances is always available. We can also attach policies to the Auto Scaling group, which help launch and terminate EC2 instances to handle any increase or decrease in demand on your application.
20. Auto Scaling attempts to distribute instances evenly between the Availability Zones that are enabled for your Auto Scaling group. Auto Scaling does this by attempting to launch new instances in the Availability Zone with the fewest instances. If the attempt fails, however, Auto Scaling attempts to launch the instances in another Availability Zone until it succeeds
21. Launch configuration : a template that contains information such as the AMI, instance type, key pair, security groups, and block device mapping.
22. Once you have created a launch configuration, you cannot change it; if you want changes, create a new one. The same launch configuration can be attached to multiple Auto Scaling groups.
23. An Auto Scaling group requires the following:
- Launch configuration, desired capacity, Availability Zones or subnets, metrics & health checks
24. Auto Scaling groups cannot span multiple regions
25. After changing the launch configuration of an Auto Scaling group, any new instances are launched with the new configuration parameters, but existing instances are not affected.
26. During the status checks of an EC2 instance, if the Auto Scaling group finds any instance state other than running, or the system status is impaired, the instance is considered unhealthy and a new instance is launched as a replacement.
27. If you have configured the Auto Scaling group to use both the instance status and the status reported by the load balancer, it will mark an instance as unhealthy if the instance state is other than running, the system status is impaired, or the ELB reports it as OutOfService. Once marked unhealthy, the instance is scheduled for replacement and never automatically recovers its health.
28. When your instance is terminated, any associated Elastic IP addresses are disassociated and are not automatically associated with the new instance. You must associate these Elastic IP addresses with the new instance manually. Similarly, when your instance is terminated, its attached EBS volumes are detached. You must attach these EBS volumes to the new instance manually.
29. Attaching/Detaching of an EC2 instance can be done only if
- Instance is in the running state.
- AMI used to launch the instance must still exist.
- Instance is not a member of another Auto Scaling group.
- Instance is in the same Availability Zone as the Auto Scaling group.
- If the Auto Scaling group is associated with a load balancer, the instance and the load balancer must both be in the same VPC.
30. Scaling based on a schedule allows you to scale your application in response to predictable load changes, e.g. the last day of the month or the last day of a financial year.
31. Auto Scaling guarantees the order of execution for scheduled actions within the same group, but not for scheduled actions across groups
32. Multiple scheduled actions can be specified, but each must have a unique time value; scheduled actions with overlapping times are rejected.
33. Dynamic scaling : allows you to scale automatically in response to changing demand, e.g. scale out when CPU utilization of the instances goes above 70% and scale in when CPU utilization goes below 30%.
34. Auto Scaling group uses a combination of alarms and policies to determine when the conditions for scaling are met.
35. An Auto Scaling group can have more than one scaling policy attached to it at any given time.
36. Each Auto Scaling group would have at least two policies: one to scale your architecture out and another to scale your architecture in.
37. If an Auto Scaling group has multiple policies, there is always a chance that two policies will instruct Auto Scaling to scale out or scale in at the same time. When this occurs, Auto Scaling chooses the policy that has the greatest impact on the Auto Scaling group. E.g. if two policies are triggered at the same time and Policy 1 instructs scaling out by 1 instance while Policy 2 instructs scaling out by 2 instances, Auto Scaling will use Policy 2 and scale out by 2 instances, as it has the greater impact.
38. The termination policy helps Auto Scaling decide which instances it should terminate first when it automatically scales in. Auto Scaling specifies a default termination policy and also allows you to create a customized one.
39. Instance protection controls whether Auto Scaling can terminate a particular instance or not. Instance protection can be enabled on an Auto Scaling group or on an individual instance, at any time.
40. Instance protection does not protect in the cases below:
- Manual termination through the Amazon EC2 console, the terminate-instances command, or the TerminateInstances API.
- Termination if the instance fails health checks and must be replaced.
- Spot instances in an Auto Scaling group from interruption.
41. Auto Scaling allows you to put an InService instance into the Standby state, during which the instance is still part of the Auto Scaling group but does not serve any requests. This can be used either to troubleshoot an instance or to update an instance and then return it to service.
42. If a load balancer is associated with Auto Scaling, the instance is automatically deregistered when the instance is in Standby state and registered again when the instance exits the Standby state.
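Points 33 and 37 above (threshold-based dynamic scaling and the greatest-impact tie-break) can be sketched as plain logic. The thresholds and policy shapes here are illustrative, not an AWS API:

```python
# Dynamic scaling decision: scale out above 70% CPU, scale in below 30%.
def desired_action(cpu_utilization):
    if cpu_utilization > 70:
        return "scale-out"
    if cpu_utilization < 30:
        return "scale-in"
    return "no-op"

# Tie-break between simultaneously triggered policies: Auto Scaling
# applies the one with the greatest impact (largest capacity change).
def winning_policy(policies):
    return max(policies, key=lambda p: abs(p["adjustment"]))

print(desired_action(85))   # scale-out
print(desired_action(20))   # scale-in
p1 = {"name": "Policy 1", "adjustment": 1}
p2 = {"name": "Policy 2", "adjustment": 2}
print(winning_policy([p1, p2])["name"])   # Policy 2
```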

EC2 Root Device Volume : 
1. /dev/sda is almost always your internal drive; an external drive may be sdb, sdc, or another name.
1A. The disk names in Linux are alphabetical. /dev/sda is the first hard drive (the primary master).
1B. The root device volume contains the image used to boot the instance.
2. Backed by Amazon EC2 instance store : the root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.
3. Backed by Amazon EBS : This means that the root device for an instance launched from the AMI is an Amazon EBS volume created from an Amazon EBS snapshot.
4. We recommend that you use AMIs backed by Amazon EBS, because they launch faster and use persistent storage.
5. The description of an AMI includes which type of AMI it is : the root device is referred to in some places as either ebs or instance store.
6. Instances that use instance stores for the root device automatically have one or more instance store volumes available, with one volume serving as the root device volume.
7. When an instance is launched, the image that is used to boot the instance is copied to the root volume. Note that you can optionally use additional instance store volumes, depending on the instance type.
8. Any data on the instance store volumes persists as long as the instance is running, but this data is deleted when the instance is terminated
9. instance store-backed instances do not support the Stop action
10. If you plan to use Amazon EC2 instance store-backed instances, we highly recommend that you distribute the data on your instance stores across multiple Availability Zones.
11. You should also back up critical data on your instance store volumes to persistent storage on a regular basis.
12. Instances that use Amazon EBS for the root device automatically have an Amazon EBS volume attached.
13. When you launch an Amazon EBS-backed instance, we create an Amazon EBS volume for each Amazon EBS snapshot referenced by the AMI you use.
14. An Amazon EBS-backed instance can be stopped and later restarted without affecting data stored in the attached volumes.
15. There are various instance- and volume-related tasks you can do when an Amazon EBS-backed instance is in a stopped state. For example, you can modify the properties of the instance, you can change the size of your instance or update the kernel it is using, or you can attach your root volume to a different running instance for debugging or any other purpose.
16. If an Amazon EBS-backed instance fails, you can restore your session (for example, by stopping and starting the instance again, or by snapshotting the relevant volumes, creating a new AMI, and launching a replacement instance).
17. Root Device Type : EBS will help you find all the AMIs that are backed by EBS.
18. Root Device Type : Instance store will help you find all the AMIs that are backed by instance store.
19. By default, the root device volume for an AMI backed by Amazon EBS is deleted when the instance terminates.
20. Use the modify-instance-attribute command to preserve the root volume by including a block device mapping that sets its DeleteOnTermination attribute to false. (Hence, you can change this parameter even while the instance is running.)
21. Amazon EC2 instances can be run in a replication configuration utilizing two different Availability Zones.
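The modify-instance-attribute note above passes its block device mapping as JSON; the snippet below just builds that JSON locally. The device name and instance id are placeholders for your own values:

```python
import json

# Build the --block-device-mappings argument for
# `aws ec2 modify-instance-attribute`, setting DeleteOnTermination=false
# so the root EBS volume survives instance termination.
def preserve_root_mapping(device_name="/dev/sda1"):
    return [{"DeviceName": device_name,
             "Ebs": {"DeleteOnTermination": False}}]

mapping_json = json.dumps(preserve_root_mapping())
print(mapping_json)
# The full CLI call would then look like (placeholder instance id):
# aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
#     --block-device-mappings '<mapping_json>'
```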

Setup : 
1. T2 instances (e.g. t2.micro) must be launched into a VPC.
2. VPCs are specific to a region, so you should select the same region in which you created your key pair.
3. Security groups act as a firewall for associated instances, controlling both inbound and outbound traffic at the instance level. 
4. Note that if you plan to launch instances in multiple regions, you'll need to create a security group in each region.
5. Security Group : Source set to Anywhere uses IP 0.0.0.0/0.
6. Security Group : To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24.
7. Don't select the Proceed without a key pair option. If you launch your instance without a key pair, then you can't connect to it.
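The CIDR rules in point 6 above can be checked with Python's standard ipaddress module:

```python
import ipaddress

# A /32 network is exactly one address; a /24 covers 256 addresses.
single = ipaddress.ip_network("203.0.113.25/32")
office = ipaddress.ip_network("203.0.113.0/24")

print(single.num_addresses)   # 1
print(office.num_addresses)   # 256
print(ipaddress.ip_address("203.0.113.25") in office)   # True

# 0.0.0.0/0 ("Anywhere") matches every IPv4 address:
anywhere = ipaddress.ip_network("0.0.0.0/0")
print(ipaddress.ip_address("8.8.8.8") in anywhere)   # True
```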

AWS Admin Permissions
1. AWS provides root or system privileges only for a limited set of services, which includes Elastic Compute Cloud (EC2), Elastic MapReduce (EMR), Elastic Beanstalk, and OpsWorks.
2. AWS does not provide root privileges for managed services like RDS, DynamoDB, S3, Glacier, etc.

AWS Config:
1. AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance
2. In cases where several configuration changes are made to a resource in quick succession (i.e., within a span of a few minutes), AWS Config will only record the latest configuration of that resource; this represents the cumulative impact of the entire set of changes.
3. Use AWS Config for supported services and use an automated process via APIs for unsupported services

Consolidated billing
1. Allows receiving a combined view of charges incurred by all the associated accounts, as well as by each individual account.
2. This account is strictly an accounting and billing feature.
3. This is not a method for controlling accounts, or provisioning resources for accounts.
4. Payer account cannot access data belonging to the linked account owners
5. However, access to the Payer account users can be granted through Cross Account Access roles
6. Volume Pricing Discounts
- For billing purposes, AWS treats all the accounts on the consolidated bill as if they were one account.
- AWS combines the usage from all accounts to determine which volume pricing tiers to apply, giving you a lower overall price whenever possible.
7. Linked accounts receive the cost benefit from others' Reserved Instances only if instances are launched in the same Availability Zone where the Reserved Instances were purchased.
8. Capacity reservation only applies to the product platform, instance type, and Availability Zone specified in the purchase
For e.g., Amit and Rakesh each have an account on Amit’s consolidated bill. Rakesh has 5 Reserved Instances of the same type, and Amit has none. During one particular hour, Rakesh uses 3 instances and Amit uses 6, for a total of 9 instances used on Amit’s consolidated bill. AWS will bill 5 as Reserved Instances, and the remaining 4 as normal instances.
9. Paying account should be used solely for billing purposes
10. The master (payer) account can view only the AWS billing details of the linked accounts.
11. Any single instance from any of the accounts can get the benefit of AWS RI pricing if it is running in the same zone and is of the same size.
12. The payer account sends a request to the linked account to be a part of consolidated billing.
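The Reserved Instance example above (Amit and Rakesh) can be verified with simple arithmetic. Consolidated billing treats all accounts as one, so RIs purchased by any linked account offset usage across the whole bill (same AZ, platform, and instance type):

```python
# Split one hour's consolidated usage into RI-rate vs on-demand billing.
def split_billing(reserved, used_by_account):
    total_used = sum(used_by_account)
    billed_as_reserved = min(reserved, total_used)
    billed_on_demand = total_used - billed_as_reserved
    return billed_as_reserved, billed_on_demand

# Rakesh owns 5 RIs; during one hour Rakesh runs 3 instances and Amit 6.
print(split_billing(5, [3, 6]))   # (5, 4): 5 at RI rates, 4 on-demand
```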

Best Practices for Amazon EC2 :
1. Regularly patch, update, and secure the operating system and applications on your instance. 
2. Implement the least permissive rules for your security group.
3. Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. 
4. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance.
5. Design your applications to handle dynamic IP addressing when your instance restarts. 
6. The best practice for securing your web server is to install support for HTTPS (HTTP Secure), which protects your data with SSL/TLS encryption.

Tutorial : 
1. We strongly recommend that you associate an Elastic IP address (EIP) to the instance you are using to host a WordPress blog. This prevents the public DNS address for your instance from changing and breaking your installation. If you own a domain name and you want to use it for your blog, you can update the DNS record for the domain name to point to your EIP address (for help with this, contact your domain name registrar). You can have one EIP address associated with a running instance at no charge. 
2. If you don't already have a domain name for your blog, you can register a domain name with Amazon Route 53 and associate your instance's EIP address with your domain name.
3. Move your MySQL database to Amazon RDS to take advantage of the service's ability to scale automatically.
4. Your WordPress installation is automatically configured using the public DNS address for your EC2 instance. If you stop and restart the instance, the public DNS address changes (unless it is associated with an Elastic IP address) and your blog will not work anymore because it references resources at an address that no longer exists (or is assigned to another EC2 instance). 
5.  The POODLE attack, discovered in 2014, exploits a weakness in SSL version 3 that allows an attacker to impersonate a web site. The fix is straightforward: Disable SSL version 3 support on the server. In the configuration file /etc/httpd/conf.d/ssl.conf, comment out the following by typing "#" at the beginning of the line:
SSLProtocol all -SSLv2
Then, add the following directive:
SSLProtocol -SSLv2 -SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2
6. My Apache webserver won't start unless I supply a password.
This is expected behavior if you installed an encrypted, password-protected, private server key.
7. You can launch multiple EC2 instances from your AMI and then use Elastic Load Balancing to distribute incoming traffic for your application across these EC2 instances. 
8. You can use Auto Scaling to maintain a minimum number of running instances for your application at all times. Auto Scaling can detect when your instance or application is unhealthy and replace it automatically to maintain the availability of your application. You can also use Auto Scaling to scale your Amazon EC2 capacity up or down automatically based on demand, using criteria that you specify.
9. Auto Scaling with Elastic Load Balancing to ensure that you maintain a specified number of healthy EC2 instances behind your load balancer. Note that these instances do not need public IP addresses, because traffic goes to the load balancer and is then routed to the instances.
10. A subnet resides entirely within one Availability Zone and cannot span zones.
11.  Increase the Availability of Your Application on Amazon EC2 , create a VPC with one public subnet in two or more Availability Zones. 
12. When you use ELB and Auto Scaling, a prerequisite is an AMI that Auto Scaling will use to launch new instances.
13. If you have scripts that need to be executed as soon as your instance starts, add the script to User data while configuring Auto Scaling.
14. When you are using a load balancer in front of your instances, your security group must allow HTTP traffic and health checks from the load balancer.
15. You must assign the IAM role when you create the new instance. You can't assign a role to an instance that is already running. For existing instances, you must create an image of the instance, launch an instance from that image, and assign the IAM role as you launch the instance. 
16. Instances require an AWS Identity and Access Management (IAM) role that enables the instance to communicate with Amazon EC2 Simple Systems Manager (SSM).
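The User data note above refers to a startup script that cloud-init runs at first boot; the EC2 API expects it base64-encoded. A minimal sketch (the script contents are illustrative):

```python
import base64

# A startup script to run at first boot, passed via EC2 User data.
# The instance's cloud-init executes it as root on launch.
user_data_script = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
"""

# The EC2 API expects User data base64-encoded.
encoded = base64.b64encode(user_data_script.encode()).decode()
decoded = base64.b64decode(encoded).decode()
print(decoded == user_data_script)   # True: round-trips cleanly
```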

AMI Types : 
1. Boot time -> Amazon EBS-Backed: Usually less than 1 minute , Amazon Instance Store-Backed : Usually less than 5 minutes
2. Upgrading -> Amazon EBS-Backed: The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped. Amazon Instance Store-Backed : Instance attributes are fixed for the life of an instance.
3. Amazon EBS-backed AMIs launch faster than Amazon EC2 instance store-backed AMIs. 
4. When you launch an Amazon EC2 instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available. With an Amazon EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available.
5. For the best performance, we recommend that you use current generation instance types and HVM AMIs when you launch your instances. 
6. Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main difference between PV and HVM AMIs is the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.
7. Unlike PV guests, HVM guests can take advantage of hardware extensions that provide fast access to the underlying hardware on the host system. 
8. Amazon can't vouch for the integrity or security of AMIs shared by other Amazon EC2 users. Therefore, you should treat shared AMIs as you would any foreign code that you might consider deploying in your own data center and perform the appropriate due diligence.
9. Amazon's public images have an aliased owner, which appears as amazon in the account field. This enables you to find AMIs from Amazon easily. Other users can't alias their AMIs.
10. Before you use a shared AMI, take the following steps to confirm that there are no pre-installed credentials that would allow unwanted access to your instance by a third party and no pre-configured remote logging that could transmit sensitive data to a third party.
11. To ensure that you don't accidentally lose access to your instance, we recommend that you initiate two SSH sessions and keep the second session open until you've removed credentials that you don't recognize and confirmed that you can still log into your instance using SSH.
12. Disable password-based authentication for the root user.
13. To prevent preconfigured remote logging, you should delete the existing configuration file and restart the rsyslog service.
14. AMIs are a regional resource. Therefore, sharing an AMI makes it available in that region. To make an AMI available in a different region, copy the AMI to the region and then share it. 
15. If an AMI has a product code, you can't make it public. You must share the AMI with only specific AWS accounts.
16. You do not need to share the Amazon EBS snapshots that an AMI references in order to share the AMI. Only the AMI itself needs to be shared; the system automatically provides the instance access to the referenced Amazon EBS snapshots for the launch.
17. If you have created a public AMI, or shared an AMI with another AWS user, you can create a bookmark that allows a user to access your AMI and launch an instance in their own account immediately. This is an easy way to share AMI references, so users don't have to spend time finding your AMI in order to use it.
18. For AMIs backed by instance store, we recommend that your AMIs download and upgrade the Amazon EC2 AMI creation tools during startup. This ensures that new AMIs based on your shared AMIs have the latest AMI tools.
19. Using a fixed root password for a public AMI is a security risk that can quickly become known. Even relying on users to change the password after the first login opens a small window of opportunity for potential abuse. To solve this problem, disable password-based remote logins for the root user.
20. When you work with shared AMIs, a best practice is to disable direct root logins. 
21. If you plan to share an AMI derived from a public AMI, remove the existing SSH host key pairs located in /etc/ssh. This forces SSH to generate new unique SSH key pairs when someone launches an instance using your AMI, improving security and reducing the likelihood of "man-in-the-middle" attacks.
22. If you forget to remove the existing SSH host key pairs from your public AMI, our routine auditing process notifies you and all customers running instances of your AMI of the potential security risk. After a short grace period, we mark the AMI private.
23. Currently, there is no easy way to know who provided a shared AMI, because each AMI is represented by an account ID. We recommend that you post a description of your AMI, and the AMI ID, in the Amazon EC2 forum. This provides a convenient central location for users who are interested in trying new shared AMIs. You can also post the AMI to the Amazon Machine Images (AMIs) page.
24. We recommend using the --exclude directory option on ec2-bundle-vol to skip any directories and subdirectories that contain secret information that you would not like to include in your bundle. In particular, exclude all user-owned SSH public/private key pairs and SSH authorized_keys files when bundling the image. The Amazon public AMIs store these in /root/.ssh for the root account, and /home/user_name/.ssh/ for regular user accounts.
25. Always delete the shell history before bundling. If you attempt more than one bundle upload in the same AMI, the shell history contains your secret access key.
26. Bundling a running instance requires your private key and X.509 certificate. Put these and other credentials in a location that is not bundled (such as the instance store).
27. The developer of a paid AMI can enable you to purchase a paid AMI that isn't listed in AWS Marketplace. The developer provides you with a link that enables you to purchase the product through Amazon.
28. You can retrieve the AWS Marketplace product code for your instance using its instance metadata.
29. During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance. If any volumes attached to the instance are encrypted, the new AMI only launches successfully on instances that support Amazon EBS encryption. 
30. In the navigation pane, choose Instances and select your instance. Choose Actions, Image, and Create Image. If this option is disabled, your instance isn't an Amazon EBS-backed instance.
31. You can convert an instance store-backed Linux AMI that you own to an Amazon EBS-backed Linux AMI.
32. You can't convert an instance store-backed Windows AMI to an Amazon EBS-backed Windows AMI and you cannot convert an AMI that you do not own.
33. AMIs that are backed by Amazon EBS snapshots can take advantage of Amazon EBS encryption. Snapshots of both data and root volumes can be encrypted and attached to an AMI.
34. The CopyImage action can be used to create an AMI with encrypted snapshots from an AMI with unencrypted snapshots.
35. You can create an AMI from a running Amazon EC2 instance (with or without encrypted volumes) using either the Amazon EC2 console or the command line
36. This scenario starts with an AMI backed by a root-volume snapshot (encrypted to key #1), and finishes with an AMI that has two additional data-volume snapshots attached (encrypted to key #2 and key #3). The CopyImage action cannot apply more than one encryption key in a single operation. However, you can create an AMI from an instance that has multiple attached volumes encrypted to different keys. The resulting AMI has snapshots encrypted to those keys and any instance launched from this new AMI also has volumes encrypted to those keys.
37. You can copy an Amazon Machine Image (AMI) within or across an AWS region using the AWS Management Console, the command line, or the Amazon EC2 API, all of which support the CopyImage action. Both Amazon EBS-backed AMIs and instance store-backed AMIs can be copied.
38. The source AMI can be changed or deregistered with no effect on the target AMI. The reverse is also true.
39. AWS does not copy launch permissions, user-defined tags, or Amazon S3 bucket permissions from the source AMI to the new AMI.
40. You can copy an AMI across AWS accounts. This includes AMIs with encrypted snapshots, but does not include encrypted AMIs.
41. You can't copy an encrypted AMI between accounts. Instead, if the underlying snapshot and encryption key have been shared with you, you can copy the snapshot to another account while re-encrypting it with a key of your own, and then register this privately owned snapshot as a new AMI.
42. Consistent global deployment: Copying an AMI from one region to another enables you to launch consistent instances based from the same AMI into different regions.
43.  When you launch an instance from an AMI, it resides in the same region where the AMI resides. If you make changes to the source AMI and want those changes to be reflected in the AMIs in the target regions, you must recopy the source AMI to the target regions.
44. When you first copy an instance store-backed AMI to a region, we create an Amazon S3 bucket for the AMIs copied to that region. All instance store-backed AMIs that you copy to that region are stored in this bucket. 
45. Prior to copying an AMI, you must ensure that the contents of the source AMI are updated to support running in a different region. For example, you should update any database connection strings or similar application configuration data to point to the appropriate resources. Otherwise, instances launched from the new AMI in the destination region may still use the resources from the source region, which can impact performance and cost.
46. Encrypting during copying applies only to Amazon EBS-backed AMIs. Because an instance store-backed AMI does not rely on snapshots, the CopyImage action cannot be used to change its encryption status.
47. The following table shows encryption support for various scenarios. Note that while it is possible to copy an unencrypted snapshot to yield an encrypted snapshot, you cannot copy an encrypted snapshot to yield an unencrypted one.

Scenario Description Supported
1 Unencrypted-to-unencrypted Yes
2 Encrypted-to-encrypted Yes
3 Unencrypted-to-encrypted Yes
4 Encrypted-to-unencrypted No 
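The table's rule reduces to a single condition, captured in this small helper (a sketch; the function name is ours):

```python
def copy_supported(source_encrypted: bool, target_encrypted: bool) -> bool:
    """Return True if CopyImage supports this encryption transition.

    The only unsupported case is copying an encrypted snapshot to
    produce an unencrypted one.
    """
    return not (source_encrypted and not target_encrypted)

# Mirrors the four scenarios in the table above.
assert copy_supported(False, False)      # 1: unencrypted-to-unencrypted
assert copy_supported(True, True)        # 2: encrypted-to-encrypted
assert copy_supported(False, True)       # 3: unencrypted-to-encrypted
assert not copy_supported(True, False)   # 4: encrypted-to-unencrypted
```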

48. You can deregister an AMI when you have finished using it. After you deregister an AMI, you can't use it to launch new instances.
49. When you deregister an AMI, it doesn't affect any instances that you've already launched from the AMI. 
50. When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot that was created for the root volume of the instance during the AMI creation process. You'll continue to incur storage costs for this snapshot. Therefore, if you are finished with the snapshot, you should delete it.
51. When you deregister an instance store-backed AMI, it doesn't affect the files that you uploaded to Amazon S3 when you created the AMI. You'll continue to incur usage costs for these files in Amazon S3. Therefore, if you are finished with these files, you should delete them.

Instance Types
1. Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the network and the disk subsystem, among instances. 
2. Each instance type supports one or both of the following types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The virtualization type of your instance is determined by the AMI that you use to launch it.
3. To obtain additional, dedicated capacity for Amazon EBS I/O, you can launch some instance types as EBS–optimized instances. Some instance types are EBS–optimized by default. 
4. To maximize the networking and bandwidth performance of your instance type, you can do the following:
- Launch supported instance types into a placement group to optimize your instances for high performance computing (HPC) applications. Instances in a common placement group can benefit from high-bandwidth (10 Gbps), low-latency networking.  Instance types that support 10 Gbps network speeds can only take advantage of those network speeds when launched in a placement group.
- Enable enhanced networking for supported current generation instance types to get significantly higher packet per second (PPS) performance, lower network jitter, and lower latencies.
5. T2 instances are designed to provide moderate baseline performance and the capability to burst to significantly higher performance as required by your workload. They are intended for workloads that don't use the full CPU often or consistently, but occasionally need to burst. T2 instances are well suited for general purpose workloads, such as web servers, developer environments, and small databases. 
6. You must launch your T2 instances using an EBS volume as the root device.
7. T2 instances are available as On-Demand instances and Reserved Instances, but they are not available as Spot instances, Scheduled Instances, or Dedicated instances. They are also not supported on a Dedicated Host.
8. There is a limit on the total number of instances that you can launch in a region, and there are additional limits on some instance types. 
9. C4 instances are ideal for compute-bound applications that benefit from high performance processors. C4 instances are well suited for the following applications:
- Batch processing workloads
- Media transcoding
- High-traffic web servers, massively multiplayer online (MMO) gaming servers, and ad serving engines
- High performance computing (HPC) and other compute-intensive applications
10. C4 instances are EBS-optimized by default, and deliver dedicated block storage throughput to Amazon EBS ranging from 500 Mbps to 4,000 Mbps at no additional cost. EBS-optimized instances enable you to get consistently high performance for your EBS volumes by eliminating contention between Amazon EBS I/O and other network traffic from your C4 instance. 
11. You can cluster C4 instances in a placement group. Placement groups provide low latency and high-bandwidth connectivity between the instances within a single Availability Zone. 
12. If you require high parallel processing capability, you'll benefit from using accelerated computing instances, which provide access to NVIDIA GPUs. You can use accelerated computing instances to accelerate many scientific, engineering, and rendering applications by leveraging the CUDA or Open Computing Language (OpenCL) parallel computing frameworks. You can also use them for graphics applications, including game streaming, 3-D application streaming, and other graphics workloads.
13. Accelerated computing instance families use hardware accelerators, or co-processors, to perform some functions, such as floating point number calculation and graphics processing, more efficiently than is possible in software running on CPUs.
14. I2 instances are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. They are well suited for the following scenarios:
- NoSQL databases (for example, Cassandra and MongoDB)
- Clustered databases
- Online transaction processing (OLTP) systems
15. I2 Instance Features
- The primary data storage is SSD-based instance storage. Like all instance storage, these volumes persist only for the life of the instance. 
16. D2 instances are designed for workloads that require high sequential read and write access to very large data sets on local storage. D2 instances are well suited for the following applications:
- Massive parallel processing (MPP) data warehouse
- MapReduce and Hadoop distributed computing
- Log or data processing applications
17. The primary data storage for D2 instances is HDD-based instance storage. 
18. HI1 instances (hi1.4xlarge) can deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications. They are well suited for the following scenarios:
- NoSQL databases (for example, Cassandra and MongoDB)
- Clustered databases
- Online transaction processing (OLTP) systems
19. HS1 instances (hs1.8xlarge) provide very high storage density and high sequential read and write performance per instance. They are well suited for the following scenarios:
- Data warehousing
- Hadoop/MapReduce
- Parallel file systems
20. X1 instances are designed to deliver fast performance for workloads that process large data sets in memory. X1 instances are well suited for the following applications:
- In-memory databases such as SAP HANA, including SAP-certified support for Business Suite S/4HANA, Business Suite on HANA (SoH), Business Warehouse on HANA (BW), and Data Mart Solutions on HANA. For more information, see SAP HANA on the AWS Cloud.
- Big-data processing engines such as Apache Spark or Presto.
- High-performance computing (HPC) applications.
21. X1 instances include Intel Scalable Memory Buffers, providing 300 GiB/s of sustainable memory-read bandwidth and 140 GiB/s of sustainable memory-write bandwidth.
22. If the root device for your instance is an EBS volume, you can change the size of the instance simply by changing its instance type, which is known as resizing it. 
23. If the root device for your instance is an instance store volume, you must migrate your application to a new instance with the instance type that you want. 
24. When you resize an instance, the resized instance usually has the same number of instance store volumes that you specified when you launched the original instance. If you want to add instance store volumes, you must migrate your application to a completely new instance with the instance type and instance store volumes that you want. An exception to this rule is when you resize to a storage-intensive instance type that by default contains a higher number of volumes. 
25. You can't resize an instance that was launched from a PV AMI to an instance type that is HVM only. 
26. You must stop your Amazon EBS–backed instance before you can change its instance type.
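The resize rules in items 22-26 can be sketched as a small eligibility check (a simplification; the function and parameter names are ours):

```python
def can_resize_in_place(root_device: str, virtualization: str,
                        target_hvm_only: bool, instance_running: bool) -> bool:
    """Sketch of the resize rules above.

    - Only EBS-backed instances can be resized in place.
    - A PV instance can't move to an HVM-only instance type.
    - An EBS-backed instance must be stopped before changing its type.
    """
    if root_device != "ebs":
        return False  # instance store: migrate to a new instance instead
    if virtualization == "pv" and target_hvm_only:
        return False  # PV AMIs can't run on HVM-only instance types
    return not instance_running  # must be stopped first

assert can_resize_in_place("ebs", "hvm", True, instance_running=False)
assert not can_resize_in_place("instance-store", "hvm", False, False)
assert not can_resize_in_place("ebs", "pv", True, False)
assert not can_resize_in_place("ebs", "hvm", False, instance_running=True)
```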
27. When you stop and start an instance, be aware of the following:
- We move the instance to new hardware; however, the instance ID does not change.
- If your instance is running in a VPC and has a public IP address, we release the address and give it a new public IP address. The instance retains its private IP addresses and any Elastic IP addresses.
- If your instance is running in EC2-Classic, we give it new public and private IP addresses, and disassociate any Elastic IP address that's associated with the instance. Therefore, to ensure that your users can continue to use the applications that you're hosting on your instance uninterrupted, you must re-associate any Elastic IP address after you restart your instance.
- If your instance is in an Auto Scaling group, the Auto Scaling service marks the stopped instance as unhealthy, and may terminate it and launch a replacement instance. To prevent this, you can suspend the Auto Scaling processes for the group while you're resizing your instance.
28. To ensure that your users can continue to use the applications that you're hosting on your instance uninterrupted, you must take any Elastic IP address that you've associated with your original instance and associate it with the new instance. 
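The addressing behavior in item 27 for a VPC instance can be sketched as a simple transform (field names are ours; the auto-assigned public IP changes, everything else is retained):

```python
def addresses_after_stop_start(instance: dict) -> dict:
    """Sketch of how addressing changes across a stop/start for an
    EC2-VPC instance, per the notes above."""
    result = dict(instance)
    if result.get("public_ip") and not result.get("elastic_ip"):
        # The auto-assigned public IP is released and a new one assigned.
        result["public_ip"] = "<new public IP>"
    # Private IP addresses and Elastic IP addresses are retained in a VPC.
    return result

before = {"public_ip": "203.0.113.10", "private_ip": "10.0.0.5", "elastic_ip": None}
after = addresses_after_stop_start(before)
assert after["public_ip"] == "<new public IP>"
assert after["private_ip"] == "10.0.0.5"
```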
29. If the current configuration of your instance is incompatible with the new instance type that you want, then you can't resize the instance to that instance type. Instead, you can migrate your application to a new instance with a configuration that is compatible with the new instance type that you want.
30. Amazon EC2 provides the following purchasing options to enable you to optimize your costs based on your needs:
- On-Demand instances — Pay, by the hour, for the instances that you launch.
- Reserved Instances — Purchase, at a significant discount, instances that are always available, for a term from one to three years.
- Scheduled Instances — Purchase instances that are always available on the specified recurring schedule, for a one-year term.
- Spot instances — Bid on unused instances, which can run as long as they are available and your bid is above the Spot price, at a significant discount.
- Dedicated hosts — Pay for a physical host that is fully dedicated to running your instances, and bring your existing per-socket, per-core, or per-VM software licenses to reduce costs.
- Dedicated instances — Pay, by the hour, for instances that run on single-tenant hardware.
31. If you require a capacity reservation, consider Reserved Instances or Scheduled Instances.
32. Spot instances are a cost-effective choice if you can be flexible about when your applications run and if they can be interrupted. 
33. Dedicated hosts can help you address compliance requirements and reduce costs by using your existing server-bound software licenses.
34. For Scheduled Instances, Amazon EC2 launches the instances at the scheduled time and then terminates them three minutes before the time period ends.
35. Reserved Instances provide you with a significant discount compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation. 
36. Each Reserved Instance that is specific to an Availability Zone can also provide a capacity reservation.
37. Generally speaking, you can save more money choosing Reserved Instances with a higher upfront payment. There are three payment options (No Upfront, Partial Upfront, All Upfront) and two term lengths (one-year or three-year).
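To compare a Reserved Instance against On-Demand, spread the upfront payment over the term to get a comparable hourly rate. The prices below are made up for illustration:

```python
def effective_hourly_rate(upfront: float, hourly: float, term_years: int) -> float:
    """Amortize the upfront payment over the term (8,760 hours/year)
    and add the recurring hourly rate."""
    hours = 8760 * term_years
    return upfront / hours + hourly

# Hypothetical pricing: On-Demand $0.10/hr vs a 1-year Partial Upfront RI
# at $300 upfront plus $0.025/hr.
on_demand = 0.10
ri = effective_hourly_rate(upfront=300.0, hourly=0.025, term_years=1)
assert ri < on_demand  # the reservation is cheaper per hour over the term
```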
38. You can find Reserved Instances offered by third-party sellers at shorter term lengths and lower prices as well. 
39. When you purchase a Reserved Instance, the reservation is automatically applied to running instances that match your specified parameters.
40. Reserved Instances do not renew automatically; you can continue using the EC2 instance without interruption, but you will be charged On-Demand rates. 
41. You can use Auto Scaling or other AWS services to launch the On-Demand instances that use your Reserved Instance benefits. 
42. Both Standard and Convertible Reserved Instances can be purchased to apply to instances in a specific Availability Zone, or to instances in a region. Reserved Instances purchased for a specific Availability Zone can be modified to apply to a region—but doing so removes the associated capacity reservation.
43. Convertible Reserved Instances can be exchanged for other Convertible Reserved Instances with entirely different configurations, including instance type, platform, or tenancy. It is not possible to exchange Standard Reserved Instances in this way.
44. Only Amazon EC2 Standard Reserved Instances can be sold in the Reserved Instance Marketplace; Convertible Reserved Instances cannot be sold.
45.  Other AWS Reserved Instances, such as Amazon RDS and Amazon ElastiCache Reserved Instances cannot be sold in the Reserved Instance Marketplace.
46. Convertible Reserved Instances are not available for purchase in the Reserved Instance Marketplace.
47. When your computing needs change, you can modify your Standard Reserved Instances and continue to benefit from your capacity reservation. Convertible Reserved Instances can be modified using the exchange process. 
48. Scheduled Instances are a good choice for workloads that do not run continuously, but do run on a regular schedule. For example, you can use Scheduled Instances for an application that runs during business hours or for batch processing that runs at the end of the week.
49. Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term. 
50. If you are flexible about when your instances run, Spot instances might meet your needs and decrease costs.
51. Amazon EC2 sets aside pools of EC2 instances in each Availability Zone for use as Scheduled Instances. Each pool supports a specific combination of instance type, operating system, and network (EC2-Classic or EC2-VPC).
52. You can't stop or reboot Scheduled Instances, but you can terminate them manually as needed. 
53.  If you terminate a Scheduled Instance before its current scheduled time period ends, you can launch it again after a few minutes. Otherwise, you must wait until the next scheduled time period.
54. Scheduled Instances are subject to the following limits:
- The following are the only supported instance types: C3, C4, M4, and R3.
- The required term is 365 days (one year).
- The minimum required utilization is 1,200 hours per year.
- You can purchase a Scheduled Instance up to three months in advance.
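The 1,200 hours/year minimum can be checked against a weekly schedule with simple arithmetic (a sketch; function name is ours):

```python
def meets_minimum_utilization(hours_per_week: float) -> bool:
    """Check a weekly schedule against the 1,200 hours/year minimum."""
    return hours_per_week * 52 >= 1200

# e.g. business hours: 9 hours/day, 5 days/week = 45 hours/week.
assert meets_minimum_utilization(45)      # 2,340 hours/year
assert not meets_minimum_utilization(20)  # 1,040 hours/year falls short
```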
55. Spot instances enable you to bid on unused EC2 instances, which can lower your Amazon EC2 costs significantly. The hourly price for a Spot instance (of each instance type in each Availability Zone) is set by Amazon EC2, and fluctuates depending on the supply of and demand for Spot instances. Your Spot instance runs whenever your bid exceeds the current market price.
56. Spot instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. For example, Spot instances are well-suited for data analysis, batch jobs, background processing, and optional tasks
57. Spot instance interruption—Amazon EC2 terminates your Spot instance when the Spot price exceeds your bid price or there are no longer any unused EC2 instances. Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates.
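The interruption rule in items 55-57 amounts to a simple condition: the instance runs only while capacity exists and the Spot price is at or below your bid. A sketch (names are ours):

```python
def spot_instance_runs(bid_price: float, spot_price: float,
                       capacity_available: bool = True) -> bool:
    """A Spot instance keeps running only while the market price is at
    or below your bid and unused EC2 capacity remains."""
    return capacity_available and spot_price <= bid_price

assert spot_instance_runs(bid_price=0.05, spot_price=0.03)
assert not spot_instance_runs(bid_price=0.05, spot_price=0.07)   # outbid
assert not spot_instance_runs(0.05, 0.03, capacity_available=False)
```

Note that when the instance does run, you pay the Spot price, never more than your bid.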
58. You can create launch configurations with a bid price so that Auto Scaling can launch Spot instances. 
59. There are scenarios where it can be useful to run Spot instances in an Amazon EMR cluster.
60. AWS CloudFormation enables you to create and manage a collection of AWS resources using a template in JSON format. AWS CloudFormation templates can include a Spot price. 
61. When you use Spot instances, you must be prepared for interruptions. Amazon EC2 can interrupt your Spot instance when the Spot price rises above your bid price, when the demand for Spot instances rises, or when the supply of Spot instances decreases.
62. Note that you can't stop and start an Amazon EBS-backed instance if it is a Spot instance, but you can reboot or terminate it.
63. Shutting down a Spot instance at the OS level results in the Spot instance being terminated. It is not possible to change this behavior.
64. Specify an Availability Zone group in your Spot instance request to tell the Spot service to launch a set of Spot instances in the same Availability Zone. Note that Amazon EC2 need not terminate all instances in an Availability Zone group at the same time. If Amazon EC2 must terminate one of the instances in an Availability Zone group, the others remain running.
65. A Spot fleet is a collection, or fleet, of Spot instances. The Spot fleet attempts to launch the number of Spot instances that are required to meet the target capacity that you specified in the Spot fleet request. The Spot fleet also attempts to maintain its target capacity fleet if your Spot instances are interrupted due to a change in Spot prices or available capacity.
66. A Spot instance pool is a set of unused EC2 instances with the same instance type, operating system, Availability Zone, and network platform (EC2-Classic or EC2-VPC). 
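A Spot fleet's capacity-maintenance behavior (item 65) boils down to requesting the shortfall between target and running capacity. A simplified sketch (real fleets also weight instance types and pools):

```python
def replacement_count(target_capacity: int, running: int) -> int:
    """How many Spot instances a fleet would request to restore its
    target capacity after interruptions."""
    return max(0, target_capacity - running)

assert replacement_count(target_capacity=10, running=7) == 3
assert replacement_count(10, 12) == 0  # never negative
```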
67. The following instance types are not supported for Spot: T2 and HS1.
68. By default, there is an account limit of 20 Spot instances per region.
69. You can specify encrypted EBS volumes in the launch specification for your Spot instances, but these volumes are not encrypted.
70. An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses
71. Dedicated Hosts and Dedicated Instances can both be used to launch Amazon EC2 instances onto physical servers that are dedicated for your use.
There are no performance, security, or physical differences between Dedicated Instances and instances on Dedicated Hosts. However, Dedicated Hosts give you additional visibility and control over how instances are placed on a physical server.
72. When you use Dedicated Hosts, you have control over instance placement on the host using the Host Affinity and Instance Auto-placement settings.
73. With Dedicated Instances, you don't have control over which host your instance launches and runs on. If your organization wants to use AWS but has existing software licenses with hardware compliance requirements, Dedicated Hosts provide visibility into the host's hardware so you can meet those requirements.
74. Dedicated Host Reservations provide a billing discount compared to running On-Demand Dedicated Hosts. Reservations are available in three payment options:
- No Upfront — Provides a discount on your Dedicated Host usage over a term and does not require an upfront payment. Available for a one-year term only.
- Partial Upfront — A portion of the reservation must be paid upfront and the remaining hours in the term are billed at a discounted rate. Available in one-year and three-year terms.
- All Upfront — Provides the lowest effective price. Available in one-year and three-year terms and covers the entire cost of the term upfront, with no additional charges going forward.
75. To use a Dedicated Host, you first allocate hosts for use in your account. You then launch instances onto the hosts by specifying host tenancy for the instance. The instance auto-placement setting allows you to control whether an instance can launch onto a particular host. When an instance is stopped and restarted, the Host affinity setting determines whether it's restarted on the same, or a different, host. 

Monitoring (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_automated_manual.html)
1. System Status Checks - monitor the AWS systems required to use your instance to ensure they are working properly. These checks detect problems with your instance that require AWS involvement to repair. When a system status check fails, you can choose to wait for AWS to fix the issue or you can resolve it yourself (for example, by stopping and restarting or terminating and replacing an instance). Examples of problems that cause system status checks to fail include:
Loss of network connectivity
Loss of system power
Software issues on the physical host
Hardware issues on the physical host
2. Instance Status Checks - monitor the software and network configuration of your individual instance. These checks detect problems that require your involvement to repair. When an instance status check fails, typically you will need to address the problem yourself (for example by rebooting the instance or by making modifications in your operating system). Examples of problems that may cause instance status checks to fail include:
Failed system status checks
Misconfigured networking or startup configuration
Exhausted memory
Corrupted file system
Incompatible kernel
3. Amazon EC2 Monitoring Scripts - Perl scripts that can monitor memory, disk, and swap file usage in your instances. 
4. Amazon CloudWatch Logs - monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, or other sources.
5. Manual monitoring: the Amazon EC2 Dashboard shows:
Service Health and Scheduled Events by region
Instance state
Status checks
Alarm status
Instance metric details (In the navigation pane click Instances, select an instance, and then click the Monitoring tab)
Volume metric details (In the navigation pane click Volumes, select a volume, and then click the Monitoring tab)
Amazon CloudWatch Dashboard shows:
Current alarms and status
Graphs of alarms and resources
Service health status
6. A status check gives you the information that results from automated checks performed by Amazon EC2. 
7. You can also see status on specific events scheduled for your instances. Events provide information about upcoming activities such as rebooting or retirement that are planned for your instances, along with the scheduled start and end time of each event.
8. With instance status monitoring, you can quickly determine whether Amazon EC2 has detected any problems that might prevent your instances from running applications.
9. Amazon EC2 performs automated checks on every running EC2 instance to identify hardware and software issues.
10. Amazon CloudWatch monitoring covers metrics such as CPU utilization, network traffic, and disk activity.
11. Status checks are performed every minute and each returns a pass or a fail status. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired.
12. Status checks are built into Amazon EC2, so they cannot be disabled or deleted. 
13. You can, however, create or delete alarms that are triggered based on the result of the status checks. For example, you can create an alarm to warn you if status checks fail on a specific instance.
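Such a status-check alarm can be sketched as a CloudWatch `put_metric_alarm` parameter set. The instance ID and SNS topic ARN below are placeholders; actually creating the alarm requires boto3 and valid AWS credentials.

```python
import json

# Hypothetical identifiers; substitute your own instance ID and SNS topic.
INSTANCE_ID = "i-0123456789abcdef0"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ops-alerts"

# Parameters for an alarm that fires when the instance's combined status
# check (the StatusCheckFailed metric) reports a failure. Status checks
# run every minute, so Period is 60 seconds.
alarm_params = {
    "AlarmName": f"status-check-failed-{INSTANCE_ID}",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed",
    "Dimensions": [{"Name": "InstanceId", "Value": INSTANCE_ID}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": [SNS_TOPIC_ARN],
}

# With boto3 and credentials configured, the call would be:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
print(json.dumps(alarm_params, indent=2))
```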
14. Amazon EC2 supports the following types of scheduled events for your instances:
Instance stop: The instance will be stopped. When you start it again, it's migrated to a new host computer. Applies only to instances backed by Amazon EBS.
Instance retirement: The instance will be stopped or terminated.
Reboot: Either the instance will be rebooted (instance reboot) or the host computer for the instance will be rebooted (system reboot).
System maintenance: The instance might be temporarily affected by network maintenance or power maintenance.
15. When AWS detects irreparable failure of the underlying host computer for your instance, it schedules the instance to stop or terminate, depending on the type of root device for the instance. If the root device is an EBS volume, the instance is scheduled to stop. If the root device is an instance store volume, the instance is scheduled to terminate.
16. Actions for Instances Backed by Amazon EBS
You can wait for the maintenance to occur as scheduled. Alternatively, you can stop and start the instance, which migrates it to a new host computer. For more information about stopping your instance, as well as information about the changes to your instance configuration when it's stopped, see Stop and Start Your Instance.
17. Actions for Instances Backed by Instance Store
You can wait for the maintenance to occur as scheduled. Alternatively, if you want to maintain normal operation during a scheduled maintenance window, you can launch a replacement instance from your most recent AMI, migrate all necessary data to the replacement instance before the scheduled maintenance window, and then terminate the original instance.
18. By default, Amazon EC2 sends metric data to CloudWatch in 5-minute periods. To send metric data for your instance to CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance.
19. Basic : Data is available automatically in 5-minute periods at no charge.
Detailed : Data is available in 1-minute periods for an additional cost. To get this level of data, you must specifically enable it for the instance. For the instances where you've enabled detailed monitoring, you can also get aggregated data across groups of similar instances.
20. Aggregate statistics are available for the instances that have detailed monitoring enabled. Instances that use basic monitoring are not included in the aggregates.
21. In addition, Amazon CloudWatch does not aggregate data across regions. Therefore, metrics are completely separate between regions.
22. Because no dimension is specified, CloudWatch returns statistics for all dimensions in the AWS/EC2 namespace.
23. This technique for retrieving all dimensions across an AWS namespace does not work for custom namespaces that you publish to Amazon CloudWatch. With custom namespaces, you must specify the complete set of dimensions that are associated with any given data point to retrieve statistics that include the data point.
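As a sketch of what such a statistics request looks like (the instance ID is a placeholder), here is a `get_metric_statistics` parameter set for one EC2 metric:

```python
from datetime import datetime, timedelta, timezone

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Average CPU utilization for one (hypothetical) instance over the last
# hour, at the 5-minute granularity of basic monitoring. Omitting
# Dimensions in the AWS/EC2 namespace returns statistics across all
# dimensions; in a custom namespace you must list every dimension.
request = {
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "StartTime": start,
    "EndTime": end,
    "Period": 300,
    "Statistics": ["Average"],
}
# boto3.client("cloudwatch").get_metric_statistics(**request) would send it.
```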
24. You can aggregate statistics for the EC2 instances in an Auto Scaling group. Note that Amazon CloudWatch cannot aggregate data across regions. Metrics are completely separate between regions.
25. After you launch an instance, you can open the Amazon EC2 console and view the monitoring graphs for an instance on the Monitoring tab. Each graph is based on one of the available Amazon EC2 metrics.
The following graphs are available:
Average CPU Utilization (Percent)
Average Disk Reads (Bytes)
Average Disk Writes (Bytes)
Maximum Network In (Bytes)
Maximum Network Out (Bytes)
Summary Disk Read Operations (Count)
Summary Disk Write Operations (Count)
Summary Status (Any)
Summary Status Instance (Count)
Summary Status System (Count)
26. You can create a CloudWatch alarm that monitors CloudWatch metrics for one of your instances. CloudWatch will automatically send you a notification when the metric reaches a threshold you specify. You can create a CloudWatch alarm using the Amazon EC2 console, or using the more advanced options provided by the CloudWatch console.
27. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
28. You can add the stop, terminate, reboot, or recover actions to any alarm that is set on an Amazon EC2 per-instance metric, including basic and detailed monitoring metrics provided by Amazon CloudWatch (in the AWS/EC2 namespace), as well as any custom metrics that include the “InstanceId=” dimension, as long as the InstanceId value refers to a valid running Amazon EC2 instance.
29. If you want to use an IAM role to stop, terminate, or reboot an instance using an alarm action, you can only use the EC2ActionsAccess role. Other IAM roles are not supported. If you are using another IAM role, you cannot stop, terminate, or reboot the instance. However, you can still see the alarm state and perform any other actions such as Amazon SNS notifications or Auto Scaling policies.
30. If you are using temporary security credentials granted using the AWS Security Token Service (AWS STS), you cannot recover an Amazon EC2 instance using alarm actions.
31. You can create an alarm that stops an Amazon EC2 instance when a certain threshold has been met. For example, you may run development or test instances and occasionally forget to shut them off. You can create an alarm that is triggered when the average CPU utilization percentage has been lower than 10 percent for 24 hours, signaling that it is idle and no longer in use. You can adjust the threshold, duration, and period to suit your needs, plus you can add an Amazon Simple Notification Service (Amazon SNS) notification, so that you will receive an email when the alarm is triggered.
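A minimal sketch of that idle-instance alarm, assuming a hypothetical instance and SNS topic and the built-in `arn:aws:automate:<region>:ec2:stop` alarm action:

```python
REGION = "us-east-1"   # hypothetical region, account, and instance
stop_action = f"arn:aws:automate:{REGION}:ec2:stop"

idle_alarm = {
    "AlarmName": "stop-idle-dev-instance",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 3600,            # one-hour samples,
    "EvaluationPeriods": 24,   # evaluated over 24 hours
    "Threshold": 10.0,         # below 10 percent CPU counts as idle
    "ComparisonOperator": "LessThanThreshold",
    "AlarmActions": [stop_action,
                     "arn:aws:sns:us-east-1:111122223333:ops-alerts"],
}
```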
32. You can create an alarm that terminates an EC2 instance automatically when a certain threshold has been met (as long as termination protection is not enabled for the instance). For example, you might want to terminate an instance when it has completed its work, and you don’t need the instance again. If you might want to use the instance later, you should stop the instance instead of terminating it.
33. You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically reboots the instance. The reboot alarm action is recommended for Instance Health Check failures (as opposed to the recover alarm action, which is suited for System Health Check failures). An instance reboot is equivalent to an operating system reboot. In most cases, it takes only a few minutes to reboot your instance. When you reboot an instance, it remains on the same physical host, so your instance keeps its public DNS name, private IP address, and any data on its instance store volumes.
34. Rebooting an instance doesn't start a new instance billing hour, unlike stopping and restarting your instance. 
35. You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair. Terminated instances cannot be recovered. A recovered instance is identical to the original instance, including the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata.
36. When the StatusCheckFailed_System alarm is triggered, and the recover action is initiated, you will be notified by the Amazon SNS topic that you chose when you created the alarm and associated the recover action. During instance recovery, the instance is migrated during an instance reboot, and any data that is in-memory is lost. When the process is complete, information is published to the SNS topic you've configured for the alarm. Anyone who is subscribed to this SNS topic will receive an email notification that includes the status of the recovery attempt and any further instructions. You will notice an instance reboot on the recovered instance. If your instance has a public IP address, it retains the public IP address after recovery.
37. To avoid a race condition between the reboot and recover actions, we recommend that you set the alarm threshold to 2 for 1 minute when creating alarms that recover an Amazon EC2 instance.
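A hedged sketch of such a recover alarm, with the "threshold of 2 for 1 minute" recommendation expressed as two consecutive 60-second evaluation periods (all identifiers are placeholders):

```python
REGION = "us-east-1"   # hypothetical region and instance
INSTANCE_ID = "i-0123456789abcdef0"

recover_alarm = {
    "AlarmName": f"recover-{INSTANCE_ID}",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",   # system check, not instance check
    "Dimensions": [{"Name": "InstanceId", "Value": INSTANCE_ID}],
    "Statistic": "Maximum",
    "Period": 60,              # two consecutive 1-minute periods
    "EvaluationPeriods": 2,    # must fail before recovery starts
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": [f"arn:aws:automate:{REGION}:ec2:recover",
                     "arn:aws:sns:us-east-1:111122223333:ops-alerts"],
}
```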
38. You can view alarm and action history in the Amazon CloudWatch console. Amazon CloudWatch keeps the last two weeks' worth of alarm and action history.
39. The MemoryUtilization metric is a custom metric. In order to use the MemoryUtilization metric, you must install the Perl scripts for Linux instances. 
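For illustration, here is a single memory datapoint shaped the way an instance-side publisher would send it; the monitoring scripts publish under the System/Linux namespace, and the instance ID and value below are made up:

```python
# One custom-metric datapoint. Memory usage is measured locally on the
# instance, because CloudWatch's built-in EC2 metrics do not include it.
memory_datapoint = {
    "Namespace": "System/Linux",
    "MetricData": [{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        "Value": 42.5,
        "Unit": "Percent",
    }],
}
# boto3.client("cloudwatch").put_metric_data(**memory_datapoint) would publish it.
```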

Network and Security :
1. If you access Amazon EC2 using the command line tools or an API, you'll need your access key ID and secret access key.
2. Instances can fail or terminate for reasons outside of your control. If an instance fails and you launch a replacement instance, the replacement has a different public IP address than the original. 
3. However, if your application needs a static IP address, you can use an Elastic IP address.
4. You can use security groups to control who can access your instances. These are analogous to an inbound network firewall that enables you to specify the protocols, ports, and source IP ranges that are allowed to reach your instances.
5. You can create multiple security groups and assign different rules to each group. You can then assign each instance to one or more security groups, and we use the rules to determine which traffic is allowed to reach the instance. You can configure a security group so that only specific IP addresses or specific security groups have access to the instance.

Key-Pair : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html

1. Amazon EC2 stores the public key only, and you store the private key.
2. Anyone who possesses your private key can decrypt your login information, so it's important that you store your private keys in a secure place.
3. You can have up to five thousand key pairs per region.
4. When you launch an instance, you should specify the name of the key pair you plan to use to connect to the instance. If you don't specify the name of an existing key pair when you launch an instance, you won't be able to connect to the instance. 
5. Amazon EC2 doesn't keep a copy of your private key; therefore, if you lose a private key, there is no way to recover it. 
6. If you lose the private key for an instance store-backed instance, you can't access the instance; you should terminate the instance and launch another instance using a new key pair. 
7. If you lose the private key for an EBS-backed Linux instance, you can regain access to your instance.
8. If you have several users that require access to a single instance, you can add user accounts to your instance instead of sharing one private key among all users.
9.  You can create a key pair for each user, and add the public key information from each key pair to the .ssh/authorized_keys file for each user on your EC2 instance. 
10. You can then distribute the private key files to your users. That way, you do not have to distribute the same private key file that's used for the root account to multiple users.
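The per-user workflow in points 8-10 can be sketched as follows. The public keys are fake, and a temporary directory stands in for the real home directories on the instance:

```python
import os
import tempfile

# Hypothetical public keys, one per user; in practice each comes from a
# key pair generated for that user.
user_keys = {
    "alice": "ssh-rsa AAAAB3Nza...alice alice@example.com",
    "bob":   "ssh-rsa AAAAB3Nza...bob bob@example.com",
}

home_root = tempfile.mkdtemp()   # stands in for /home on the instance
for user, pubkey in user_keys.items():
    ssh_dir = os.path.join(home_root, user, ".ssh")
    os.makedirs(ssh_dir, mode=0o700)
    auth_keys = os.path.join(ssh_dir, "authorized_keys")
    with open(auth_keys, "a") as f:
        f.write(pubkey + "\n")   # one public key line per user
    os.chmod(auth_keys, 0o600)   # sshd ignores keys with loose permissions
```

Each user then connects with their own private key, and revoking one user means deleting one line, not rotating a shared key.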
11. Amazon EC2 does not accept DSA keys. Make sure your key generator is set up to create RSA keys. Supported lengths: 1024, 2048, and 4096.
12. The public key that you specified when you launched an instance is also available to you through its instance metadata. To view the public key that you specified when launching the instance, use the following command from your instance:
GET http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
13.  if you change the key pair that you use to connect to the instance, we don't update the instance metadata to show the new public key; you'll continue to see the public key for the key pair you specified when you launched the instance in the instance metadata.
14. When you delete a key pair, you are only deleting Amazon EC2's copy of the public key. Deleting a key pair doesn't affect the private key on your computer or the public key on any instances already launched using that key pair. You can't launch a new instance using a deleted key pair, but you can continue to connect to any instances that you launched using a deleted key pair, as long as you still have the private key (.pem) file.
15. If you're using an Auto Scaling group (for example, in an Elastic Beanstalk environment), ensure that the key pair you're deleting is not specified in your launch configuration. Auto Scaling launches a replacement instance if it detects an unhealthy instance; however, the instance launch fails if the key pair cannot be found.
16. If you create a Linux AMI from an instance, and then use the AMI to launch a new instance in a different region or account, the new instance includes the public key from the original instance. This enables you to connect to the new instance using the same private key file as your original instance. You can remove this public key from your instance by removing its entry from the .ssh/authorized_keys file using a text editor of your choice. 
17. If you lose the private key for an EBS-backed instance, you can regain access to your instance. You must stop the instance, detach its root volume and attach it to another instance as a data volume, modify the authorized_keys file, move the volume back to the original instance, and restart the instance.
18. This procedure isn't supported for instance store-backed instances. To determine the root device type of your instance, open the Amazon EC2 console, choose Instances, select the instance, and check the value of Root device type in the details pane. The value is either ebs or instance store. If the root device is an instance store volume, you must have the private key in order to connect to the instance.
19. When you stop an instance, the data on any instance store volumes is erased. Therefore, if you have any data on instance store volumes that you want to keep, be sure to back it up to persistent storage.

Security Groups : 
1.  You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When we decide whether to allow traffic to reach an instance, we evaluate all the rules from all the security groups that are associated with the instance.
2. If you have requirements that aren't met by security groups, you can maintain your own firewall on any of your instances in addition to using security groups
3.  You can't specify a security group that you created for a VPC when you launch an instance in EC2-Classic.
4. After you launch an instance in EC2-Classic, you can't change its security groups. However, you can add rules to or remove rules from a security group, and those changes are automatically applied to all instances that are associated with the security group.
5. If you're using EC2-VPC, you must use security groups created specifically for your VPC. When you launch an instance in a VPC, you must specify a security group for that VPC. You can't specify a security group that you created for EC2-Classic when you launch an instance in a VPC.
6. After you launch an instance in a VPC, you can change its security groups. Security groups are associated with network interfaces. Changing an instance's security groups changes the security groups associated with the primary network interface (eth0).
7. You can also change the security groups associated with any other network interface. 
8. You can change the rules of a security group, and those changes are automatically applied to all instances that are associated with the security group.
9. In EC2-VPC, you can associate a network interface with up to 5 security groups and add up to 50 rules to a security group.
10. When you specify a security group for a nondefault VPC in the CLI or API actions, you must use the security group ID, not the security group name, to identify the security group.
11. Security groups for EC2-VPC have additional capabilities that aren't supported by security groups for EC2-Classic.
12. The rules of a security group control the inbound traffic that's allowed to reach the instances that are associated with the security group and the outbound traffic that's allowed to leave them.
13. By default, security groups allow all outbound traffic.
14. Security group rules are always permissive; you can't create rules that deny access.
15. You can add and remove rules at any time. You can't change the outbound rules for EC2-Classic. If you're using the Amazon EC2 console, you can modify existing rules, and you can copy the rules from an existing security group to a new security group.
16. Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
17. If your instance (host A) initiates traffic to host B and uses a protocol other than TCP, UDP, or ICMP, your instance’s firewall only tracks the IP address and protocol number for the purpose of allowing response traffic from host B. If host B initiates traffic to your instance in a separate request within 600 seconds of the original request or response, your instance accepts it regardless of inbound security group rules, because it’s regarded as response traffic. For VPC security groups, you can control this by modifying your security group’s outbound rules to permit only certain types of outbound traffic. Alternatively, you can use a network ACL for your subnet — network ACLs are stateless and therefore do not automatically allow response traffic. 
18. An individual IP address, in CIDR notation. Be sure to use the /32 prefix after the IP address; if you use the /0 prefix after the IP address, this opens the port to everyone. For example, specify the IP address 203.0.113.1 as 203.0.113.1/32.
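The difference between the /32 and /0 prefixes can be checked directly with Python's ipaddress module:

```python
import ipaddress

single_host = ipaddress.ip_network("203.0.113.1/32")
everyone = ipaddress.ip_network("0.0.0.0/0")

assert single_host.num_addresses == 1        # exactly one address
assert everyone.num_addresses == 2 ** 32     # the whole IPv4 space

# "203.0.113.1/0" is rejected outright, because /0 leaves host bits set;
# the deliberate open-to-everyone form is 0.0.0.0/0.
try:
    ipaddress.ip_network("203.0.113.1/0")
except ValueError:
    pass
```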
19. The name (EC2-Classic) or ID (EC2-Classic or EC2-VPC) of a security group. This allows instances associated with the specified security group to access instances associated with this security group. (Note that this does not add rules from the source security group to this security group.) You can specify one of the following security groups:
The current security group.
EC2-Classic: A different security group for EC2-Classic in the same region.
EC2-Classic: A security group for another AWS account in the same region (add the AWS account ID as a prefix; for example, 111122223333/sg-edcd9784).
EC2-VPC: A different security group for the same VPC or a peer VPC.
20. When you specify a security group as the source or destination for a rule, the rule affects all instances associated with the security group. Incoming traffic is allowed based on the private IP addresses of the instances that are associated with the source security group (and not the public IP or Elastic IP addresses). 
21. If your security group rule references a security group in a peer VPC, and the referenced security group or VPC peering connection is deleted, the rule is marked as stale. 
22. If there is more than one rule for a specific port, we apply the most permissive rule. For example, if you have a rule that allows access to TCP port 22 (SSH) from IP address 203.0.113.1 and another rule that allows access to TCP port 22 from everyone, everyone has access to TCP port 22.
23. When you associate multiple security groups with an instance, the rules from each security group are effectively aggregated to create one set of rules. We use this set of rules to determine whether to allow access.
24. Because you can assign multiple security groups to an instance, an instance can have hundreds of rules that apply. This might cause problems when you access the instance. Therefore, we recommend that you condense your rules as much as possible.
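A toy model of how rules from multiple groups aggregate, using two hypothetical groups; real evaluation also covers port ranges and security-group sources, which this sketch omits:

```python
import ipaddress

# Inbound rules of two hypothetical groups, as (protocol, port, source CIDR).
groups = {
    "web":   [("tcp", 80, "0.0.0.0/0"), ("tcp", 443, "0.0.0.0/0")],
    "admin": [("tcp", 22, "203.0.113.1/32"), ("tcp", 80, "10.0.0.0/16")],
}

# An instance in both groups gets the union of all rules.
effective = {rule for rules in groups.values() for rule in rules}

def allowed(protocol, port, source_ip):
    # Most-permissive wins: traffic passes if any aggregated rule matches.
    return any(
        proto == protocol and rule_port == port
        and ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr)
        for proto, rule_port, cidr in effective
    )

assert allowed("tcp", 80, "198.51.100.7")      # open web rule wins over 10.0.0.0/16
assert not allowed("tcp", 22, "198.51.100.7")  # SSH stays limited to 203.0.113.1
```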
25. Your security groups use connection tracking to track information about traffic to and from the instance. Rules are applied based on the connection state of the traffic to determine if the traffic is allowed or denied. 
26. This allows security groups to be stateful — responses to inbound traffic are allowed to flow out of the instance regardless of outbound security group rules, and vice versa. For example, if you initiate an ICMP ping command to your instance from your home computer, and your inbound security group rules allow ICMP traffic, information about the connection (including the port information) is tracked. Response traffic from the instance for the ping command is not tracked as a new request, but rather as an established connection, and is allowed to flow out of the instance, even if your outbound security group rules restrict outbound ICMP traffic.
27. Not all flows of traffic are tracked. If a security group rule permits TCP or UDP flows for all traffic (0.0.0.0/0) and there is a corresponding rule in the other direction that permits the response traffic, then that flow of traffic is not tracked. The response traffic is therefore allowed to flow based on the inbound or outbound rule that permits the response traffic, and not on tracking information.
28. An existing flow of traffic that is tracked may not be interrupted when you remove the security group rule that enables that flow. Instead, the flow is interrupted when it's stopped by you or the other host for at least a few minutes (or up to 5 days for established TCP connections). For UDP, this may require terminating actions on the remote side of the flow. An untracked flow of traffic is immediately interrupted if the rule that enables the flow is removed or modified. For example, if you remove a rule that allows all inbound SSH traffic (0.0.0.0/0) to the instance, then your existing SSH connections to the instance are immediately dropped.
29. If you want to ensure that traffic is immediately interrupted when you remove a security group rule, you can use a network ACL for your subnet — network ACLs are stateless and therefore do not automatically allow response traffic. 
30. Your AWS account automatically has a default security group per region for EC2-Classic. When you create a VPC, we automatically create a default security group for the VPC. If you don't specify a different security group when you launch an instance, the instance is automatically associated with the appropriate default security group.
31. A default security group is named default, and it has an ID assigned by AWS. The following are the initial settings for each default security group:
Allow inbound traffic only from other instances associated with the default security group
Allow all outbound traffic from the instance
32. The default security group specifies itself as a source security group in its inbound rules. This is what allows instances associated with the default security group to communicate with other instances associated with the default security group.
33. You can change the rules for a default security group. For example, you can add an inbound rule to allow SSH connections so that specific hosts can manage the instance.
34. You can't delete a default security group. 
35. After you launch an instance in EC2-Classic, you can't change its security groups. After you launch an instance in a VPC, you can change its security groups. 
36. When you add a rule to a security group, the new rule is automatically applied to any instances associated with the security group.
37. When you delete a rule from a security group, the change is automatically applied to any instances associated with the security group.
38. You can't delete a security group that is associated with an instance. You can't delete the default security group. You can't delete a security group that is referenced by a rule in another security group. If your security group is referenced by one of its own rules, you must delete the rule before you can delete the security group.

AWS Resource Access (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingIAM.html)
1. You can use IAM to control how other users use resources in your AWS account, and you can use security groups to control access to your Amazon EC2 instances. 
2. IAM enables you to do the following:
- Create users and groups under your AWS account
- Assign unique security credentials to each user under your AWS account
- Control each user's permissions to perform tasks using AWS resources
- Allow the users in another AWS account to share your AWS resources
- Create roles for your AWS account and define the users or services that can assume them
- Use existing identities for your enterprise to grant permissions to perform tasks using AWS resources
3. By default, IAM users don't have permission to create or modify Amazon EC2 resources, or perform tasks using the Amazon EC2 API.
4. To allow IAM users to create or modify resources and perform tasks, you must create IAM policies that grant IAM users permission to use the specific resources and API actions they'll need, and then attach those policies to the IAM users or groups that require those permissions.
5. When you attach a policy to a user or group of users, it allows or denies the users permission to perform the specified tasks on the specified resources.
6. An IAM policy must grant or deny permission to use one or more Amazon EC2 actions. It must also specify the resources that can be used with the action, which can be all resources, or in some cases, specific resources. The policy can also include conditions that you apply to the resource.
7. Amazon EC2 partially supports resource-level permissions. This means that for some EC2 API actions, you cannot specify which resource a user is allowed to work with for that action; instead, you have to allow users to work with all resources for that action.
8. Each IAM policy statement applies to the resources that you specify using their ARNs.
9. An ARN has the following general syntax:
arn:aws:[service]:[region]:[account]:resourceType/resourcePath
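A small parser for ARNs of this general shape; note that some services separate the resource with ":" rather than "/", and this sketch handles only the "/" form shown above:

```python
def parse_arn(arn):
    # arn:aws:[service]:[region]:[account]:resourceType/resourcePath
    prefix, partition, service, region, account, resource = arn.split(":", 5)
    if prefix != "arn":
        raise ValueError(f"not an ARN: {arn}")
    resource_type, _, resource_path = resource.partition("/")
    return {"partition": partition, "service": service, "region": region,
            "account": account, "type": resource_type, "path": resource_path}

# A hypothetical EC2 instance ARN, split into its fields.
parts = parse_arn("arn:aws:ec2:us-east-1:111122223333:instance/i-0123456789abcdef0")
assert parts["service"] == "ec2"
assert parts["type"] == "instance"
```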
10. In a policy statement, you can optionally specify conditions that control when it is in effect. Each condition contains one or more key-value pairs. Condition keys are not case sensitive. We've defined AWS-wide condition keys, plus additional service-specific condition keys.
If you specify multiple conditions, or multiple keys in a single condition, we evaluate them using a logical AND operation. If you specify a single condition with multiple values for one key, we evaluate the condition using a logical OR operation. For permission to be granted, all conditions must be met.
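The AND/OR evaluation can be modelled in a few lines. The keys and request context here are illustrative only, and the sketch ignores the case-insensitive key matching that real policy evaluation performs:

```python
# A condition block with two keys. For the condition to be met, every
# key must match (AND), and for each key any one of its listed values
# suffices (OR).
condition = {
    "ec2:Region": ["us-east-1", "us-west-2"],
    "ec2:InstanceType": ["t2.micro"],
}

def condition_met(condition, request_context):
    return all(
        request_context.get(key) in values
        for key, values in condition.items()
    )

assert condition_met(condition,
                     {"ec2:Region": "us-west-2", "ec2:InstanceType": "t2.micro"})
assert not condition_met(condition,
                         {"ec2:Region": "eu-west-1", "ec2:InstanceType": "t2.micro"})
```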
11. Amazon EC2 implements the AWS-wide condition keys , plus the following service-specific condition keys.
12. Many condition keys are specific to a resource, and some API actions use multiple resources. If you write a policy with a condition key, use the Resource element of the statement to specify the resource to which the condition key applies. If not, the policy may prevent users from performing the action at all, because the condition check fails for the resources to which the condition key does not apply. If you do not want to specify a resource, or if you've written the Action element of your policy to include multiple API actions, then you must use the ...IfExists condition type to ensure that the condition key is ignored for resources that do not use it.
13. It can take several minutes for policy changes to propagate before they take effect. Therefore, we recommend that you allow five minutes to pass before you test your policy updates.
14. You apply a policy to a user or to a group of users.

IAM Roles :
1. Applications must sign their API requests with AWS credentials.
2. You need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting them from other users.
3. However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials.
4. Use IAM roles so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use.
5. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles as follows:
Create an IAM role.
Define which accounts or AWS services can assume the role.
Define which API actions and resources the application can use after assuming the role.
Specify the role when you launch your instances.
Have the application retrieve a set of temporary credentials and use them.
6. For example, you can use IAM roles to grant permissions to applications running on your instances that need to use a bucket in Amazon S3.
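The role set-up described above boils down to two JSON documents: a trust policy saying who may assume the role, and a permissions policy saying what the role may do. The bucket and role names below are hypothetical:

```python
import json

# Trust policy: EC2 may assume the role ("define which accounts or AWS
# services can assume the role").
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: the application may read one hypothetical bucket
# ("define which API actions and resources the application can use").
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": [
            "arn:aws:s3:::example-app-bucket",
            "arn:aws:s3:::example-app-bucket/*",
        ],
    }],
}

# With boto3, the role would be created with:
#   iam.create_role(RoleName="s3access",
#                   AssumeRolePolicyDocument=json.dumps(trust_policy))
```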
7. Amazon EC2 uses an instance profile as a container for an IAM role. When you create an IAM role using the console, the console creates an instance profile automatically and gives it the same name as the role it corresponds to. If you use the AWS CLI, API, or an AWS SDK to create a role, you create the role and instance profile as separate actions, and you might give them different names. To launch an instance with an IAM role, you specify the name of its instance profile. When you launch an instance using the Amazon EC2 console, you can select a role to associate with the instance; however, the list that's displayed is actually a list of instance profile names.
8. You can specify permissions for IAM roles by creating a policy in JSON format. These are similar to the policies that you create for IAM users. If you make a change to a role, the change is propagated to all instances, simplifying credential management.
9. You can't assign a role to an existing instance; you can only specify a role when you launch a new instance.
10. An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. These security credentials are temporary and we rotate them automatically. We make new credentials available at least five minutes prior to the expiration of the old credentials.
11. If you use services that use instance metadata with IAM roles, ensure that you don't expose your credentials when the services make HTTP calls on your behalf. The types of services that could expose your credentials include HTTP proxies, HTML/CSS validator services, and XML processors that support XML inclusion.
12. The following command retrieves the security credentials for an IAM role named s3access.
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
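From inside the instance, the same lookup can be done with the standard library. Only the URL construction is exercised here, since the link-local metadata address resolves only on EC2; the role name matches the curl example above:

```python
import json
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest/meta-data"
ROLE_NAME = "s3access"   # the role name from the curl example

creds_url = f"{METADATA_BASE}/iam/security-credentials/{ROLE_NAME}"

def fetch_role_credentials(url=creds_url):
    # Reachable only from inside the instance itself. The JSON response
    # carries AccessKeyId, SecretAccessKey, Token, and Expiration.
    with urllib.request.urlopen(url, timeout=2) as resp:
        return json.loads(resp.read())
```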
13. For applications, AWS CLI, and Tools for Windows PowerShell commands that run on the instance, you do not have to explicitly get the temporary security credentials — the AWS SDKs, AWS CLI, and Tools for Windows PowerShell automatically get the credentials from the EC2 instance metadata service and use them. To make a call outside of the instance using temporary security credentials (for example, to test IAM policies), you must provide the access key, secret key, and the session token. 
14. To enable an IAM user to launch an instance with an IAM role, you must grant the user permission to pass the role to the instance.
15. You could grant IAM users access to all your roles by specifying the resource as "*" in this policy. However, consider whether users who launch instances with your roles (ones that exist or that you'll create later on) might be granted permissions that they don't need or shouldn't have.
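A sketch of a policy that grants iam:PassRole for a single role rather than "*", expressed as a Python dict for readability (the s3access role name is reused from the earlier example; the account ID is a placeholder):

```python
import json

# Hypothetical IAM policy granting a user permission to pass only
# the "s3access" role when launching instances; 123456789012 is a
# placeholder account ID.
pass_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "arn:aws:iam::123456789012:role/s3access"
    }]
}

print(json.dumps(pass_role_policy, indent=2))
```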
16. After you create an IAM role, it may take several seconds for the permissions to propagate. If your first attempt to launch an instance with a role fails, wait a few seconds before trying again. 
17. After you launch an instance in EC2-Classic, you can't change its security groups. After you launch an instance in a VPC, you can change its security groups. 

VPC (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-vpc.html) : 
1. Amazon Virtual Private Cloud (Amazon VPC) enables you to define a virtual network in your own logically isolated area within the AWS cloud, known as a virtual private cloud (VPC). You can launch your AWS resources, such as instances, into your VPC. Your VPC closely resembles a traditional network that you might operate in your own data center, with the benefits of using AWS's scalable infrastructure. You can configure your VPC; you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings. 
2. You can connect instances in your VPC to the Internet. You can connect your VPC to your own corporate data center, making the AWS cloud an extension of your data center. To protect the resources in each subnet, you can use multiple layers of security, including security groups and network access control lists.
3. If your account supports EC2-VPC only, we create a default VPC for you. A default VPC is a VPC that is already configured and ready for you to use. 
4. By launching your instances into a VPC instead of EC2-Classic, you gain the ability to:
Assign static private IP addresses to your instances that persist across starts and stops
Assign multiple IP addresses to your instances
Define network interfaces, and attach one or more network interfaces to your instances
Change security group membership for your instances while they're running
Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
Add an additional layer of access control to your instances in the form of network access control lists (ACL)
Run your instances on single-tenant hardware
5. EC2-Classic : We select a single private IP address for your instance; multiple IP addresses are not supported.
An Elastic IP is disassociated from your instance when you stop it.
You can assign an unlimited number of security groups to an instance when you launch it.
You can add rules for inbound traffic only.
Your instance runs on shared hardware.
Your instance can access the Internet. Your instance automatically receives a public IP address, and can access the Internet directly through the AWS network edge.

Default VPC : 
You can assign multiple private IP addresses to your instance.
An Elastic IP remains associated with your instance when you stop it.
You can assign security groups to your instance when you launch it and while it's running.
You can add rules for inbound and outbound traffic.
You can run your instance on shared hardware or single-tenant hardware.
By default, your instance can access the Internet. Your instance receives a public IP address by default. An Internet gateway is attached to your default VPC, and your default subnet has a route to the Internet gateway.

6. Instances 1, 2, 3, and 4 are in the EC2-Classic platform. 1 and 2 were launched by one account, and 3 and 4 were launched by a different account. These instances can communicate with each other and can access the Internet directly.
7. Instances 5 and 6 are in different subnets in the same VPC in the EC2-VPC platform. They were launched by the account that owns the VPC; no other account can launch instances in this VPC. These instances can communicate with each other and can access instances in EC2-Classic and the Internet through the Internet gateway.
8. You can migrate an Elastic IP address from EC2-Classic to EC2-VPC. You can't migrate an Elastic IP address that was originally allocated for use in a VPC to EC2-Classic.
9. An EC2-Classic instance can communicate with instances in a VPC using public IP addresses, or you can use ClassicLink to enable communication over private IP addresses.
10. You can't migrate an instance from EC2-Classic to a VPC. However, you can migrate your application from an instance in EC2-Classic to an instance in a VPC
11. If you're using ClassicLink, you can register a linked EC2-Classic instance with a load balancer in a VPC, provided that the VPC has a subnet in the same Availability Zone as the instance.
12. You can't migrate a load balancer from EC2-Classic to a VPC. You can't register an instance in a VPC with a load balancer in EC2-Classic. 
13. You can change the network platform for your Reserved Instances from EC2-Classic to EC2-VPC. 
14. A linked EC2-Classic instance can use VPC security groups through ClassicLink to control traffic to and from the VPC. VPC instances can't use EC2-Classic security groups.
15. You can't migrate a security group from EC2-Classic to a VPC. You can copy rules from a security group in EC2-Classic to a security group in a VPC. 
16. Spot instances can't be shared or moved between EC2-Classic and a VPC.
17. Create a nondefault VPC and launch your VPC-only instance into it by specifying a subnet ID or a network interface ID in the request. Note that you must create a nondefault VPC if you do not have a default VPC and you are using the AWS CLI, Amazon EC2 API, or AWS SDK to launch a VPC-only instance. 
18. Launch your VPC-only instance using the Amazon EC2 console. The Amazon EC2 console creates a nondefault VPC in your account and launches the instance into the subnet in the first Availability Zone. The console creates the VPC with the following attributes:
One subnet in each Availability Zone, with the public IP addressing attribute set to true so that instances receive a public IP address. 
An Internet gateway, and a main route table that routes traffic in the VPC to the Internet gateway. This enables the instances you launch in the VPC to communicate over the Internet. 
A default security group for the VPC and a default network ACL that is associated with each subnet. 

Multiple Private IP Addresses : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
1. In EC2-VPC, you can specify multiple private IP addresses for your instances. The number of network interfaces and private IP addresses that you can specify for an instance depends on the instance type. 
2. It can be useful to assign multiple private IP addresses to an instance in your VPC to do the following:
Host multiple websites on a single server by using multiple SSL certificates on a single server and associating each certificate with a specific IP address.
Operate network appliances, such as firewalls or load balancers, that have multiple private IP addresses for each network interface.
Redirect internal traffic to a standby instance in case your instance fails, by reassigning the secondary private IP address to the standby instance.
3. You can assign a secondary private IP address to any network interface. The network interface can be attached to or detached from the instance.
4. Keep the following in mind when working with secondary private IP addresses:
You must choose a secondary private IP address that's in the CIDR block range of the subnet for the network interface.
Security groups apply to network interfaces, not to IP addresses. Therefore, IP addresses are subject to the security group of the network interface in which they're specified.
Secondary private IP addresses can be assigned to and unassigned from elastic network interfaces attached to running or stopped instances.
Secondary private IP addresses that are assigned to a network interface can be reassigned to another one if you explicitly allow it.
When assigning multiple secondary private IP addresses to a network interface using the command line tools or API, the entire operation fails if one of the secondary private IP addresses can't be assigned.
Primary private IP addresses, secondary private IP addresses, and any associated Elastic IP addresses remain with the network interface when it is detached from an instance or attached to another instance.
Although you can't move the primary network interface from an instance, you can reassign the secondary private IP address of the primary network interface to another network interface.
You can move any additional network interface from one instance to another.
5. The following list explains how multiple IP addresses work with Elastic IP addresses:
Each private IP address can be associated with a single Elastic IP address, and vice versa.
When a secondary private IP address is reassigned to another interface, the secondary private IP address retains its association with an Elastic IP address.
When a secondary private IP address is unassigned from an interface, an associated Elastic IP address is automatically disassociated from the secondary private IP address.
6. When you add a second network interface, the system can no longer auto-assign a public IP address. You will not be able to connect to the instance unless you assign an Elastic IP address to the primary network interface (eth0). You can assign the Elastic IP address after you complete the Launch wizard. 
7. For each network interface, you can specify a primary private IP address, and one or more secondary private IP addresses.
8. After you have added a secondary private IP address to a network interface, you must connect to the instance and configure the secondary private IP address on the instance itself.
9. After you assign a secondary private IP address to your instance, you need to configure the operating system on your instance to recognize the secondary private IP address.
10. If you no longer require a secondary private IP address, you can unassign it from the instance or the network interface. When a secondary private IP address is unassigned from an elastic network interface, the Elastic IP address (if it exists) is also disassociated.
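The CIDR constraint above (a secondary private IP must fall within the subnet's block) can be checked with Python's ipaddress module; the subnet and addresses here are arbitrary examples:

```python
import ipaddress

# Arbitrary example subnet; a real value would come from your VPC's
# subnet configuration.
subnet = ipaddress.ip_network("10.0.1.0/24")

def in_subnet(addr: str) -> bool:
    """Return True if addr falls inside the subnet's CIDR block."""
    return ipaddress.ip_address(addr) in subnet

print(in_subnet("10.0.1.25"))   # inside the /24
print(in_subnet("10.0.2.25"))   # outside the /24
```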

EIP : (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html)
1. An Elastic IP address is a static IP address designed for dynamic cloud computing.
2. An Elastic IP address is associated with your AWS account. 
3. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.
4. An Elastic IP address is a public IP address, which is reachable from the Internet. 
5. If your instance does not have a public IP address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer.
6. To use an Elastic IP address, you first allocate one to your account, and then associate it with your instance or a network interface.
7. When you associate an Elastic IP address with an instance or its primary network interface, the instance's public IP address (if it had one) is released back into Amazon's pool of public IP addresses. You cannot reuse a public IP address.
8. You can disassociate an Elastic IP address from a resource, and reassociate it with a different resource.
9. A disassociated Elastic IP address remains allocated to your account until you explicitly release it.
10. To ensure efficient use of Elastic IP addresses, we impose a small hourly charge if an Elastic IP address is not associated with a running instance, or if it is associated with a stopped instance or an unattached network interface. While your instance is running, you are not charged for one Elastic IP address associated with the instance, but you are charged for any additional Elastic IP addresses associated with the instance.
11. An Elastic IP address is for use in a specific region only.
12. When you associate an Elastic IP address with an instance that previously had a public IP address, the public DNS hostname of the instance changes to match the Elastic IP address.
13. We resolve a public DNS hostname to the public IP address or the Elastic IP address of the instance outside the network of the instance, and to the private IP address of the instance from within the network of the instance.
14.  You cannot migrate an Elastic IP address to another region. 
15. When you associate an Elastic IP address with an instance in EC2-Classic, a default VPC, or an instance in a nondefault VPC in which you assigned a public IP to the eth0 network interface during launch, the instance's current public IP address is released back into the public IP address pool. If you disassociate an Elastic IP address from the instance, the instance is automatically assigned a new public IP address within a few minutes. However, if you have attached a second network interface to an instance in a VPC, the instance is not automatically assigned a new public IP address.
16. You can allocate an Elastic IP address using the Amazon EC2 console or the command line.
17. (VPC only) If you're associating an Elastic IP address with your instance to enable communication with the Internet, you must also ensure that your instance is in a public subnet.
18. If you intend to send email to third parties from an instance, we suggest you provision one or more Elastic IP addresses and provide them to us. AWS works with ISPs and Internet anti-spam organizations to reduce the chance that your email sent from these addresses will be flagged as spam.
19. In addition, assigning a static reverse DNS record to your Elastic IP address used to send email can help avoid having email flagged as spam by some anti-spam organizations. Note that a corresponding forward DNS record (record type A) pointing to your Elastic IP address must exist before we can create your reverse DNS record.
20. If a reverse DNS record is associated with an Elastic IP address, the Elastic IP address is locked to your account and cannot be released from your account until the record is removed.
21. To remove email sending limits, or to provide us with your Elastic IP addresses and reverse DNS records, submit a request to AWS.
22. By default, all AWS accounts are limited to 5 Elastic IP addresses per region.

ENI : (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html)
1. An elastic network interface (ENI) is a virtual network interface that you can attach to an instance in a VPC. ENIs are available only for instances running in a VPC.
2. An ENI can include the following attributes:
A primary private IP address.
One or more secondary private IP addresses.
One Elastic IP address per private IP address.
One public IP address, which can be auto-assigned to the elastic network interface for eth0 when you launch an instance. For more information, see Public IP Addresses for Network Interfaces.
One or more security groups.
A MAC address.
A source/destination check flag.
A description.
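The attribute list above can be sketched as a simple data structure; this is purely illustrative and not an actual AWS API type:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NetworkInterface:
    """Illustrative model of the ENI attributes listed above."""
    primary_private_ip: str
    secondary_private_ips: List[str] = field(default_factory=list)
    elastic_ips: List[str] = field(default_factory=list)  # one per private IP
    public_ip: Optional[str] = None   # can be auto-assigned for eth0 at launch
    security_groups: List[str] = field(default_factory=list)
    mac_address: str = ""
    source_dest_check: bool = True
    description: str = ""

# Example values are placeholders.
eni = NetworkInterface(primary_private_ip="10.0.1.10",
                       security_groups=["sg-11111111"])
print(eni.source_dest_check)
```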
3. You can create an elastic network interface, attach it to an instance, detach it from an instance, and attach it to another instance. The attributes of an elastic network interface follow it as it's attached or detached from an instance and reattached to another instance. When you move an elastic network interface from one instance to another, network traffic is redirected to the new instance.
4. Each instance in a VPC has a default elastic network interface (the primary network interface, eth0) that is assigned a private IP address from the IP address range of your VPC. You cannot detach a primary network interface from an instance. You can create and attach additional elastic network interfaces. The maximum number of elastic network interfaces that you can use varies by instance type.
5. The public IP address is assigned from Amazon's pool of public IP addresses, and is assigned to the network interface with the device index of eth0 (the primary network interface).
6. When you create a network interface, it inherits the public IP addressing attribute from the subnet. If you subsequently modify the public IP addressing attribute of the subnet, the network interface keeps the setting that was in effect when it was created. If you launch an instance and specify an existing network interface for eth0, the public IP addressing attribute is determined by the network interface.
7. You can create a management network using elastic network interfaces. In this scenario, the secondary elastic network interface on the instance handles public-facing traffic, and the primary elastic network interface handles back-end management traffic and is connected to a separate subnet in your VPC that has more restrictive access controls.
8. The public-facing interface, which may or may not be behind a load balancer, has an associated security group that allows access to the server from the Internet (for example, allow TCP ports 80 and 443 from 0.0.0.0/0, or from the load balancer). The private-facing interface has an associated security group that allows SSH access only from an allowed range of IP addresses, either within the VPC or from the Internet, from a private subnet within the VPC, or through a virtual private gateway.
9. To ensure failover capabilities, consider using a secondary private IP for incoming traffic on an elastic network interface. In the event of an instance failure, you can move the interface and/or secondary private IP address to a standby instance.
10. You can attach an elastic network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched (cold attach).
11. You can detach secondary (ethN) elastic network interfaces when the instance is running or stopped. However, you can't detach the primary (eth0) interface.
12. You can attach an elastic network interface in one subnet to an instance in another subnet in the same VPC; however, both the elastic network interface and the instance must reside in the same Availability Zone.
13. When launching an instance from the CLI or API, you can specify the elastic network interfaces to attach to the instance for both the primary (eth0) and additional elastic network interfaces.
14. Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IP addresses, and route tables on the operating system of the instance.
15. Attaching another elastic network interface to an instance is not a method to increase or double the network bandwidth to or from the dual-homed instance.
16. If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If possible, use a secondary private IP address on the primary network interface instead.
17. You must first detach an elastic network interface from an instance before you can delete it. Deleting an elastic network interface releases all attributes associated with the interface and releases any private IP addresses or Elastic IP addresses to be used by another instance.
18. If the public IP address on your instance is released, it will not receive a new one if there is more than one elastic network interface attached to the instance.
19. You can change the security groups that are associated with an elastic network interface. When you create the security group, be sure to specify the same VPC as the subnet for the interface.
20. If you have an Elastic IP address, you can associate it with one of the private IP addresses for the elastic network interface. You can associate one Elastic IP address with each private IP address.
21. If the elastic network interface has an Elastic IP address associated with it, you can disassociate the address, and then either associate it with another elastic network interface or release it back to the address pool. Note that this is the only way to associate an Elastic IP address with an instance in a different subnet or VPC using an elastic network interface, as elastic network interfaces are specific to a particular subnet.
22. You can set the termination behavior for an elastic network interface attached to an instance so that it is automatically deleted when you delete the instance to which it's attached.
23. By default, elastic network interfaces that are automatically created and attached to instances using the console are set to terminate when the instance terminates. However, network interfaces created using the command line interface aren't set to terminate when the instance terminates.
24. Tags are metadata that you can add to an elastic network interface. Tags are private and are only visible to your account. Each tag consists of a key and an optional value.

Placement Groups : 
1. A placement group is a logical grouping of instances within a single Availability Zone. 
2. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both.
3. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
4. First, you create a placement group and then you launch multiple instances into the placement group. We recommend that you launch the number of instances that you need in the placement group in a single launch request and that you use the same instance type for all instances in the placement group. If you try to add more instances to the placement group later, or if you try to launch more than one instance type in the placement group, you increase your chances of getting an insufficient capacity error.
5. If you stop an instance in a placement group and then start it again, it still runs in the placement group. However, the start fails if there isn't enough capacity for the instance.
6. If you receive a capacity error when launching an instance in a placement group, stop and restart the instances in the placement group, and then try the launch again.
7. A placement group can't span multiple Availability Zones.
8. The name you specify for a placement group must be unique within your AWS account.
9. The maximum network throughput speed of traffic between two instances in a placement group is limited by the slower of the two instances.
10. For applications with high-throughput requirements, choose an instance type with 10 Gbps or 20 Gbps network connectivity. 
11. Although launching multiple instance types into a placement group is possible, this reduces the likelihood that the required capacity will be available for your launch to succeed. We recommend using the same instance type for all instances in a placement group.
12. You can't merge placement groups. Instead, you must terminate the instances in one placement group, and then relaunch those instances into the other placement group.
13. You can't move an existing instance into a placement group. You can create an AMI from your existing instance, and then launch a new instance from the AMI into a placement group.
14. Reserved Instances provide a capacity reservation for EC2 instances in an Availability Zone. The capacity reservation can be used by instances in a placement group that are assigned to the same Availability Zone. However, it is not possible to explicitly reserve capacity for a placement group.
15. To ensure that network traffic remains within the placement group, members of the placement group must address each other via their private IP addresses. If members address each other using their public IP addresses, throughput drops to 5 Gbps or less.
16. Network traffic to and from resources outside the placement group is limited to 5 Gbps.
17. You can delete a placement group if you need to replace it or no longer need a placement group. Before you can delete your placement group, you must terminate all instances that you launched into the placement group.

MTU : http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html
1. The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection.
2. The larger the MTU of a connection, the more data that can be passed in a single packet. Ethernet packets consist of the frame, or the actual data you are sending, and the network overhead information that surrounds it.
3. Ethernet frames can come in different formats, and the most common format is the standard Ethernet v2 frame format. It supports 1500 MTU, which is the largest Ethernet packet size supported over most of the Internet. The maximum supported MTU for an instance depends on its instance type. All Amazon EC2 instance types support 1500 MTU, and many current instance sizes support 9001 MTU, or jumbo frames.
4. Modifying your instance's security group to allow path MTU discovery does not guarantee that jumbo frames will not be dropped by some routers. An Internet gateway in your VPC will forward packets up to 1500 bytes only. 1500 MTU packets are recommended for Internet traffic.
5. Jumbo frames allow more than 1500 bytes of data by increasing the payload size per packet, and thus increasing the percentage of the packet that is not packet overhead. Fewer packets are needed to send the same amount of usable data. However, outside of a given AWS region (EC2-Classic), a single VPC, or a VPC peering connection, you will experience a maximum path of 1500 MTU. VPN connections and traffic sent over an Internet gateway are limited to 1500 MTU. If packets are over 1500 bytes, they are fragmented, or they are dropped if the Don't Fragment flag is set in the IP header.
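A rough sketch of why jumbo frames need fewer packets for the same payload; the 40-byte IP+TCP header overhead is a simplifying assumption:

```python
import math

def packets_needed(payload_bytes: int, mtu: int, ip_tcp_overhead: int = 40) -> int:
    """Packets required to move payload_bytes when each packet carries
    mtu bytes minus an assumed 40 bytes of IP+TCP headers (simplified model)."""
    per_packet = mtu - ip_tcp_overhead
    return math.ceil(payload_bytes / per_packet)

data = 10 * 1024 * 1024  # 10 MiB of payload
print(packets_needed(data, 1500))  # standard Ethernet v2 frames
print(packets_needed(data, 9001))  # jumbo frames: far fewer packets
```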
6. Enhanced networking uses single root I/O virtualization (SR-IOV) to provide high-performance networking capabilities on supported instance types. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization when compared to traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.

EBS Storage : (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html)

1. Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a database on an instance.
2. An EBS volume behaves like a raw, unformatted, external block device that you can attach to a single instance. The volume persists independently from the running life of an instance. After an EBS volume is attached to an instance, you can use it like any other physical hard drive.
3. Multiple volumes can be attached to an instance. 
4. You can also detach an EBS volume from one instance and attach it to another instance. EBS volumes can also be created as encrypted volumes using the Amazon EBS encryption feature.
5. To keep a backup copy of your data, you can create a snapshot of an EBS volume, which is stored in Amazon S3. You can create an EBS volume from a snapshot, and attach it to another instance.
4. Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone. EBS volumes that are attached to an EC2 instance are exposed as storage volumes that persist independently from the life of the instance. With Amazon EBS, you pay only for what you use. 
5. Amazon EBS is recommended when data must be quickly accessible and requires long-term persistence. EBS volumes are particularly well-suited for use as the primary storage for file systems, databases, or for any applications that require fine granular updates and access to raw, unformatted, block-level storage. 
6.  Amazon EBS is well suited to both database-style applications that rely on random reads and writes, and to throughput-intensive applications that perform long, continuous reads and writes.
7. For simplified data encryption, you can launch your EBS volumes as encrypted volumes. Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, manage, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted. The encryption occurs on the servers that host EC2 instances, providing encryption of data in transit from EC2 instances to EBS storage.
8. Amazon EBS encryption uses AWS Key Management Service (AWS KMS) master keys when creating encrypted volumes and any snapshots created from your encrypted volumes. 
9. The first time you create an encrypted EBS volume in a region, a default master key is created for you automatically. This key is used for Amazon EBS encryption unless you select a Customer Master Key (CMK) that you created separately using the AWS Key Management Service. Creating your own CMK gives you more flexibility, including the ability to create, rotate, disable, define access controls, and audit the encryption keys used to protect your data. 
10. You can attach multiple volumes to the same instance within the limits specified by your AWS account. Your account has a limit on the number of EBS volumes that you can use, and the total storage available to you. 
11. You can create EBS General Purpose SSD (gp2), Provisioned IOPS SSD (io1), Throughput Optimized HDD (st1), and Cold HDD (sc1) volumes up to 16 TiB in size. You can mount these volumes as devices on your Amazon EC2 instances. You can mount multiple volumes on the same instance, but each volume can be attached to only one instance at a time. 
12. With General Purpose SSD (gp2) volumes, you can expect base performance of 3 IOPS/GiB, with the ability to burst to 3,000 IOPS for extended periods of time. Gp2 volumes are ideal for a broad range of use cases such as boot volumes, small and medium-size databases, and development and test environments. Gp2 volumes support up to 10,000 IOPS and 160 MB/s of throughput. 
13. With Provisioned IOPS SSD (io1) volumes, you can provision a specific level of I/O performance. Io1 volumes support up to 20,000 IOPS and 320 MB/s of throughput. This allows you to predictably scale to tens of thousands of IOPS per EC2 instance.
14. Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With throughput of up to 500 MiB/s, this volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. 
15. Cold HDD (sc1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With throughput of up to 250 MiB/s, sc1 is a good fit for large, sequential, cold-data workloads. If you require infrequent access to your data and are looking to save costs, sc1 provides inexpensive block storage. 
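Using the gp2 figures above (baseline of 3 IOPS/GiB, capped at 10,000 IOPS), a baseline-IOPS sketch; burst behavior and any minimum floor are not modeled here:

```python
def gp2_baseline_iops(size_gib: int, per_gib: int = 3, ceiling: int = 10_000) -> int:
    """Baseline IOPS for a gp2 volume per the 3 IOPS/GiB figure,
    capped at the 10,000 IOPS maximum (burst credits not modeled)."""
    return min(size_gib * per_gib, ceiling)

print(gp2_baseline_iops(100))    # 300
print(gp2_baseline_iops(1000))   # 3000: where baseline meets the 3,000 IOPS burst level
print(gp2_baseline_iops(5000))   # capped at 10000
```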
16. You can create point-in-time snapshots of EBS volumes, which are persisted to Amazon S3. Snapshots protect data for long-term durability, and they can be used as the starting point for new EBS volumes. The same snapshot can be used to instantiate as many volumes as you wish. These snapshots can be copied across AWS regions.
17. EBS volumes are created in a specific Availability Zone, and can then be attached to any instances in that same Availability Zone. To make a volume available outside of the Availability Zone, you can create a snapshot and restore that snapshot to a new volume anywhere in that region. You can copy snapshots to other regions and then restore them to new volumes there, making it easier to leverage multiple AWS regions for geographical expansion, data center migration, and disaster recovery.
18. Performance metrics, such as bandwidth, throughput, latency, and average queue length, are available through the AWS Management Console. These metrics, provided by Amazon CloudWatch, allow you to monitor the performance of your volumes to make sure that you are providing enough performance for your applications without paying for resources you don't need.
19. An Amazon EBS volume is a durable, block-level storage device that you can attach to a single EC2 instance. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application, or for throughput-intensive applications that perform continuous disk scans
20. EBS volumes provide several benefits that are not supported by instance store volumes.
21. When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component. 
22. The instance can format the EBS volume with a file system, such as ext3, and then install applications.
23. An EBS volume can be attached to only one instance at a time within the same Availability Zone. However, multiple volumes can be attached to a single instance. If you attach multiple volumes to a device that you have named, you can stripe data across the volumes for increased I/O and throughput performance.
24. By default, EBS volumes that are attached to a running instance automatically detach from the instance with their data intact when that instance is terminated.
25. If you are using an EBS-backed instance, you can stop and restart that instance without affecting the data stored in the attached volume. The volume remains attached throughout the stop-start cycle. This enables you to process and store the data on your volume indefinitely, only using the processing and storage resources when required.
26. The data persists on the volume until the volume is deleted explicitly. The physical block storage used by deleted EBS volumes is overwritten with zeroes before it is allocated to another account. 
27. By default, EBS volumes that are created and attached to an instance at launch are deleted when that instance is terminated. You can modify this behavior by changing the value of the flag DeleteOnTermination to false when you launch the instance. This modified value causes the volume to persist even after the instance is terminated, and enables you to attach the volume to another instance.
28. Amazon EBS encryption uses 256-bit Advanced Encryption Standard algorithms (AES-256) and an Amazon-managed key infrastructure. The encryption occurs on the server that hosts the EC2 instance, providing encryption of data-in-transit from the EC2 instance to Amazon EBS storage.
29. Amazon EBS provides the ability to create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon S3, where it is stored redundantly in multiple Availability Zones. 
30. The volume does not need be attached to a running instance in order to take a snapshot. As you continue to write data to a volume, you can periodically create a snapshot of the volume to use as a baseline for new volumes. These snapshots can be used to create multiple new EBS volumes, expand the size of a volume, or move volumes across Availability Zones. Snapshots of encrypted EBS volumes are automatically encrypted.
31.  The snapshots can be shared with specific AWS accounts or made public. When you create snapshots, you incur charges in Amazon S3 based on the volume's total size. For a successive snapshot of the volume, you are only charged for any additional data beyond the volume's original size.
Snapshots are incremental backups, meaning that only the blocks on the volume that have changed after your most recent snapshot are saved.
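The incremental behavior described above can be sketched with a toy model: each snapshot stores only the blocks that changed since the previous one. The names and structures here are illustrative, not part of any AWS API.

```python
# Toy model of incremental EBS snapshots: each snapshot saves only the
# blocks that changed since the most recent snapshot (illustrative only).

def take_snapshot(volume_blocks, previous_full_view):
    """Return (changed_blocks, new_full_view) for an incremental snapshot."""
    changed = {addr: data
               for addr, data in volume_blocks.items()
               if previous_full_view.get(addr) != data}
    return changed, dict(volume_blocks)

volume = {0: "aaa", 1: "bbb", 2: "ccc"}
snap1_delta, view = take_snapshot(volume, {})    # first snapshot: full copy
assert len(snap1_delta) == 3

volume[1] = "BBB"                                # modify one block
snap2_delta, view = take_snapshot(volume, view)  # second: only one block saved
assert snap2_delta == {1: "BBB"}
```

This is also why a successive snapshot is billed only for data beyond what earlier snapshots already captured.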
32. SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS
HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS
33. General Purpose SSD (gp2) : Recommended for most workloads : General purpose SSD volume that balances price and performance for a wide variety of transactional workloads
System boot volumes
Virtual desktops
Low-latency interactive apps
Development and test environments
34. Provisioned IOPS SSD (io1) : Highest-performance SSD volume designed for mission-critical applications
Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume
Large database workloads, such as:
MongoDB,Cassandra,Microsoft SQL Server,MySQL,PostgreSQL,Oracle
35. There are several factors that can affect the performance of EBS volumes, such as instance configuration, I/O characteristics, and workload demand.
36. General Purpose SSD (gp2) : Each volume receives an initial I/O credit balance of 5.4 million I/O credits, which is enough to sustain the maximum burst performance of 3,000 IOPS for 30 minutes. This initial credit balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications. Volumes earn I/O credits at the baseline performance rate of 3 IOPS per GiB of volume size.
37. The burst duration of a volume is dependent on the size of the volume, the burst IOPS required, and the credit balance when the burst begins.
                           (Credit balance)
Burst duration  =  ------------------------------------
                   (Burst IOPS) - 3(Volume size in GiB)
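The burst-duration formula above can be checked with simple arithmetic. This sketch uses only the constants stated in the text (3 IOPS/GiB baseline, 3,000 IOPS burst, 5.4 million initial credits):

```python
# Sketch of the gp2 burst-credit arithmetic described above.
BASELINE_IOPS_PER_GIB = 3
BURST_IOPS = 3000
INITIAL_CREDITS = 5_400_000

def baseline_iops(size_gib):
    """Baseline performance earned continuously by a gp2 volume."""
    return BASELINE_IOPS_PER_GIB * size_gib

def burst_duration_seconds(size_gib, credits=INITIAL_CREDITS, burst_iops=BURST_IOPS):
    """Seconds a volume can sustain `burst_iops` before credits run out.

    Credits drain at (burst rate - baseline earn rate) per second,
    matching the formula above.
    """
    drain_rate = burst_iops - baseline_iops(size_gib)
    if drain_rate <= 0:
        return float("inf")  # baseline meets or exceeds the burst rate
    return credits / drain_rate

# A 100 GiB volume earns 300 IOPS baseline and can burst to 3,000 IOPS
# for 5,400,000 / (3,000 - 300) = 2,000 seconds (about 33 minutes).
print(baseline_iops(100))           # 300
print(burst_duration_seconds(100))  # 2000.0
```

Note that at 1,000 GiB the baseline (3,000 IOPS) equals the burst ceiling, so credits never drain.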
38. If you are creating a volume for a high-performance storage scenario, you should make sure to use a Provisioned IOPS SSD (io1) volume and attach it to an instance with enough bandwidth to support your application, such as an EBS-optimized instance or an instance with 10 Gigabit network connectivity. The same advice holds for Throughput Optimized HDD (st1) and Cold HDD (sc1) volumes
39. New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block. This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed. For most applications, amortizing this cost over the lifetime of the volume is acceptable. Performance is restored after the data is accessed once. 
40. If you are unable to create an io1 volume (or launch an instance with an io1 volume in its block device mapping) in one of these regions, try a different Availability Zone in the region. You can verify that an Availability Zone supports io1 volumes by creating a 4 GiB io1 volume in that zone.
41. New volumes created from existing EBS snapshots load lazily in the background. This means that after a volume is created from a snapshot, there is no need to wait for all of the data to transfer from Amazon S3 to your EBS volume before your attached instance can start accessing the volume and all its data. If your instance accesses data that hasn't yet been loaded, the volume immediately downloads the requested data from Amazon S3, and continues loading the rest of the data in the background.
42. Because of security constraints, you cannot directly restore an EBS volume from a shared encrypted snapshot that you do not own. You must first create a copy of the snapshot, which you will own. You can then restore a volume from that copy. 
43. For most applications, amortizing the initialization cost over the lifetime of the volume is acceptable. If you need to ensure that your restored volume always functions at peak capacity in production, you can force the immediate initialization of the entire volume using dd or fio.     
44. If a volume is encrypted, it can only be attached to an instance that supports Amazon EBS encryption. 
45. Amazon Web Services (AWS) automatically provides data, such as Amazon CloudWatch metrics and volume status checks, that you can use to monitor your Amazon Elastic Block Store (Amazon EBS) volumes.
46. Volume status checks are automated tests that run every 5 minutes and return a pass or fail status. If all checks pass, the status of the volume is ok. If a check fails, the status of the volume is impaired. If the status is insufficient-data, the checks may still be in progress on the volume. 
47. Volume status is based on the volume status checks, and does not reflect the volume state. Therefore, volume status does not indicate volumes in the error state (for example, when a volume is incapable of accepting I/O.)
48. If the consistency of a particular volume is not a concern for you, and you'd prefer that the volume be made available immediately if it's impaired, you can override the default behavior by configuring the volume to automatically enable I/O. If you enable the AutoEnableIO volume attribute, the volume status check continues to pass. In addition, you'll see an event that lets you know that the volume was determined to be potentially inconsistent, but that its I/O was automatically enabled. This enables you to check the volume's consistency or replace it at a later time.
49. While initializing io1 volumes that were restored from snapshots, the performance of the volume may drop below 50 percent of its expected level, which causes the volume to display a warning state in the I/O Performance status check. This is expected, and you can ignore the warning state on io1 volumes while you are initializing them.
50. You can detach an Amazon EBS volume from an instance explicitly or by terminating the instance. However, if the instance is running, you must first unmount the volume from the instance.
If an EBS volume is the root device of an instance, you must stop the instance before you can detach the volume.
When a volume with an AWS Marketplace product code is detached from an instance, the product code is no longer associated with the instance.     
51. Note that you can reattach a volume that you detached (without unmounting it), but it might not get the same mount point and the data on the volume might be out of sync if there were writes to the volume in progress when it was detached.
52. To guard against the possibility of data loss, take a snapshot of your volume before attempting to unmount it. Forced detachment of a stuck volume can cause damage to the file system or the data it contains or an inability to attach a new volume using the same device name, unless you reboot the instance.
53. If your volume stays in the detaching state, you can force the detachment by choosing Force Detach. Use this option only as a last resort to detach a volume from a failed instance, or if you are detaching a volume with the intention of deleting it. The instance doesn't get an opportunity to flush file system caches or file system metadata. If you use this option, you must perform file system check and repair procedures.
54. You can increase the storage space of an existing EBS volume without losing the data on the volume. To do this, you migrate your data to a larger volume and then extend the file system on the volume to recognize the newly-available space. After you verify that your new volume is working properly, you can delete the old volume.
55. If your storage needs demand a larger EBS volume than AWS provides, you may want to use RAID 0 to "stripe" a single logical volume across multiple physical volumes.
56. You must stop your instance to expand the storage space.
57. Some Amazon EC2 root volumes and volumes that are restored from snapshots contain a partition that actually holds the file system and the data. If you think of a volume as a container, a partition is another container inside the volume, and the data resides on the partition. Growing the volume size does not grow the partition; to take advantage of a larger volume, the partition must be expanded to the new size.
58. If the partition you want to expand is not the root partition, then you can simply unmount it and resize the partition from the instance itself. If the partition you need to resize is the root partition for an instance, the process becomes more complicated because you cannot unmount the root partition of a running instance. 

EBS Snapshots :
1. You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs. When you delete a snapshot, only the data unique to that snapshot is removed. 
2. EBS snapshots broadly support EBS encryption:
Snapshots of encrypted volumes are automatically encrypted.
Volumes that are created from encrypted snapshots are automatically encrypted.
When you copy an unencrypted snapshot that you own, you can encrypt it during the copy process.
When you copy an encrypted snapshot that you own, you can re-encrypt it with a different key during the copy process.
3. Snapshots are constrained to the region in which they are created. After you have created a snapshot of an EBS volume, you can use it to create new volumes in the same region.
4. You can also copy snapshots across regions, making it easier to leverage multiple regions for geographical expansion, data center migration, and disaster recovery.
5. Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.
6. Although you can take a snapshot of a volume while a previous snapshot of that volume is in the pending status, having multiple pending snapshots of a volume may result in reduced volume performance until the snapshots complete.
There is a limit of 5 pending snapshots for a single gp2, io1, or Magnetic volume, and 1 pending snapshot for a single st1 or sc1 volume. If you receive a ConcurrentSnapshotLimitExceeded error while trying to create multiple concurrent snapshots of the same volume, wait for one or more of the pending snapshots to complete before creating another snapshot of that volume.
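The "wait and retry" advice above is a natural fit for exponential backoff. This is a hypothetical sketch: the exception class and the injected `create_snapshot` callable are stand-ins, not the real AWS SDK API.

```python
import time

class ConcurrentSnapshotLimitExceeded(Exception):
    """Stand-in for the error named above (not the real AWS exception class)."""

def create_snapshot_with_retry(create_snapshot, retries=5, base_delay=1.0):
    """Call create_snapshot(), backing off while pending snapshots complete."""
    for attempt in range(retries):
        try:
            return create_snapshot()
        except ConcurrentSnapshotLimitExceeded:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    raise RuntimeError("pending snapshots did not complete in time")

# Demo with a stub that fails twice before succeeding.
attempts = {"n": 0}
def stub_create_snapshot():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConcurrentSnapshotLimitExceeded()
    return "snap-example"

snapshot_id = create_snapshot_with_retry(stub_create_snapshot, base_delay=0)
```

In real code you would wrap the SDK's snapshot call and catch its error response instead of the stub exception.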
7. You can share an encrypted snapshot only with specific AWS accounts. For others to use your shared, encrypted snapshot, you must also share the CMK that was used to encrypt it. Users with access to your encrypted snapshot must create their own personal copy of it and then use that copy to restore the volume. Your copy of a shared, encrypted snapshot can also be re-encrypted with a different key. 
8. When a snapshot is created from a volume with an AWS Marketplace product code, the product code is propagated to the snapshot.
9. You can take a snapshot of an attached volume that is in use. However, snapshots only capture data that has been written to your Amazon EBS volume at the time the snapshot command is issued. This might exclude any data that has been cached by any applications or the operating system. If you can pause any file writes to the volume long enough to take a snapshot, your snapshot should be complete. However, if you can't pause all file writes to the volume, you should unmount the volume from within the instance, issue the snapshot command, and then remount the volume to ensure a consistent and complete snapshot. You can remount and use your volume while the snapshot status is pending.
10. To create a snapshot for Amazon EBS volumes that serve as root devices, you should stop the instance before taking the snapshot.
11. When you delete a snapshot, only the data exclusive to that snapshot is removed. Deleting previous snapshots of a volume does not affect your ability to restore volumes from later snapshots of that volume.
12. If you make periodic snapshots of a volume, the snapshots are incremental so that only the blocks on the device that have changed since your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.
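Extending the toy incremental model, deletion can be sketched as folding still-needed blocks from the deleted snapshot into the next one, which is why restores from later snapshots are unaffected. This is an illustrative model of the behavior described above, not AWS's actual implementation.

```python
# Toy model of deleting an incremental snapshot: blocks that later snapshots
# still rely on are folded forward, so only data unique to the deleted
# snapshot is actually removed (illustrative only).

def restore(deltas):
    """Rebuild a volume by applying snapshot deltas oldest-to-newest."""
    volume = {}
    for delta in deltas:
        volume.update(delta)
    return volume

def delete_snapshot(deltas, index):
    """Remove snapshot `index`, preserving blocks later snapshots rely on."""
    deltas = list(deltas)
    doomed = deltas.pop(index)
    if index < len(deltas):           # fold still-needed blocks forward
        merged = dict(doomed)
        merged.update(deltas[index])  # the newer snapshot's blocks win
        deltas[index] = merged
    return deltas

snaps = [{0: "a", 1: "b"}, {1: "B"}, {2: "c"}]
after = delete_snapshot(snaps, 0)
assert restore(after) == restore(snaps)  # latest restore is unchanged
```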
13. Note that you can't delete a snapshot of the root device of an EBS volume used by a registered AMI. You must first deregister the AMI before you can delete the snapshot.
14. With Amazon EBS, you can create point-in-time snapshots of volumes which we store for you in Amazon Simple Storage Service (Amazon S3). After you've created a snapshot and it has finished copying to Amazon S3 (when the snapshot status is completed), you can copy it from one AWS region to another, or within the same region. Amazon S3 server-side encryption (256-bit AES) protects a snapshot's data-in-transit during copying. The snapshot copy receives a snapshot ID different from the original snapshot's ID.
15. Disaster recovery: Back up your data and logs across different geographical locations at regular intervals. In case of disaster, you can restore your applications using point-in-time backups stored in the secondary region. This minimizes data loss and recovery time.
16. Data retention and auditing requirements: Copy your encrypted EBS snapshots from one AWS account to another to preserve data logs or other files for auditing or data retention. Using a different account helps prevent accidental snapshot deletions, and protects you if your main AWS account is compromised.
17. You can also copy AWS Marketplace, VM Import/Export, and AWS Storage Gateway snapshots, but you must verify that the snapshot is supported in the destination region.
18. If you copy an unencrypted snapshot from the US East (N. Virginia) region to the US West (Oregon) region, the first snapshot copy of the volume is a full copy and subsequent snapshot copies of the same volume transferred between the same regions are incremental.
19. Snapshot copies within a single region do not copy any data at all as long as the following conditions apply:
The encryption status of the snapshot copy does not change during the copy operation
For encrypted snapshots, both the source snapshot and the copy are encrypted with the default EBS CMK
20. When copying an encrypted snapshot that was shared with you, you should consider re-encrypting the snapshot during the copy process with a different key that you control. This protects you if the original key is compromised, or if the owner revokes the key for any reason, which could cause you to lose access to the volume you created.
21. You can share your unencrypted snapshots with your co-workers or others in the AWS community by modifying the permissions of the snapshot. Users that you have authorized can quickly use your unencrypted shared snapshots as the basis for creating their own EBS volumes. If you choose, you can also make your unencrypted snapshots available publicly to all AWS users.
22. You can share an encrypted snapshot with specific AWS accounts, though you cannot make it public. For others to use the snapshot, you must also share the custom CMK used to encrypt it. Cross-account permissions may be applied to a custom key either when it is created or at a later time. Users with access can copy your snapshot and create their own EBS volumes based on your snapshot while your original snapshot remains unaffected.
23. If your snapshot uses the longer resource ID format, you can only share it with another account that also supports longer IDs.
24. AWS prevents you from sharing snapshots that were encrypted with your default CMK. Snapshots that you intend to share must instead be encrypted with a custom CMK.
25. If your snapshot is encrypted, you must ensure that the following are true:
The snapshot is encrypted with a custom CMK, not your default CMK. If you attempt to change the permissions of a snapshot encrypted with your default CMK, the console will display an error message.
You are sharing the custom CMK with the accounts that have access to your snapshot.

EBS Optimization : 
1. An Amazon EBS–optimized instance uses an optimized configuration stack and provides additional, dedicated capacity for Amazon EBS I/O. This optimization provides the best performance for your EBS volumes by minimizing contention between Amazon EBS I/O and other traffic from your instance.
2. EBS–optimized instances deliver dedicated bandwidth to Amazon EBS, with options between 500 Mbps and 10,000 Mbps, depending on the instance type you use.
3. When attached to an EBS–optimized instance, General Purpose SSD (gp2) volumes are designed to deliver within 10% of their baseline and burst performance 99% of the time in a given year, and Provisioned IOPS SSD (io1) volumes are designed to deliver within 10% of their provisioned performance 99.9% of the time in a given year
4. Both Throughput Optimized HDD (st1) and Cold HDD (sc1) guarantee performance consistency of 90% of burst throughput 99% of the time in a given year. 
5. When you enable EBS optimization for an instance that is not EBS–optimized by default, you pay an additional low, hourly fee for the dedicated capacity. 
6. Choose an EBS–optimized instance that provides more dedicated EBS throughput than your application needs; otherwise, the connection between Amazon EBS and Amazon EC2 can become a performance bottleneck.
7. You can enable EBS optimization for the other instance types that support EBS optimization when you launch the instances, or enable EBS optimization after the instances are running.
8. Note that some instances with 10-gigabit network interfaces, such as i2.8xlarge and r3.8xlarge, do not offer EBS optimization, and therefore do not have dedicated EBS bandwidth available.
9. On these instances, network traffic and Amazon EBS traffic is shared on the same 10-gigabit network interface. Some other 10-gigabit network instances, such as c4.8xlarge and d2.8xlarge offer dedicated EBS bandwidth in addition to a 10-gigabit interface which is used exclusively for network traffic.

EBS Encryption : 
1. Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
Data at rest inside the volume
All data moving between the volume and the instance
All snapshots created from the volume
2. The encryption occurs on the servers that host EC2 instances, providing encryption of data-in-transit from EC2 instances to EBS storage.
3. This feature is supported with all EBS volume types (General Purpose SSD [gp2], Provisioned IOPS SSD [io1], Throughput Optimized HDD [st1], Cold HDD [sc1], and Magnetic [standard]), and you can expect the same IOPS performance on encrypted volumes as you would with unencrypted volumes, with a minimal effect on latency.
4. You can access encrypted volumes the same way that you access unencrypted volumes; encryption and decryption are handled transparently and they require no additional action from you, your EC2 instance, or your application.
5. You cannot change the CMK that is associated with an existing snapshot or encrypted volume. However, you can associate a different CMK during a snapshot copy operation (including encrypting a copy of an unencrypted snapshot) and the resulting copied snapshot will use the new CMK.
6. Each AWS account has a unique master key that is stored completely separate from your data, on a system that is surrounded with strong physical and logical security controls. Each encrypted volume (and its subsequent snapshots) is encrypted with a unique volume encryption key that is then encrypted with a region-specific secure master key. The volume encryption keys are used in memory on the server that hosts your EC2 instance; they are never stored on disk in plain text.
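The two-level key scheme above (a per-volume key wrapped by a master key) can be illustrated with a toy cipher. This is NOT real AES-256-based EBS encryption; the XOR keystream is a deliberately simplified stand-in to show the structure only.

```python
import hashlib, secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream (not AES)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

master_key = secrets.token_bytes(32)   # held separately by the key service
volume_key = secrets.token_bytes(32)   # unique per volume, used only in memory
wrapped_key = keystream_xor(master_key, volume_key)  # stored with the volume

ciphertext = keystream_xor(volume_key, b"block data")
# To read: unwrap the volume key with the master key, then decrypt the block.
unwrapped = keystream_xor(master_key, wrapped_key)
assert keystream_xor(unwrapped, ciphertext) == b"block data"
```

The point of the structure: the plaintext volume key never needs to be stored on disk, only its wrapped form.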
7. There is no direct way to encrypt an existing unencrypted volume, or to remove encryption from an encrypted volume. However, you can migrate data between encrypted and unencrypted volumes. You can also apply a new encryption status while copying a snapshot:
While copying an unencrypted snapshot of an unencrypted volume, you can encrypt the copy. Volumes restored from this encrypted copy will also be encrypted.
While copying an encrypted snapshot of an encrypted volume, you can re-encrypt the copy using a different CMK. Volumes restored from the encrypted copy will only be accessible using the newly applied CMK.
8. When you have access to both an encrypted and unencrypted volume, you can freely transfer data between them. EC2 carries out the encryption or decryption operations transparently.
9. The ability to encrypt a snapshot during copying also allows you to re-encrypt an already-encrypted snapshot that you own. In this operation, the plaintext of your snapshot will be encrypted using a new CMK that you provide. Volumes restored from the resulting copy will only be accessible using the new CMK.

EBS Performance : 
1. On instances without support for EBS-optimized throughput, network traffic can contend with traffic between your instance and your EBS volumes; on EBS-optimized instances, the two types of traffic are kept separate. Some EBS-optimized instance configurations incur an extra cost (such as C3, R3, and M3), while others are always EBS-optimized at no extra cost (such as M4, C4, and D2). 
2. There is a relationship between the maximum performance of your EBS volumes, the size and number of I/O operations, and the time it takes for each action to complete. Each of these factors (performance, I/O, and latency) affects the others, and different applications are more sensitive to one factor or another. 
3. Some instance types can drive more I/O throughput than what you can provision for a single EBS volume. You can join multiple gp2, io1, st1, or sc1 volumes together in a RAID 0 configuration to use the available bandwidth for these instances. 

EC2 Configuration
1.  In order to get the most performance out of your EBS volumes, you should attach them to an instance with enough bandwidth to support your volumes, such as an EBS-optimized instance or an instance with 10 Gigabit network connectivity. This is especially important when you stripe multiple volumes together in a RAID configuration.
2. Any performance-sensitive workloads that require minimal variability and dedicated Amazon EC2 to Amazon EBS traffic, such as production databases or business applications, should use volumes that are attached to an EBS-optimized instance or an instance with 10 Gigabit network connectivity. EC2 instances that do not meet this criteria offer no guarantee of network resources. The only way to ensure sustained reliable network bandwidth between your EC2 instance and your EBS volumes is to launch the EC2 instance as EBS-optimized or choose an instance type with 10 Gigabit network connectivity. 
3. Launching an instance that is EBS-optimized provides you with a dedicated connection between your EC2 instance and your EBS volume. However, it is still possible to provision EBS volumes that exceed the available bandwidth for certain instance types, especially when multiple volumes are striped in a RAID configuration. 
4. Instance types with 10 Gigabit network connectivity support up to 800 MB/s of throughput and 48,000 16K IOPS for unencrypted Amazon EBS volumes and up to 25,000 16K IOPS for encrypted Amazon EBS volumes. Because the maximum IOPS per volume is 20,000 for io1 volumes and 10,000 for gp2 volumes, you can use several EBS volumes simultaneously to reach the level of I/O performance available to these instance types.
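The volume counts implied above are simple ceiling division against the 48,000 IOPS instance ceiling:

```python
import math

# Arithmetic from the paragraph above: how many volumes it takes to reach
# an instance's 48,000 IOPS ceiling, given the per-volume maximums stated.

INSTANCE_MAX_IOPS = 48_000
VOLUME_MAX_IOPS = {"io1": 20_000, "gp2": 10_000}

def volumes_needed(volume_type):
    """Minimum number of volumes of a type to saturate the instance ceiling."""
    return math.ceil(INSTANCE_MAX_IOPS / VOLUME_MAX_IOPS[volume_type])

print(volumes_needed("io1"))  # 3
print(volumes_needed("gp2"))  # 5
```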
5. Queue Length on SSD-backed Volumes : Increasing the queue length is beneficial until you achieve the provisioned IOPS, throughput or optimal system queue length value, which is currently set to 32. For example, a volume with 1,000 provisioned IOPS should target a queue length of 2. You should experiment with tuning these values up or down to see what performs best for your application.
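The 1,000-IOPS example above implies a target of roughly one queue entry per 500 provisioned IOPS, capped at the optimal system queue length of 32. The per-500 ratio is inferred from that single example, so treat this as a starting point for tuning rather than a fixed rule:

```python
# Heuristic derived from the example above: ~1 queue entry per 500
# provisioned IOPS, capped at the optimal system queue length of 32.

def target_queue_length(provisioned_iops, cap=32):
    """Starting-point queue depth for an SSD-backed volume (tune from here)."""
    return min(cap, max(1, provisioned_iops // 500))

print(target_queue_length(1000))   # 2
print(target_queue_length(20000))  # 32 (capped)
```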

I/O Characteristics
1. On a given volume configuration, certain I/O characteristics drive the performance behavior for your EBS volumes. SSD-backed volumes—General Purpose SSD (gp2) and Provisioned IOPS SSD (io1)—deliver consistent performance whether an I/O operation is random or sequential. 
2. HDD-backed volumes—Throughput Optimized HDD (st1) and Cold HDD (sc1)—deliver optimal performance only when I/O operations are large and sequential.
3. To understand how SSD and HDD volumes will perform in your application, it is important to know the connection between demand on the volume, the quantity of IOPS available to it, the time it takes for an I/O operation to complete, and the volume's throughput limits.
4. IOPS are a unit of measure representing input/output operations per second. The operations are measured in KiB, and the underlying drive technology determines the maximum amount of data that a volume type counts as a single I/O. I/O size is capped at 256 KiB for SSD volumes and 1,024 KiB for HDD volumes because SSD volumes handle small or random I/O much more efficiently than HDD volumes.
5. When small I/O operations are physically contiguous, Amazon EBS attempts to merge them into a single I/O up to the maximum size. For example, for SSD volumes, a single 1,024 KiB I/O operation counts as 4 operations (1,024÷256=4), while 8 contiguous I/O operations at 32 KiB each count as 1 operation (8×32=256). However, 8 random I/O operations at 32 KiB each count as 8 operations. Each I/O operation under 32 KiB counts as 1 operation.
6. Similarly, for HDD-backed volumes, both a single 1,024 KiB I/O operation and 8 sequential 128 KiB operations would count as one operation. However, 8 random 128 KiB I/O operations would count as 8 operations.
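The accounting rules in the two items above can be sketched as a small counting function, using the maximum I/O sizes stated earlier (256 KiB for SSD, 1,024 KiB for HDD):

```python
# Sketch of the I/O accounting described above: contiguous small I/Os merge
# into operations up to the volume type's maximum I/O size; random I/Os
# each count separately (and a large request splits into cap-sized ops).

MAX_IO_KIB = {"ssd": 256, "hdd": 1024}

def count_ops(io_sizes_kib, contiguous, volume_kind):
    """Count billable I/O operations for a batch of I/O requests."""
    cap = MAX_IO_KIB[volume_kind]
    if contiguous:
        total_kib = sum(io_sizes_kib)
        return -(-total_kib // cap)  # merged, rounding up to cap-sized ops
    return sum(-(-size // cap) for size in io_sizes_kib)

print(count_ops([1024], True, "ssd"))     # 4  (1,024 / 256)
print(count_ops([32] * 8, True, "ssd"))   # 1  (8 x 32 KiB merge into 256 KiB)
print(count_ops([32] * 8, False, "ssd"))  # 8  (random: no merging)
print(count_ops([128] * 8, True, "hdd"))  # 1  (8 x 128 KiB fit in 1,024 KiB)
```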
7. Consequently, when you create an SSD-backed volume supporting 3,000 IOPS (either by provisioning an io1 volume at 3,000 IOPS or by sizing a gp2 volume at 1000 GiB), and you attach it to an EBS-optimized instance that can provide sufficient bandwidth, you can transfer up to 3,000 I/Os of data per second, with throughput determined by I/O size.
8. Queue length must be correctly calibrated with I/O size and latency to avoid creating bottlenecks either on the guest operating system or on the network link to EBS.
9. Transaction-intensive applications are sensitive to increased I/O latency and are well-suited for SSD-backed io1 and gp2 volumes. You can maintain high IOPS while keeping latency down by maintaining a low queue length and a high number of IOPS available to the volume. Consistently driving more IOPS to a volume than it has available can cause increased I/O latency.
10. Throughput-intensive applications are less sensitive to increased I/O latency, and are well-suited for HDD-backed st1 and sc1 volumes. You can maintain high throughput to HDD-backed volumes by maintaining a high queue length when performing large, sequential I/O.
11. For SSD-backed volumes, if your I/O size is very large, you may experience a smaller number of IOPS than you provisioned because you are hitting the throughput limit of the volume. For example, a gp2 volume under 1000 GiB with burst credits available has an IOPS limit of 3,000 and a volume throughput limit of 160 MiB/s. If you are using a 256 KiB I/O size, your volume reaches its throughput limit at 640 IOPS (640 x 256 KiB = 160 MiB). For smaller I/O sizes (such as 16 KiB), this same volume can sustain 3,000 IOPS because the throughput is well below 160 MiB/s
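The example above reduces to taking the smaller of the IOPS limit and the throughput limit divided by the I/O size:

```python
# Arithmetic from the example above: deliverable IOPS is capped by the
# volume's throughput limit once I/O sizes get large.

def achievable_iops(iops_limit, throughput_limit_mib_s, io_size_kib):
    """IOPS a volume can sustain at a given I/O size, per the limits above."""
    throughput_bound = throughput_limit_mib_s * 1024 // io_size_kib
    return min(iops_limit, throughput_bound)

print(achievable_iops(3000, 160, 256))  # 640: throughput-bound (640 x 256 KiB = 160 MiB/s)
print(achievable_iops(3000, 160, 16))   # 3000: IOPS-bound
```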
12. For smaller I/O operations, you may see a higher-than-provisioned IOPS value as measured from inside your instance. This happens when the instance operating system merges small I/O operations into a larger operation before passing them to Amazon EBS.
13. Whatever your EBS volume type, if you are not experiencing the IOPS or throughput you expect in your configuration, ensure that your EC2 instance bandwidth is not the limiting factor. You should always use a current-generation, EBS-optimized instance (or one that includes 10 Gb/s network connectivity) for optimal performance. 
14. New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block. This preliminary action takes time and can cause a significant increase in the latency of an I/O operation the first time each block is accessed. For most applications, amortizing this cost over the lifetime of the volume is acceptable. Performance is restored after the data is accessed once.
15. You can avoid this performance hit in a production environment by reading from all of the blocks on your volume before you use it; this process is called initialization. For a new volume created from a snapshot, you should read all the blocks that have data before using the volume.
16. With Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level. For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together.
17. You should avoid booting from a RAID volume. Grub is typically installed on only one device in a RAID array, and if one of the mirrored devices fails, you may be unable to boot the operating system.
18. RAID 0 : When I/O performance is more important than fault tolerance; for example, as in a heavily used database (where data replication is already set up separately).
Advantages : I/O is distributed across the volumes in a stripe. If you add a volume, you get the straight addition of throughput.
Disadvantages : Performance of the stripe is limited to the worst performing volume in the set. Loss of a single volume results in a complete data loss for the array.
19. RAID 1 : When fault tolerance is more important than I/O performance; for example, as in a critical application.
Advantages : Safer from the standpoint of data durability.
Disadvantages : Does not provide a write performance improvement; requires more Amazon EC2 to Amazon EBS bandwidth than non-RAID configurations because the data is written to multiple volumes simultaneously.
20. Creating a RAID 0 array allows you to achieve a higher level of performance for a file system than you can provision on a single Amazon EBS volume. A RAID 1 array offers a "mirror" of your data for extra redundancy. 
21. The resulting size of a RAID 0 array is the sum of the sizes of the volumes within it, and the bandwidth is the sum of the available bandwidth of the volumes within it. 
22. The resulting size and bandwidth of a RAID 1 array is equal to the size and bandwidth of the volumes in the array. For example, two 500 GiB Amazon EBS volumes with 4,000 provisioned IOPS each will create a 1000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 640 MB/s of throughput or a 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 320 MB/s of throughput.
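The RAID 0 and RAID 1 sizing rules from the example above can be written out as:

```python
def raid0(volumes):
    """RAID 0: size, IOPS, and throughput add up across the members."""
    return {
        "size_gib": sum(v["size_gib"] for v in volumes),
        "iops": sum(v["iops"] for v in volumes),
        "throughput_mb_s": sum(v["throughput_mb_s"] for v in volumes),
    }

def raid1(volumes):
    """RAID 1: the array is only as big and as fast as a single member."""
    return {
        "size_gib": min(v["size_gib"] for v in volumes),
        "iops": min(v["iops"] for v in volumes),
        "throughput_mb_s": min(v["throughput_mb_s"] for v in volumes),
    }

# Two 500 GiB volumes with 4,000 provisioned IOPS each, as in the text
vols = [{"size_gib": 500, "iops": 4000, "throughput_mb_s": 320}] * 2
print(raid0(vols))  # 1000 GiB, 8,000 IOPS, 640 MB/s
print(raid1(vols))  # 500 GiB, 4,000 IOPS, 320 MB/s
```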

Instance Store : 
1. Many instances can access storage from disks that are physically attached to the host computer. This disk storage is referred to as instance store.
2. Instance store provides temporary block-level storage for instances. The data on an instance store volume persists only during the life of the associated instance; if you stop or terminate an instance, any data on instance store volumes is lost.
3. An instance store consists of one or more instance store volumes exposed as block devices. The size of an instance store varies by instance type. 
4. The virtual devices for instance store volumes are ephemeral[0-23]. 
5. Instance types that support one instance store volume have ephemeral0. Instance types that support two instance store volumes have ephemeral0 and ephemeral1, and so on. While an instance store is dedicated to a particular instance, the disk subsystem is shared among instances on a host computer.
6. You can specify instance store volumes for an instance only when you launch it; you can't add instance store volumes to a running instance.
7. Data in the instance store is lost under the following circumstances:
The underlying disk drive fails
The instance stops
The instance terminates
8. You can't detach an instance store volume from one instance and attach it to a different instance. If you create an AMI from an instance, the data on its instance store volumes isn't preserved and isn't present on the instance store volumes of the instances that you launch from the AMI.
9. You can't make an instance store volume available after you launch the instance.
10. Some instance types use solid state drives (SSD) to deliver very high random I/O performance. This is a good option when you need storage with very low latency, but you don't need the data to persist when the instance terminates or you can take advantage of fault-tolerant architectures.
11. A block device mapping always specifies the root volume for the instance. The root volume is either an Amazon EBS volume or an instance store volume. 
12. The root volume is mounted automatically. For instances with an instance store volume for the root volume, the size of this volume varies by AMI, but the maximum size is 10 GB.
13. You can specify the instance store volumes for your instance only when you launch an instance. You can't attach instance store volumes to an instance after you've launched it.
14. The number and size of available instance store volumes for your instance varies by instance type. Some instance types do not support instance store volumes. 
15. After you launch the instance, you must ensure that the instance store volumes for your instance are formatted and mounted before you can use them. Note that the root volume of an instance store-backed instance is mounted automatically.
16. After you launch an instance, the instance store volumes are available to the instance, but you can't access them until they are mounted.
17. Because of the way that Amazon EC2 virtualizes disks, the first write to any location on most instance store volumes performs more slowly than subsequent writes. For most applications, amortizing this cost over the lifetime of the instance is acceptable. However, if you require high disk performance, we recommend that you initialize your drives by writing once to every drive location before production use.

EFS File System : 
1. Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.
2. Amazon S3 provides access to reliable and inexpensive data storage infrastructure. It is designed to make web-scale computing easier by enabling you to store and retrieve any amount of data, at any time, from within Amazon EC2 or anywhere on the web. For example, you can use Amazon S3 to store backup copies of your data and applications. Amazon EC2 uses Amazon S3 to store EBS snapshots and instance store-backed AMIs.
3. Every time you launch an instance from an AMI, a root storage device is created for that instance. The root storage device contains all the information necessary to boot the instance. You can specify storage volumes in addition to the root device volume when you create an AMI or launch an instance using block device mapping.
4. You can use an EFS file system as a common data source for workloads and applications running on multiple instances.
5. Amazon EFS is not supported on Windows instances.
6. Prerequisites
Create a security group (for example, efs-sg) and add the following rules:
Allow inbound SSH connections from your computer (the source is the CIDR block for your network)
Allow inbound NFS connections from EC2 instances associated with the group (the source is the security group itself)
Create a key pair. You must specify a key pair when you configure your instances or you can't connect to them.
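The two inbound rules above can be written out as configuration. NFS uses TCP port 2049; the CIDR block and security group identifier below are placeholders for your own network:

```python
# Inbound rules for the example efs-sg security group.
# "203.0.113.0/24" and "sg-efs-sg" are placeholder values; substitute
# your network's CIDR block and the security group's actual ID.
efs_sg_inbound = [
    {"protocol": "tcp", "port": 22,   "source": "203.0.113.0/24"},  # SSH from your network
    {"protocol": "tcp", "port": 2049, "source": "sg-efs-sg"},       # NFS from instances in the group
]

print({rule["port"] for rule in efs_sg_inbound})  # {22, 2049}
```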

S3 : 
1. Amazon S3 is a repository for Internet data. Amazon S3 provides access to reliable, fast, and inexpensive data storage infrastructure. It is designed to make web-scale computing easy by enabling you to store and retrieve any amount of data, at any time, from within Amazon EC2 or anywhere on the web. Amazon S3 stores data objects redundantly on multiple devices across multiple facilities and allows concurrent read or write access to these data objects by many separate clients or application threads. You can use the redundant data stored in Amazon S3 to recover quickly and reliably from instance or application failures.
2. Amazon EC2 uses Amazon S3 for storing Amazon Machine Images (AMIs). You use AMIs for launching EC2 instances. In case of instance failure, you can use the stored AMI to immediately launch another instance, thereby allowing for fast recovery and business continuity.
3. Objects are the fundamental entities stored in Amazon S3. Every object stored in Amazon S3 is contained in a bucket. Buckets organize the Amazon S3 namespace at the highest level and identify the account responsible for that storage. Amazon S3 buckets are similar to Internet domain names.

1. The maximum number of volumes that your instance can have depends on the operating system. When deciding how many volumes to add to your instance, consider whether you need increased I/O bandwidth or increased storage capacity.
2. Attaching more than 40 volumes to a Linux instance is supported on a best effort basis only and is not guaranteed.
3. We do not recommend that you give a Windows instance more than 26 volumes with AWS PV or Citrix PV drivers, as it is likely to cause performance issues.
4. For RAID configurations, many administrators find that arrays larger than 8 volumes have diminished performance returns due to increased I/O overhead. Test your individual application performance and tune it as required.

Block Device Mapping:
1. Each instance that you launch has an associated root device volume, either an Amazon EBS volume or an instance store volume. 
2. You can use block device mapping to specify additional EBS volumes or instance store volumes to attach to an instance when it's launched. 
3. You can also attach additional EBS volumes to a running instance;
4. However, the only way to attach instance store volumes to an instance is to use block device mapping to attach them as the instance is launched.
5. A block device is a storage device that moves data in sequences of bytes or bits (blocks). These devices support random access and generally use buffered I/O. Examples include hard disks, CD-ROM drives, and flash drives. A block device can be physically attached to a computer or accessed remotely as if it were physically attached to the computer. Amazon EC2 supports two types of block devices:
Instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance)
EBS volumes (remote storage devices)
6. A block device mapping defines the block devices (instance store volumes and EBS volumes) to attach to an instance. You can specify a block device mapping as part of creating an AMI so that the mapping is used by all instances launched from the AMI. Alternatively, you can specify a block device mapping when you launch an instance, so this mapping overrides the one specified in the AMI from which you launched the instance.
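A block device mapping is expressed as a list of device entries. The fragment below is a hedged example in the JSON form accepted by the EC2 API (for instance, the --block-device-mappings option of `aws ec2 run-instances`); device names vary by AMI and virtualization type:

```python
import json

# One instance store volume and one 100 GiB gp2 EBS volume.
# /dev/sdb and /dev/sdf are example device names, not fixed requirements.
mapping = json.loads("""
[
  {"DeviceName": "/dev/sdb", "VirtualName": "ephemeral0"},
  {"DeviceName": "/dev/sdf",
   "Ebs": {"VolumeSize": 100, "VolumeType": "gp2", "DeleteOnTermination": true}}
]
""")

print(len(mapping))  # 2 devices mapped
```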
7. Some instance types include more instance store volumes than others, and some instance types contain no instance store volumes at all. If your instance type supports one instance store volume, and your AMI has mappings for two instance store volumes, then the instance launches with one instance store volume.
8. Instance store volumes can only be mapped at launch time. You cannot stop an instance without instance store volumes (such as the t2.micro), change the instance to a type that supports instance store volumes, and then restart the instance with instance store volumes. However, you can create an AMI from the instance and launch it on an instance type that supports instance store volumes, and map those instance store volumes to the instance.
If you launch an instance with instance store volumes mapped, and then stop the instance and change it to an instance type with fewer instance store volumes and restart it, the instance store volume mappings from the initial launch still show up in the instance metadata. However, only the maximum number of supported instance store volumes for that instance type are available to the instance.
When an instance is stopped, all data on the instance store volumes is lost.
Depending on instance store capacity at launch time, M3 instances may ignore AMI instance store block device mappings unless you specify them at launch. To ensure that the instance store volumes are available when the instance launches, specify instance store block device mappings at launch time, even if the AMI you are launching already has them mapped.

Troubleshoot: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-troubleshoot.html (Go through all the cases)
1. After you launch an instance, we recommend that you check its status to confirm that it goes from the pending state to the running state, not the terminated state.
The following are a few reasons why an instance might immediately terminate:
You've reached your EBS volume limit. For information about the volume limit, and to submit a request to increase your volume limit, see Request to Increase the Amazon EBS Volume Limit.
An EBS snapshot is corrupt.
The instance store-backed AMI you used to launch the instance is missing a required part (an image.part.xx file).
2. If you try to connect to your instance and get an error message Network error: Connection timed out or Error connecting to [instance], reason: -> Connection timed out: connect, try the following:
Check your security group rules. You need a security group rule that allows inbound traffic from your public IP address on the proper port.
[EC2-VPC] Check the route table for the subnet. You need a route that sends all traffic destined outside the VPC (0.0.0.0/0) to the Internet gateway for the VPC.
[EC2-VPC] Check the network access control list (ACL) for the subnet. The network ACLs must allow inbound and outbound traffic from your public IP address on the proper port. The default network ACL allows all inbound and outbound traffic.
If your computer is on a corporate network, ask your network administrator whether the internal firewall allows inbound and outbound traffic from your computer on port 22 (for Linux instances) or port 3389 (for Windows instances).
If you have a firewall on your computer, verify that it allows inbound and outbound traffic from your computer on port 22 (for Linux instances) or port 3389 (for Windows instances).
Check that your instance has a public IP address. If not, you can associate an Elastic IP address with your instance. 
Check the CPU load on your instance; the server may be overloaded. AWS automatically provides data such as Amazon CloudWatch metrics and instance status, which you can use to see how much CPU load is on your instance and, if necessary, adjust how your loads are handled. 
3. If you connect to your instance using SSH and get any of the following errors, Host key not found in [directory], Permission denied (publickey), or Authentication failed, permission denied, verify that you are connecting with the appropriate user name for your AMI and that you have specified the proper private key (.pem) file for your instance.
4. If your private key can be read or written to by anyone but you, SSH ignores the key and you see a warning that the private key file is unprotected. Restrict the permissions on the .pem file (for example, chmod 400) so that only you can read it.
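The permission check SSH performs can be mimicked and fixed programmatically; this is a sketch of the equivalent of `chmod 400 my-key.pem`:

```python
import os
import stat

def key_is_private_enough(path):
    """True if the key has no group/other permission bits set; SSH
    refuses keys that are readable or writable by anyone but the owner."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

def lock_down(path):
    """Equivalent of `chmod 400 <path>`: owner read-only."""
    os.chmod(path, stat.S_IRUSR)
```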
5. The ping command uses ICMP traffic. If you are unable to ping your instance, ensure that your inbound security group rules allow ICMP Echo Request traffic from all sources, or from the computer or instance from which you are issuing the command. If you are unable to issue a ping command from your instance, ensure that your outbound security group rules allow ICMP Echo Request traffic to all destinations, or to the host that you are attempting to ping.
6. If you can't force the instance to stop, you can create an AMI from the instance and launch a replacement instance.
7. If your instance remains in the shutting-down state for several hours, Amazon EC2 treats it as a stuck instance and forcibly terminates it.
8. After you terminate an instance, it remains visible for a short while before being deleted. The status shows as terminated. If the entry is not deleted after several hours, contact Support.
9. The automatic recovery process will attempt to recover your instance for up to three separate failures per day. If the instance system status check failure persists, we recommend that you manually start and stop the instance.
10. Your instance may subsequently be retired if automatic recovery fails and a hardware degradation is determined to be the root cause for the original system status check failure.
11. Out of memory:
Amazon EBS-backed : Do one of the following:
Stop the instance, change it to a different instance type (for example, a larger or memory-optimized type), and start the instance again.
Reboot the instance to return it to an unimpaired status. The problem will probably occur again unless you change the instance type.

Instance store-backed :
Do one of the following:
Terminate the instance and launch a new instance, specifying a different instance type. For example, a larger or a memory-optimized instance type.
Reboot the instance to return it to an unimpaired status. The problem will probably occur again unless you change the instance type.
12. I/O error (Block device failure)
Instance type : Amazon EBS-backed
Potential cause : A failed Amazon EBS volume
Use the following procedure:
Stop the instance.
Detach the volume.
Attempt to recover the volume.
Note
It's good practice to snapshot your Amazon EBS volumes often. This dramatically decreases the risk of data loss as a result of failure.
Re-attach the volume to the instance.
Restart the instance.
Instance type : Instance store-backed
Potential cause : A failed physical drive
Terminate the instance and launch a new instance.
Note
Data cannot be recovered. Recover from backups.
Note
It's a good practice to use either Amazon S3 or Amazon EBS for backups. Instance store volumes are directly tied to single host and single disk failures.

IO ERROR: neither local nor remote disk (Broken distributed block device) : 
Suggested Action
Terminate the instance and launch a new instance.
For an Amazon EBS-backed instance you can recover data from a recent snapshot by creating an image from it. Any data added after the snapshot cannot be recovered.

13. "FATAL: kernel too old" and "fsck: No such file or directory while trying to open /dev" (Kernel and AMI mismatch)
Amazon EBS-backed
Use the following procedure:
Stop the instance.
Modify the configuration to use a newer kernel.
Start the instance.

CloudFront : 
1. CloudFront is a web service that speeds up distribution of static, dynamic web or streaming content to end users.
2. CloudFront delivers the content through a worldwide network of data centers called edge locations
3. CloudFront speeds up the distribution of your content by routing each user request to the edge location that can best serve the content thus providing the lowest latency.
4. CloudFront dramatically reduces the number of network hops that users’ requests must pass through, which helps improve performance and provides lower latency and higher data transfer rates.
5. CloudFront also provides increased reliability and availability because copies of objects are held in multiple edge locations around the world
6. Origin servers need to be configured to get the files for distribution. An origin server stores the original, definitive version of your objects
7.  CloudFront distribution, which tells CloudFront which origin servers to get your files from when users request the files
8. CloudFront sends the distribution configuration to all the edge locations
9. Origin server can be configured to limit access protocols, caching behavior, add headers to the files to add TTL or the expiration time
10. If the requested object does not exist in the cache at the edge location, CloudFront requests the object from the origin server and returns it to the user as soon as it starts receiving it
11. supports both static and dynamic content for e.g. html, css, js, images etc using HTTP or HTTPS.
12. supports multimedia content on demand using progressive download and Apple HTTP Live Streaming (HLS).
13. supports a live event, such as a meeting, conference, or concert, in real time. For live streaming, distribution can be created automatically using an AWS CloudFormation stack.
14. origin servers can be either an Amazon S3 bucket or an HTTP server, for e.g., a web server or an AWS ELB etc
15. supports streaming of media files using Adobe Media Server and the Adobe Real-Time Messaging Protocol (RTMP), must use an Amazon S3 bucket as the origin.
16. To stream media files using CloudFront, two types of files are needed
Media files
Media player for e.g. JW Player, Flowplayer, or Adobe flash
17. End users view media files using the media player that you provide, not a player installed locally on the user's computer or device
18. When an end user streams the media file, the media player begins to play the file content while the file is still being downloaded from CloudFront.
19. Media file is not stored locally on the end user’s system.
20. Two CloudFront distributions are required: a web distribution for the media player and an RTMP distribution for the media files
21. Media player and Media files can be stored in same origin S3 bucket or different buckets
22. For HTTP server as the origin, the domain name of the resource needs to be mapped and files must be publicly readable
23. For S3 bucket as origin, use the bucket url or the static website endpoint url and the files either need to be publicly readable or secured using OAI
24. Origin restrict access, for S3 only, can be configured using Origin Access Identity to prevent direct access to the S3 objects
25. Distribution can have multiple origins for each bucket with one or more cache behaviors that route requests to each origin. Path pattern in a cache behavior determines which requests are routed to the origin (the Amazon S3 bucket) that is associated with that cache behavior
26. Viewer Protocol policy can be configured to define the access protocol allowed. Can be either HTTP and HTTPS, or HTTPS only or HTTP redirected to HTTPS
27. Between CloudFront & Viewers, cache distribution can be configured to either allow HTTP or HTTPS requests, or use HTTPS only, or redirect all HTTP request to HTTPS
28. Between CloudFront & Origin, cache distribution can be configured to require that CloudFront fetches objects from the origin by using HTTPS or CloudFront uses the protocol that the viewer used to request the objects.
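The two protocol settings above map to fields in the distribution configuration. The fragments below are illustrative; the field names follow the CloudFront API, and the chosen values are just one possible combination:

```python
import json

# Viewer-facing policy on a cache behavior: allow-all, https-only,
# or redirect-to-https.
cache_behavior = json.loads('{"ViewerProtocolPolicy": "redirect-to-https"}')

# Origin-facing policy on a custom origin: http-only, match-viewer,
# or https-only.
custom_origin = json.loads('{"OriginProtocolPolicy": "match-viewer"}')

print(cache_behavior["ViewerProtocolPolicy"])  # redirect-to-https
```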
29. For Amazon S3 as origin,
for website, the protocol has to be HTTP as HTTPS is not supported
for S3 bucket, the default Origin protocol policy is Match Viewer and cannot be changed. So When CloudFront is configured to require HTTPS between the viewer and CloudFront, CloudFront automatically uses HTTPS to communicate with Amazon S3.
30. CloudFront can also be configured to work with HTTPS for alternate domain names by using:-
Serving HTTPS Requests Using Dedicated IP Addresses
CloudFront associates the alternate domain name with a dedicated IP address, and the certificate is associated with the IP address. When a request is received from a DNS server for the IP address, CloudFront uses the IP address to identify the distribution and the SSL/TLS certificate to return to the viewer
This method works for every HTTPS request, regardless of the browser or other viewer that the user is using.
Additional monthly charge is incurred for using dedicated IP address
Serving HTTPS Requests Using SNI
With SNI method, CloudFront associates an IP address with the alternate domain name, but the IP address is not dedicated
CloudFront can’t determine, based on the IP address, which domain the request is for as the IP address is not dedicated
Browsers that support SNI that is an extension to the TLS protocol, automatically gets the domain name from the request URL and adds it to a new field in the request header.
When CloudFront receives an HTTPS request from a browser that supports SNI, it finds the domain name in the request header and responds to the request with the applicable SSL/TLS certificate.
Viewer and CloudFront perform SSL negotiation, and CloudFront returns the requested content to the viewer.
31. CloudFront only caches responses to GET and HEAD requests and, optionally, OPTIONS requests. CloudFront does not cache responses to PUT, POST, PATCH, DELETE request methods and these requests are directed to the origin
32. For RTMP distributions, CloudFront cannot be configured to process cookies. When CloudFront requests an object from the origin server, it removes any cookies before forwarding the request to your origin. If your origin returns any cookies along with the object, CloudFront removes them before returning the object to the viewer.
33. For RTMP distributions, CloudFront cannot be configured to cache based on header values.
34. For RTMP distributions : CloudFront keeps objects in edge caches for 24 hours by default.

Reference: http://jayendrapatil.com/category/aws/cloudfront/

Instance store-backed (continuation of item 13, kernel and AMI mismatch)
Use the following procedure:
Create an AMI that uses a newer kernel.
Terminate the instance.
Start a new instance from the AMI you created.