AWS Guide
    • Cloud Computing
      • Amazon EC2
      • Lightsail
      • AWS Batch
      • AWS Elastic Beanstalk
      • AWS Lambda
      • AWS Outposts
    • AWS Storage
      • Amazon S3
      • Amazon EBS
      • Amazon EFS
      • Amazon Glacier
      • Storage Gateway
      • AWS Snowball
    • Networking
      • Amazon VPC
      • API Gateway
      • CloudFront
      • Direct Connect
      • Elastic Load Balancing
      • Route 53
    • Database
      • Amazon RDS
      • DocumentDB
      • DynamoDB
      • ElastiCache
      • Neptune
      • Redshift
    • Management
      • AWS IAM
      • Auto Scaling
      • CloudTrail
      • CloudWatch
      • CloudFormation
    • Container
      • Amazon ECS
      • Amazon EKS


    Amazon Elastic Compute Cloud

    Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable (scalable) compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers, and it is the central part of Amazon’s cloud-computing platform, Amazon Web Services (AWS). Unlike traditional data centers, which lease physical resources, Amazon EC2 leases virtualized resources, which are mapped to physical machines and run transparently to the client by the cloud’s virtualization middleware, Xen. EC2 is an IaaS cloud computing service that opens Amazon’s large computing infrastructure to its clients. The service is elastic in the sense that it enables customers to increase or decrease their infrastructure by launching or terminating virtual machines known as instances.

    • Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing customers to quickly scale capacity, both up and down, as their computing requirements change.
    • The AWS Nitro System is the underlying platform for AWS’s next generation of EC2 instances. It offloads many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while also reducing virtualization overhead.
    • Customers have complete control over the type of storage they want to use, the network configuration, and the security configuration.
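    The elasticity described above can be sketched with the AWS CLI, assuming the CLI is installed and configured with credentials and a default region; the AMI and instance IDs below are placeholders:

```shell
# Launch a new t3.micro instance from an AMI (IDs are hypothetical)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --count 1

# Scale back down by terminating an instance that is no longer needed
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```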

    Amazon Elastic Compute Benefits

    Amazon EC2 bare metal instances provide your applications with direct access to the processor and memory of the underlying server. These instances are ideal for workloads that require access to hardware feature sets (such as Intel® VT-x), or for applications that need to run in non-virtualized environments for licensing or support requirements. Bare metal instances are built on the Nitro system, a collection of AWS-built hardware offload and hardware protection components that come together to securely provide high performance networking and storage resources to EC2 instances.

    Amazon EC2 is integrated with most AWS services, such as S3, VPC, Lambda, Redshift, RDS, EMR, and so on. Using EC2 and the other services of AWS, customers can get a complete solution for all of their IT needs. The data center and network architecture of AWS is built to meet the requirements of the most security-sensitive organizations. Amazon EC2 works in conjunction with Amazon VPC to provide security and robust networking functionality for its customers’ compute resources.

    Amazon EC2’s simple web service interface allows customers to obtain and configure capacity with minimal friction. It provides them with complete control of their computing resources and lets them run on Amazon’s proven computing environment.

    Customers have the choice of multiple instance types, operating systems, and software packages. Amazon Elastic Compute Cloud allows its customers to select a configuration of memory, CPU, instance storage, and boot partition size that is optimal for their choice of operating system and application. Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and predictably commissioned.


    EC2 Features

    • Instances:- Amazon EC2 presents a virtual computing environment, allowing its customers to use web service interfaces to launch instances with a variety of operating systems, load them with their custom application environments, manage network access permissions, and run their images on as many or as few systems as they desire.
    • Regions and Availability Zones:- AWS offers multiple physical locations for its customers’ resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones.
    • Amazon EBS volumes:- Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon EC2 for both throughput- and transaction-intensive workloads at any scale.
    • Virtual private clouds (VPCs):- Amazon Virtual Private Cloud (Amazon VPC) is a secure and seamless bridge between customers’ existing IT infrastructure and the AWS cloud. Amazon VPC enables customers to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection.
    • Instance types:- Amazon EC2 provides a large selection of instance types, which can be optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give customers the flexibility to choose the appropriate mix of resources for their applications. Each instance type includes one or more instance sizes, allowing customers to scale their resources to the requirements of their target workload.
    • Key pairs:- Key pairs provide secure login information for instances (AWS stores the public key, and the customer stores the private key in a secure place).
    • Amazon Machine Images (AMIs):- An AMI is a pre-configured template used to create a virtual machine within EC2. It packages the bits customers need for their server, including the operating system and additional software.
    • Security groups:- A firewall that enables customers to specify the protocols, ports, and source IP ranges that can reach their instances.
    • Tags:- Tags are words or phrases that act as metadata for identifying and organizing AWS resources. A resource can have up to 50 user-applied tags.
    • Elastic IP addresses:- An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with a customer’s AWS account. With an Elastic IP address, customers can mask the failure of an instance or software by rapidly remapping the address to another instance in their account.
    • Instance store volumes:- An instance store is temporary storage located on disks that are physically attached to the host machine. Instance stores are made up of one or more instance store volumes exposed as block devices. Instance store volumes hold temporary data, which is deleted when customers stop or terminate their instance.
    • Flexible pricing:- Servers are charged per hour or per second, so customers don’t have to pay a large upfront expense when provisioning servers on EC2.
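    The Elastic IP remapping mentioned in the list above can be sketched with the AWS CLI; the allocation and instance IDs are placeholders, and the commands assume configured credentials:

```shell
# Allocate an Elastic IP address in the account (VPC scope)
aws ec2 allocate-address --domain vpc

# Mask a failure by remapping the address to a healthy instance
aws ec2 associate-address \
    --allocation-id eipalloc-0abc12345def67890 \
    --instance-id i-0123456789abcdef0
```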

    Amazon Machine Image

    An Amazon Machine Image (AMI) is a packaged environment containing a software configuration and other components that is used to create a virtual machine within EC2. In other words, an AMI is a template containing a software configuration from which customers launch instances, which are copies of the AMI running as virtual servers in the cloud.

    • An instance is a virtual server in the cloud. Its configuration at launch is a copy of the AMI that AWS clients specified when they launched the instance. They are able to launch different types of instances from a single AMI. An instance type essentially determines the hardware of the host computer used for customers instance. Each instance type offers different compute and memory capabilities.
    • An AMI defines the initial software that will be on an instance when it is launched. It also defines every aspect of the software state at instance launch, which includes: 
      • The Operating System (OS) and its configuration 
      • The initial state of any patches 
      • Application or system software.
    • Launch permissions control which AWS accounts can use the AMI to launch instances. The owner of an AMI determines its availability by specifying launch permissions. There are three types of launch permissions: 
      • Public:- The owner grants launch permissions to all AWS accounts. 
      • Explicit:- The owner grants launch permissions to specific AWS accounts. 
      • Implicit:- The owner has implicit launch permissions for an AMI.

    AMIs come in four main categories:

    1. Community AMIs by AWS:—AWS publishes AMIs with versions of many different OSs, both Linux and Windows. Launching an instance based on one of these AMIs results in the default OS settings, similar to installing an OS from a standard ISO image. They are free to use; generally, customers just select the operating system they want. 
    2. AWS Marketplace AMIs:—AWS Marketplace is an online store that helps customers find, buy, and immediately start using software and services that run on Amazon EC2. It enables software providers to sell their products through AWS Marketplace. Customers are billed by AWS, and AWS then pays the AMI owner their share of the sale. 
    3. Generated from Existing Instances:—An AMI can be created from an existing Amazon EC2 instance. This is a very common source of AMIs: customers launch an instance from a published AMI, and then the instance is configured to meet all of the customer’s corporate standards for updates, management, and security.
    4. My AMIs – Uploaded Virtual Servers:—AMIs that customers create themselves. Using the AWS VM Import/Export service, customers can create images from various virtualization formats, including raw, VHD, VMDK, and OVA.
      • VM Import/Export not only enables AWS clients to import Virtual Machines (VMs) from their existing environment as Amazon EC2 instances, but also to export them back to their on-premises environment as they desire. Exporting imported instances back to on-premises virtualization infrastructure lets clients deploy workloads across their IT infrastructure.
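    Category 3 above, generating an AMI from a configured instance, can be sketched with the AWS CLI; the instance ID and image name are placeholders, and the command assumes configured credentials:

```shell
# Create an AMI from a running, configured instance (IDs are hypothetical)
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "corp-standard-build-v1" \
    --description "Base image with corporate updates and security settings"
```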

    Regions

    The AWS Cloud infrastructure is built around Regions and Availability Zones (AZs). A Region is a physical location in the world with multiple AZs. Availability Zones consist of one or more discrete data centers, each with redundant power and networking, housed in separate facilities that are located on stable flood plains. 

    A Region is a geographical area that is completely independent, and each Availability Zone within it is isolated. However, the Availability Zones in a Region are connected through low-latency links. A Local Zone is an extension of a Region that is located apart from the parent Region. It is an AWS infrastructure deployment that places select services closer to end users and provides a high-bandwidth backbone to the AWS infrastructure, which makes it ideal for latency-sensitive applications.

    Each Amazon Region is designed to be completely isolated from the other Amazon Regions. This isolation: 

    • Achieves the greatest possible fault tolerance and stability. 
    • Enables customers to replicate data within a Region and between Regions using private or public Internet connections.
    • Allows customers to retain complete control and ownership over the Region in which their data is physically located.

    An AWS account provides multiple Regions so that its customers can launch Amazon EC2 instances in locations that meet their requirements. 

    • The largest AWS Region is in the northeastern US, where N. Virginia has six zones, followed by Ohio (three). The rest include N. California (three), Oregon (three), Mumbai (two), Seoul (two), Singapore (two), Sydney (three), Tokyo (four), Bahrain, Canada Central (two), China Beijing (two), Frankfurt (three), Ireland (three), London (two), and São Paulo (three). Moving forward, new AWS Regions will have three or more zones whenever possible. When customers create certain resources in a Region, they are asked to choose a zone in which to host those resources.
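    The Regions and zones available to an account can be listed with the AWS CLI, assuming configured credentials:

```shell
# List the Regions enabled for the account
aws ec2 describe-regions --query "Regions[].RegionName" --output table

# List the Availability Zones within one Region
aws ec2 describe-availability-zones --region us-east-1 \
    --query "AvailabilityZones[].ZoneName" --output table
```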

    Availability Zones

    Availability Zones are physically separate and isolated from each other. AZs span one or more data centers and have direct, low-latency, high throughput and redundant network connections between each other. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities. 

    • Availability Zones offer clients the ability to operate production applications and databases that are more highly available, fault tolerant, and scalable.
      • Each AZ is designed as an independent failure zone.
      • Although Availability Zones are isolated from one another, the Availability Zones in a Region are connected through low-latency links. 
    • Each AWS Region has multiple Availability Zones and data centers. AWS clients can deploy their applications across multiple Availability Zones in the same region.
      • Availability Zones are connected to each other with fast, private fiber-optic networks, which enables applications to automatically fail over between Availability Zones without interruption.
    • In addition to replicating applications and data across multiple data centers in the same Region using Availability Zones, clients can also choose to further increase redundancy and fault tolerance by replicating data between geographic Regions. 
      • They can do so using both private and public Networks to provide an additional layer of business continuity, or to provide low latency access across the globe.
    • Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated within a typical metropolitan region and are located in lower risk flood areas.
      • An Availability Zone is represented by a region code followed by a letter identifier; for example, us-east-1a.
      • In order to coordinate Availability Zones across accounts, clients need to use the AZ ID, a unique and consistent identifier for an Availability Zone. 
        • For example, use1-az1 is an AZ ID in the us-east-1 Region.
      • Viewing AZ IDs enables customers to determine the location of resources in one account relative to the resources in another account.
    • When launching an instance, AWS clients can select an Availability Zone or let AWS choose one for them. Distributing instances across multiple Availability Zones lets customers fall back on instances in another zone if one instance fails: they can design their application so that an instance in another Availability Zone can handle requests.
    • They can also use Elastic IP addresses to mask the failure of an instance in one Availability Zone by rapidly remapping the address to an instance in another Availability Zone.
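    The account-independent AZ IDs described above can be inspected with the AWS CLI, assuming configured credentials; the name-to-ID mapping (e.g. us-east-1a to use1-az1) varies per account:

```shell
# Show each zone's name alongside its consistent AZ ID
aws ec2 describe-availability-zones --region us-east-1 \
    --query "AvailabilityZones[].[ZoneName,ZoneId]" --output table
```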

    Local Zones

    AWS Local Zones place compute, storage, and other select services closer to end users. A Local Zone is an extension of a Region: an AWS infrastructure deployment located near large population and industry centers, apart from the parent Region, and connected to the AWS infrastructure through a high-bandwidth backbone. This makes Local Zones ideal for latency-sensitive applications that need to run close to their end users.

    Amazon Web Services 

    What is AWS?

    Amazon Web Services (AWS) is a cloud computing platform that offers IT infrastructure services to businesses as web services. One of the major benefits of cloud computing is the opportunity to replace upfront capital infrastructure expenses with low variable costs that scale with the business. AWS provides a highly reliable, scalable, low-cost infrastructure platform in the cloud that powers hundreds of thousands of businesses in 190 countries around the world.

    Amazon Web Services offers a broad set of global cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications: on demand, available in seconds, with pay-as-you-go pricing. From data warehousing to deployment tools, directories to content delivery, AWS offers over 140 services, such as EC2, Lightsail, databases, and many more. New services can be provisioned quickly, without the upfront capital expense. This allows enterprises, start-ups, small and medium-sized businesses, and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements. This guide provides an overview of the benefits of the AWS Cloud and introduces the services that make up the platform.

    How to create an AWS account

    1. Open the Amazon Web Services home page.
    2. Choose Create an AWS Account.
    3. Enter your account information (email address and password), and then choose Continue.
    4. Choose a Personal or Professional account.
    5. Enter your company or personal information.
    6. Read and accept the AWS Customer Agreement.
    7. Choose Create Account and Continue.
    8. You receive an email confirming that your account was created. You can sign in to your new account using the email address and password you registered with; however, you can’t use AWS services until you finish activating the account.
    9. On the Payment Information page, enter the information about your payment method, and then choose Verify and Add.
    10. Choose your country or region code from the list.
    11. Enter a phone number where you can be reached in the next few minutes.
    12. Enter the code displayed in the CAPTCHA, and then submit.
    13. In a few moments, an automated system contacts you.
    14. Enter the PIN you receive, and then choose Continue.
    15. On the Select a Support Plan page, choose one of the available Support plans. For a description of the available Support plans and their benefits, see Compare AWS Support plans.
    16. After you choose a Support plan, a confirmation page indicates that your account is being activated. Accounts are usually activated within a few minutes, but the process might take up to 24 hours. You can sign in to your AWS account during this time.
    17. Enter your email address or user ID, then choose Next.
    18. Enter your password and choose Sign in.

    Security Group

    A security group acts as a virtual firewall for a customer’s instance, controlling inbound and outbound traffic. Security groups allow customers to control traffic based on port, protocol, and source/destination. 

    • A security group is default deny; that is, it does not allow any traffic that is not explicitly allowed by a security group rule. A rule is defined by three attributes: 
      • Port:- The port number affected by the rule; for instance, port 80 for HTTP traffic. 
      • Protocol:- The communications standard for the traffic affected by the rule. 
      • Source/Destination:- The other end of the communication: the source for inbound rules, or the destination for outbound rules. It can be defined in two ways: as a CIDR block (an x.x.x.x/x-style definition that specifies a range of IP addresses), or as a security group, which matches any instance associated with that security group and helps avoid coupling rules to specific IP addresses.
    • Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in their VPC can be assigned to a different set of security groups.
    • For each security group, customers add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. 
    • Customers can add or remove rules for a security group (also referred to as authorizing or revoking inbound or outbound access). A rule applies either to inbound traffic (ingress) or outbound traffic (egress). 
    • If the customer’s VPC has a VPC peering connection with another VPC, a security group rule can reference a security group in the peer VPC. 
    • A security group is the firewall of EC2 instances.
    • Security groups are tied to an instance.
    • Security groups have to be assigned explicitly to an instance; their rules apply only to the instances they are attached to. By contrast, a Network ACL automatically applies to every instance within its subnet, which can be useful for managing the firewalls of many instances at once. With security groups, you have to assign a group to each instance.
    • Security groups are stateful: if an inbound request is allowed, the corresponding response traffic is automatically allowed out, regardless of outbound rules (and vice versa). 
    • Security groups support allow rules only (anything not explicitly allowed is denied); e.g., you cannot deny a certain IP address from establishing a connection.
    • All rules in a security group are applied; that is, security groups evaluate all of their rules before allowing traffic. 
    • Security groups are the first layer of defense.

    A security group acts as a virtual firewall for customers’ EC2 instances, controlling incoming and outgoing traffic. Inbound rules control the incoming traffic to an instance, and outbound rules control the outgoing traffic from it. When customers launch an instance, they can specify one or more security groups.

    If a security group is not specified, Amazon EC2 uses the default security group. Customers can add rules to each security group that allow traffic to or from its associated instances. New and modified rules are automatically applied to all instances that are associated with the security group. When Amazon EC2 decides whether to allow traffic to reach an instance, it evaluates all of the rules from all of the security groups that are associated with the instance.

    The rules of a security group control the inbound traffic that’s allowed to reach the instances that are associated with the security group. The rules also control the outbound traffic that’s allowed to leave them.

    How to create a security group in AWS

    1. Open the Amazon EC2 console.
    2. From the navigation bar, select a Region for the security group. Security groups are specific to a Region, so you should select the same Region in which you created your key pair.
    3. In the navigation pane, choose Security Groups.
    4. Choose Create security group.
    5. In the Basic details section, do the following:
      1. Enter a name for the new security group and a description. Use a name that is easy for you to remember, such as your user name, followed by _SG_, plus the Region name. For example, me_SG_uswest2.
      2. In the VPC list, select your default VPC for the Region.
    6. In the Inbound rules section, create the following rules (choose Add rule for each new rule):
      • Choose HTTP from the Type list, and make sure that Source is set to Anywhere (0.0.0.0/0).
      • Choose HTTPS from the Type list, and make sure that Source is set to Anywhere (0.0.0.0/0).
      • Choose SSH from the Type list. In the Source box, choose My IP to automatically populate the field with the public IPv4 address of your local computer. Alternatively, choose Custom and specify the public IPv4 address of your computer or network in CIDR notation.
    7. Choose Create security group.
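    The console steps above can also be performed with the AWS CLI, assuming configured credentials; the VPC ID, group ID, and /32 source address are placeholders:

```shell
# Create the security group in the default VPC for the Region
aws ec2 create-security-group \
    --group-name me_SG_uswest2 \
    --description "Web and SSH access" \
    --vpc-id vpc-0abc12345def67890

# Allow HTTP and HTTPS from anywhere, and SSH from one address only
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32
```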

    Key Pairs

    A key pair is a set of security credentials, consisting of a private key and a public key, that AWS clients use to prove their identity when connecting to an instance. Amazon EC2 is responsible for storing the public key, and the client is responsible for storing the private key. Amazon EC2 provides scalable computing capacity in the Amazon Web Services Cloud. Using Amazon EC2 eliminates customers’ need to invest in hardware up front, so they can develop and deploy applications faster. AWS customers can use Amazon EC2 to launch as many or as few virtual servers as they need, configure security and networking, and manage storage.

    Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. At the basic level, a sender uses a public key to encrypt data, which the receiver then decrypts using the corresponding private key. These two keys, public and private, are known as a key pair.

    • Key pairs can be created through the AWS Management Console, CLI, or API, or customers can upload their own key pairs. AWS stores the public key, and the private key is kept by the customer.
    • Public-key cryptography enables customers to securely access their instances using a private key instead of a password.
    • Linux instances do not have a password already set and customers must use the key pair to log in to Linux instances. 
    • On Windows instances, customers need the key pair to decrypt the administrator password. Using the decrypted password, they can use RDP and then connect to their Windows instance. 
    • Amazon EC2 stores only the public key, so customers need to either generate the key pair inside Amazon EC2 or import it. Since the private key is not stored by Amazon, it is advisable to keep it in a secure place, as anyone who has the private key can connect to the instances that use it.
    • When launching an instance, customers need to specify the name of the key pair that they plan to use to connect to the instance. 
    • To connect to an instance, customers must also provide the private key that corresponds to the key pair they specified when they launched the instance.
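    Importing your own key pair, as mentioned above, can be sketched as follows; the key file name is arbitrary, and the import command (shown commented out) assumes configured AWS credentials:

```shell
# Generate an RSA key pair locally; AWS never sees the private key
ssh-keygen -t rsa -b 2048 -f ./my-ec2-key -N "" -q

# Upload only the public half to EC2 (requires configured credentials):
# aws ec2 import-key-pair --key-name my-ec2-key \
#     --public-key-material fileb://my-ec2-key.pub
```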

    Instance Metadata and Tags

    Instance metadata is data about a customer’s instance that they can use to configure or manage the running instance. Instance metadata is divided into categories such as host name, events, and security groups.

    • Amazon Web Services allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, they enable customers to categorize resources by purpose, owner, environment, or other criteria.
    • The AWS Management Console is organized by AWS service, but it also allows customers to create a custom console that organizes and consolidates AWS resources based on one or more tags or portions of tags. Using this tool, customers can consolidate and view data for applications that consist of multiple services and resources in one place.

    Best Practices for Tags 

    • Employ a cross-functional team to identify tag requirements.
    • Use tags consistently.
    • Consider tags from a cost/benefit perspective when deciding on the list of required tags.
    • Adopt a standardized approach for tag names; note that names for AWS tags are case sensitive.
    • Use both linked accounts and cost allocation tags.
    • Avoid multi-valued cost allocation tags for shared resources.
    • Tag everything.
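    Applying and querying tags, as discussed above, can be sketched with the AWS CLI; the instance ID and tag values are placeholders, and the commands assume configured credentials:

```shell
# Apply two key/value tags to an instance
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=Environment,Value=Production Key=Owner,Value=WebTeam

# Find all instances carrying a given tag
aws ec2 describe-instances \
    --filters "Name=tag:Environment,Values=Production" \
    --query "Reservations[].Instances[].InstanceId"
```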

    How to create a key pair

    1. Open the Amazon EC2 console.
    2. In the navigation pane, choose Key Pairs.
    3. Choose Create key pair.
    4. For Name, enter a descriptive name for the key pair. Amazon EC2 associates the public key with the name that was specified as the key name. A key name can include up to 255 ASCII characters. It can’t include leading or trailing spaces.
    5. For File format, choose the format in which to save the private key. To save the private key in a format that can be used with OpenSSH, choose pem. To save the private key in a format that can be used with PuTTY, choose ppk.
    6. Choose Create key pair.
    7. The private key file is automatically downloaded by the browser. Save the private key file in a safe place.
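    The naming rules in step 4 (up to 255 ASCII characters, no leading or trailing spaces) can be checked programmatically. The validator below is our own sketch of those rules, not an AWS API:

```python
# Minimal sketch of the key-pair name constraints described above:
# non-empty, at most 255 characters, ASCII only, no leading/trailing spaces.
# The function name and the non-empty requirement are our own assumptions.

def is_valid_key_name(name: str) -> bool:
    if not name or len(name) > 255:
        return False
    if name != name.strip():                 # no leading/trailing spaces
        return False
    return all(ord(c) < 128 for c in name)   # ASCII only

print(is_valid_key_name("my-dev-keypair"))   # True
print(is_valid_key_name(" my-keypair"))      # False (leading space)
```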

    EC2 instances

    The type of instance that clients specify determines the hardware of the host computer used for their instance. Each instance type offers different compute, memory, and storage capabilities, and instance types are grouped into instance families based on these capabilities. Each instance type provides a higher or lower minimum level of performance from a shared resource.
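    Instance type names encode this grouping: in m6g.large, for example, m is the family, 6 the generation, g a capability suffix (Graviton), and large the size. A small, illustrative parser for that naming pattern:

```python
# Hedged sketch: split an EC2 instance type name into family, generation,
# capability suffix, and size. The naming pattern assumed here is
# "<letters><digits><letters>.<size>", e.g. "m6g.large" or "t2.micro".

import re

def parse_instance_type(instance_type: str) -> dict:
    prefix, size = instance_type.split(".", 1)
    m = re.match(r"([a-z]+)(\d+)([a-z]*)", prefix)
    return {
        "family": m.group(1),
        "generation": int(m.group(2)),
        "suffix": m.group(3),
        "size": size,
    }

print(parse_instance_type("m6g.large"))
# {'family': 'm', 'generation': 6, 'suffix': 'g', 'size': 'large'}
```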

    General purpose instances

    General purpose instances provide a balance of compute, memory, and networking resources, and can be used for a variety of workloads. These instances are ideal for applications that use these resources in equal proportions, such as web servers and code repositories. 

    • Amazon EC2 A1 instances deliver significant cost savings and are ideally suited for scale-out and Arm-based workloads that are supported by the extensive Arm ecosystem. They are powered by the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor.
    • T3 and T3a instances are the next generation burstable general-purpose instance type that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances offer a balance of compute, memory, and network resources and are designed for applications with moderate CPU usage that experience temporary spikes in use.
    • T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline.
    • Amazon EC2 M6g instances are powered by Arm-based AWS Graviton2 processors. They deliver up to 40% better price/performance over current generation M5 instances and offer a balance of compute, memory, and networking resources for a broad set of workloads.
      • Custom built AWS Graviton2 Processor with 64-bit Arm Neoverse cores 
      • Support for Enhanced Networking with Up to 25 Gbps of Network bandwidth

    Compute Optimized

    Compute Optimized instances are ideal for compute bound applications that benefit from high performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high performance web servers, high performance computing (HPC), scientific modeling, dedicated gaming servers and ad server engines, machine learning inference and other compute intensive applications.

    • C5n instances are ideal for high compute applications (including High Performance Computing (HPC) workloads, data lakes, and network appliances such as firewalls and routers) that can take advantage of improved network throughput and packet rate performance. C5n instances offer up to 100 Gbps network bandwidth and increased memory over comparable C5 instances.
    • C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance at a low price per compute ratio. C5 instances offer a choice of processors based on the size of the instance.
      • C5 instances are ideal for applications where you prioritize raw compute power, such as gaming servers, scientific modeling, high-performance web servers, and media transcoding. 
    • C4 instances are an earlier generation of Compute-optimized instances, featuring high-performing processors and a low price per unit of compute performance in EC2.

    Memory Optimized

    Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.

    • Amazon EC2 z1d instances offer both high compute capacity and a high memory footprint. High frequency z1d instances deliver a sustained all core frequency of up to 4.0 GHz, the fastest of any cloud instance.
    • X1 and X1e instances are optimized to provide a high ratio of memory to compute with the X1e family delivering the highest memory to compute ratio among EC2 offerings.
      • These instances are used for the highest need memory-intensive applications such as SAP HANA, providing a strong foundation for real-time applications.
      • Instances are optimized for large-scale, enterprise-class, in-memory applications and high-performance databases, and have the lowest price per GiB of RAM among Amazon EC2 instance types.
    • High Memory instances have the greatest amount of available RAM, providing 6 TB, 9 TB, or 12 TB of memory in a single instance. Like X1 and X1e, these are suited to production deployments of hugely memory intensive, real-time databases such as SAP HANA.
    • R4 instances are optimized for memory-intensive applications and offer better price per GiB of RAM than R3. The RAM sizes are a step below the X1s.
    • R5 and R5a are respectively the Intel and AMD offerings of “regular” memory optimized instances. These instances are ideal for memory intensive applications such as real-time big data analytics, large in-memory caches, and high-performance databases. The R5 and R5a instances benefit from the AWS Nitro System, which gives you access to almost all of the compute and memory resources of a server (i.e. allocating as little as possible to the OS). This optimization allows for lower cost when compared on a per-GiB basis to competitors.

    Storage Optimized

    Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

    • H1 and D2 instances feature up to 16 TB and 48 TB of HDD-based local storage respectively; both deliver high disk throughput and a balance of compute and memory. D2 instances offer the lowest price per disk throughput performance on Amazon EC2.
    • I3 and I3en instances provide Non-Volatile Memory Express (NVMe) SSD-backed instance storage optimized for low latency and very high random I/O performance. I3 offers high sequential read throughput, while I3en adds high IOPS and high sequential disk throughput, and offers the lowest price per GB of SSD instance storage on Amazon EC2.


    Pricing

    AWS provides different families of instance types based on different needs. Some families support general-purpose computing, while others are optimized for processing, memory, storage, and other purposes. Within each family, different sizes of instances offer increasing levels of processing power, available memory, storage capacity, and network bandwidth.

    • Amazon EC2 is free to try. There are four ways to pay for Amazon EC2 instances: 
      • On-Demand, 
      • Reserved Instances, 
      • Spot Instances, and 
      • Dedicated Hosts, which provide customers with EC2 instance capacity on physical servers dedicated to their use.

    On-Demand Instance

    • With On-Demand instances, users pay for compute capacity by the hour or by the second, depending on which instances they run. 
    • On-Demand suits applications with short-term, spiky, or unpredictable workloads that cannot be interrupted.
    • It also suits applications being developed or tested on EC2 for the first time.
    • This is the most flexible pricing option, as it requires no up-front commitment, and the customer has control over when the instance is launched and when it is terminated. 
    • It is the least cost-effective of the three main pricing options per compute hour, but its flexibility allows customers to save by provisioning a variable level of computing for unpredictable workloads.

    Reserved Instance

    • Reserved Instances provide customers with a significant discount (up to 75%) compared to On-Demand instance pricing. 
    • For applications that have steady-state or predictable usage, require reserved capacity or can commit to using EC2 for a 1 or 3 year period, Reserved Instances can provide significant savings compared to using On-Demand instances. 
    • The Reserved Instance pricing option enables customers to make capacity reservations for predictable workloads. By using Reserved Instances for these workloads, customers can save up to 75 percent over the on-demand hourly rate. 

    An additional benefit is that capacity in the AWS data centers is reserved for that customer. Two factors determine the cost of the reservation: the term commitment (1 or 3 years) and the payment option (All Upfront, Partial Upfront, or No Upfront); the discount is greater the more the customer pays upfront.
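    The trade-off between these options comes down to simple arithmetic. As a sketch with made-up prices (not actual AWS rates), the effective hourly cost of a Partial Upfront reservation can be compared to an On-Demand rate like this:

```python
# Illustrative arithmetic only: spread a reservation's upfront payment plus
# its hourly charge over the whole term, then compare to On-Demand.
# All dollar figures below are hypothetical, not AWS prices.

HOURS_PER_YEAR = 8760

def effective_hourly(upfront: float, hourly: float, years: int = 1) -> float:
    """Total cost of the reservation over its term, expressed per hour."""
    hours = HOURS_PER_YEAR * years
    return (upfront + hourly * hours) / hours

on_demand = 0.10                                   # hypothetical $/hour
reserved = effective_hourly(upfront=300.0, hourly=0.04, years=1)

print(round(reserved, 4))                          # 0.0742
print(f"savings: {(1 - reserved / on_demand):.0%}")  # savings: 26%
```

    The same function shows why longer terms and larger upfront payments lower the effective rate: the fixed cost is divided over more hours.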

    Spot Instance

    • Amazon EC2 Spot instances allow users to bid on spare Amazon EC2 computing capacity for up to 90% off the On-Demand price. 
      • Spot instances are recommended for applications that have flexible start and end times, applications that are only feasible at very low compute prices or users with urgent computing needs for large amounts of additional capacity.
    • For workloads that are not time-critical and are tolerant of interruption, Spot Instances offer the greatest discount. 
    • With Spot Instances, customers specify the price they are willing to pay for a certain instance type. When the customer’s bid price is above the current Spot price, the customer will receive the requested instance(s). 
    • These instances will operate like all other Amazon EC2 instances. The instances will run until: 
      • The customer terminates them. 
      • The Spot price goes above the customer’s bid price. 
      • There is not enough unused capacity to meet the demand for Spot Instances.

    EC2 Dedicated Host

    An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to a single customer’s use.

    Dedicated Hosts allow AWS customers to use eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, so that they get the flexibility and cost effectiveness of using their own licenses, but with the resiliency, simplicity, and elasticity of AWS.

    Dedicated Hosts support customers’ existing per-socket, per-core, or per-VM software licenses, including Windows Server, SQL Server, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, and other software licenses that are bound to VMs, sockets, or physical cores, subject to the license terms. This helps AWS customers save money by leveraging their existing investments. 

    How to create an EC2 Instance?

    1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
    2. Choose Launch Instance.
    3. Choose an Amazon Machine Image (AMI), find an Amazon Linux AMI at the top of the list and choose Select.
    4. Choose an Instance Type, choose Next: Configure Instance Details.
    5. Configure Instance Details, provide the following information:
      • For Network, choose the entry for the same VPC that you noted when you created your EFS file system in Step 1: Create Your Amazon EFS File System.
      • For Subnet, choose a default subnet in any Availability Zone.
      • For File systems, make sure that the EFS file system that you created in Step 1: Create Your Amazon EFS File System is selected. The path shown next to the file system ID is the mount point that the EC2 instance will use, which you can change. Choose Add to user data to mount the file system when the EC2 instance is launched.
      • Under Advanced Details, confirm that the user data is present in User data.
    6. Choose Next: Add Storage.
    7. Choose Next: Add Tags.
    8. Name your instance and choose Next: Configure Security Group.
    9. Configure Security Group: set Assign a security group to Select an existing security group. Choose the default security group to make sure that it can access your EFS file system. You can’t access your EC2 instance by Secure Shell (SSH) using this security group. SSH access isn’t required for this exercise. To add SSH access later, you can edit the default security group and add a rule to allow SSH, or you can create a new security group that allows SSH. You can use the following settings to add SSH access:
      • Type: SSH
      • Protocol: TCP
      • Port Range: 22
      • Source: Anywhere 0.0.0.0/0

    10. Choose Review and Launch.

    11. Choose Launch.

    12. Select the check box for the key pair that you created, and then choose Launch Instances.

    13. In the Amazon EC2 console, select the instance, and then choose Connect.

    14. In the Connect To Your Instance dialog box, choose Get Password (it will take a few minutes after the instance is launched before the password is available).

    15. Choose Browse and navigate to the private key file you created when you launched the instance. Select the file and choose Open to copy the entire contents of the file into the Contents field.

    16. Choose Decrypt Password. The console displays the default administrator password for the instance in the Connect To Your Instance dialog box, replacing the link to Get Password shown previously with the actual password.

    17. Record the default administrator password, or copy it to the clipboard. You need this password to connect to the instance.

    18. Choose Download Remote Desktop File. Your browser prompts you to either open or save the .rdp file.

    19. You may get a warning that the publisher of the remote connection is unknown. You can continue to connect to your instance.

    20. When prompted, log in to the instance using the administrator account for the operating system. Enter the password that you recorded or copied previously.

    21. If prompted to verify the identity of the remote computer, verify the certificate or simply choose OK.

    22. Choose Yes in the Remote Desktop Connection window to connect to your instance.


    Identity Access Management

    • What is identity access management?

      Identity Access Management (IAM) is one of the most widely used AWS services. Amazon Web Services (AWS) offers a high level of data protection when compared to an on-premises environment, at a lower cost. IAM enables secure control of access to AWS resources and services. Customers can create and manage AWS users and groups, and apply permissions to allow or deny their access to AWS resources. In simple terms, IAM provides the infrastructure necessary to control authentication and authorization for a customer’s account.

      Resource policies allow customers to granularly control who is able to access a specific resource and how they are able to use it across the entire cloud environment. With one click in the IAM console, customers can enable IAM Access Analyzer across their account to continuously analyze permissions granted using policies associated with their Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles, and AWS Lambda functions.

      IAM Access Analyzer continuously monitors policies for changes, meaning AWS customers no longer need to rely on intermittent manual checks in order to catch issues as policies are added or updated. Using IAM Access Analyzer, they can proactively address any resource policies that violate their security and governance best practices around resource sharing and protect their resources from unintended access. IAM Access Analyzer delivers comprehensive, detailed findings through the AWS IAM, Amazon S3, and AWS Security Hub consoles and also through its APIs. Findings can also be exported as a report for auditing purposes. IAM Access Analyzer findings provide definitive answers about who has public and cross-account access to AWS resources from outside an account.

      AWS Identity Access Management Capabilities

      AWS Identity and Access Management (IAM) enables customers to securely control access to AWS services and resources for their users.

      Using IAM, customers can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

       IAM makes it easy to provide multiple users secure access to AWS resources.

    • What is Attribute-based access control (ABAC)?

      Attribute-based access control (ABAC) is an authorization strategy that defines permissions based on attributes. In AWS, these attributes are called tags. Tags can be attached to IAM principals (users or roles) and to AWS resources. You can create a single ABAC policy or small set of policies for your IAM principals. These ABAC policies can be designed to allow operations when the principal’s tag matches the resource tag. ABAC is helpful in environments that are growing rapidly and helps with situations where policy management becomes cumbersome.

      For example, you can create three roles with the access-project tag key. Set the tag value of the first role to Heart, the second to Sun, and the third to Lightning. You can then use a single policy that allows access when the role and the resource are tagged with the same value for access-project. For a detailed tutorial that demonstrates how to use ABAC in AWS, see IAM Tutorial: Define permissions to access AWS resources based on tags.
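      A minimal sketch of such an ABAC policy, using the access-project tag from the example; the ec2:StartInstances action is chosen only for illustration. The policy allows the action when the resource’s access-project tag matches the calling principal’s access-project tag:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWhenProjectTagsMatch",
      "Effect": "Allow",
      "Action": "ec2:StartInstances",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}"
        }
      }
    }
  ]
}
```

      Because the condition compares the two tags rather than naming specific resources, the same policy keeps working as new Heart, Sun, or Lightning resources are created.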

    • What are IAM features?

      Multi-Factor Authentication (MFA):- Customers can add two-factor authentication to their account and to individual users for extra security. With MFA, customers or their users must provide not only a password or access key to work with an account, but also a code from a specially configured device.

      AWS Identity and Access Management (IAM) lets customers manage several types of long-term security credentials for IAM users, including the following:

      • Passwords:- Used to sign in to secure AWS pages, such as the AWS Management Console and the AWS Discussion Forums.
      • Access keys:- Used to make programmatic calls to AWS from the AWS APIs, AWS CLI, AWS SDKs, or AWS Tools for Windows PowerShell.
      • Amazon CloudFront key pairs:- Used for CloudFront to create signed URLs.
      • SSH public keys:- Used to authenticate to AWS CodeCommit repositories.
      • X.509 certificates:- Used to make secure SOAP-protocol requests to some AWS services.

      Manage federated users and their permissions :- Customers can enable identity federation to allow existing identities (users, groups, and roles) in their enterprise to access the AWS Management Console, call AWS APIs, and access resources, without the need to create an IAM user for each identity.

      • Access and Federation :– Users can grant other people permission to administer and use resources in their AWS account without having to share their password or access keys.

      AWS offers multiple options for federating customer identities in AWS, one of them being AWS Identity and Access Management (IAM), which enables users to sign in to their AWS accounts with their existing credentials.

      Manage IAM users:- AWS clients can grant other people permission to administer and use resources in their AWS account without having to share their password or access key. They can also create users in IAM, assign them individual security credentials (such as access keys, passwords, and multi-factor authentication devices), or request temporary security credentials to provide users access to AWS services and resources. They can manage permissions in order to control which operations a user can perform. IAM users can be:

      • Privileged administrators who need console access to manage your AWS resources.
      • End users who need access to content in AWS.
      • Systems that need privileges to programmatically access your data in AWS.

      Securing Application Access:- AWS Identity and Access Management (IAM) helps customers control access and permissions to their AWS services and resources, including compute instances and storage buckets. They can also use IAM features to securely give applications that run on EC2 instances the credentials that they need in order to access other AWS resources, like

      • S3 buckets and RDS
      • DynamoDB databases.

      The AWS Security Token Service (STS):- IAM roles allow customers to delegate access to users or services that normally don’t have access to their organization’s AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to make AWS API calls. In other words, AWS customers don’t have to share long-term credentials or define permissions for each entity that requires access to a resource. AWS STS is a web service that enables customers to request temporary, limited-privilege credentials for IAM users or for users that they authenticate (federated users).

      Customers do not have to distribute or embed long-term AWS security credentials with an application.

      • Customers can provide access to their AWS resources to users without having to define an AWS identity for them.
      • The temporary security credentials have a limited lifetime.
      • After temporary security credentials expire, they cannot be reused.
      • AWS STS is a feature of customers’ AWS accounts offered at no additional charge. However, customers are charged when they access other AWS services using their IAM users’ or AWS STS temporary security credentials.

      Granular Permissions:- Granular permissions enable customers to grant different levels of access to different users for different resources. Customers can give some users full access to AWS services, while limiting others to read-only access, to administering only certain EC2 instances, or to accessing billing information. These services include:

      Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift.

      For other users, customers can allow

      • Read-only access to just some S3 buckets,
      • Permission to administer just some EC2 instances, or
      • Access to customer billing information but nothing else.
      • IAM also enables customers to add specific conditions to control how a user can use AWS, such as time of day, the user’s originating IP address, whether they are using SSL, or whether they have authenticated with a multi-factor authentication device.
    • What are the important components of IAM?

      Principal

      An entity in AWS that can perform actions and access resources. A principal can be an AWS account root user, an IAM user, or a role. Permissions to access a resource can be granted in one of two ways: through identity-based policies attached to the principal, or through resource-based policies attached to the resource.

      Use the Principal element in a policy to specify the principal that is allowed or denied access to a resource. AWS customers cannot use the Principal element in an IAM identity-based policy. They can use it in the trust policies for IAM roles and in resource-based policies. Resource-based policies are policies that they embed directly in an AWS resource. For example, they can embed policies in an Amazon S3 bucket or an AWS KMS customer master key (CMK).

      AWS customers can specify any of the following principals in a policy:

      • AWS account and root user
      • IAM users
      • Federated users (using web identity or SAML federation)
      • IAM roles
      • Assumed-role sessions
      • AWS services
      • Anonymous users (not recommended)

      Use the Principal element in these ways:

      • In IAM roles, use the Principal element in the role’s trust policy to specify who can assume the role. For cross-account access, the customer must specify the 12-digit identifier of the trusted account.
         
      • In resource-based policies, use the Principal element to specify the accounts or users who are allowed to access the resource.
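      As an illustration, a role trust policy that uses the Principal element to let a hypothetical trusted account, identified here by a placeholder 12-digit account ID, assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

      Specifying the account root as the principal delegates to that account’s own IAM policies the decision of which of its users may actually assume the role.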

      Request

      A request is what a principal sends to AWS in order to use the AWS Management Console, the AWS API, or the AWS CLI. The request includes:

      • Actions or operations – The actions or operations that the principal wants to perform.
      • Resources – The AWS resource object upon which the actions or operations are performed.
      • Principal – The person or application that used an entity (user or role) to send the request. Information about the principal includes the policies that are associated with the entity that the principal used to sign in.
      • Environment data – Information about the IP address, user agent, SSL enabled status, or the time of day.
      • Resource data – Data related to the resource that is being requested. This can include information such as a DynamoDB table name or a tag on an Amazon EC2 instance.

      AWS gathers the Request information into a request context, which is used to evaluate and authorize the request.

      Authorization

      Authorization is the process of specifying exactly what actions a principal can and cannot perform on AWS resources. Authorization happens after IAM has authenticated the principal; IAM must then manage the access of that principal to protect the client’s AWS infrastructure. Authorization is handled in IAM by defining specific privileges in policies and associating those policies with principals.

      A policy is a JSON document that fully defines a set of permissions to access and manipulate AWS resources. JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write and it is also easy for machines to parse and generate. Policy documents contain one or more permissions, with each permission defining:

      • Effect:– Allow or Deny.
      • Service:– Most AWS Cloud services support granting access through IAM, including IAM itself.
      • Resource:– The resource value specifies the specific AWS infrastructure for which this permission applies.
      • Action:– Action value specifies the subset of actions within a service that the permission allows or denies.
      • Condition:–The condition value optionally defines one or more additional restrictions that limit the actions allowed by the permission.
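      For illustration, a policy document that combines these elements; the bucket name and IP range are placeholders, and the service (S3) is implied by the prefix of each action:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

      Here the Effect is Allow, the Action names the permitted subset of S3 operations, the Resource restricts them to one bucket and its objects, and the Condition further limits access to requests originating from the given IP range.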

      Resources

      After AWS approves the operations in a customer’s request, those operations can be performed on the related resources within the account. A resource is an object that exists within a service. Examples include an Amazon EC2 instance, an IAM user, and an Amazon S3 bucket. The service defines a set of actions that can be performed on each resource.

      AUTHENTICATION

      A principal must be authenticated (signed in to AWS) using their credentials to send a request to AWS.

      To authenticate from the console as a root user, customers need to sign in with their email address and password.

      • To authenticate as an IAM user, customers provide their account ID or alias, and then their user name and password.
      • A principal can be authenticated in three ways:
        • User name and password.
        • Access key – a combination of an access key ID (20 characters) and a secret access key (40 characters).
        • Access key/session token – when a process operates under an assumed role, the temporary security token provides an access key and session token for authentication.
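      The stated key lengths can be checked against AWS's own published documentation examples (these are well-known fake values, never valid credentials):

```python
# Illustrative check of the stated credential formats. The values below are
# AWS's public documentation placeholders, not real credentials.
access_key_id = "AKIAIOSFODNN7EXAMPLE"                          # 20 characters
secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"  # 40 characters

print(len(access_key_id), len(secret_access_key))  # 20 40
```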

      OPERATIONS

      After the request has been authenticated and authorized, AWS approves the actions or operations in customers’ request. Operations are defined by a service, and include things that the customer can do to a resource, such as viewing, creating, editing, and deleting that resource. IAM supports approximately 40 actions for a user resource, including the following actions:

      • CreateUser
      • DeleteUser
      • GetUser
      • UpdateUser

      To allow a principal to perform an operation, customers must include the necessary actions in a policy that applies to the principal or the affected resource.

      Operations (actions) are defined by a service and include things such as viewing, creating, editing, and deleting a resource. To be granted an operation, a principal's (root user, IAM user, or role) request must pass both authentication and authorization.
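      A simplified sketch of how a policy-based authorization decision works: requests are denied by default, an explicit Allow grants access, and an explicit Deny always overrides an Allow. This is an illustrative model, not the real IAM evaluation engine, and the ARNs are hypothetical:

```python
def is_allowed(statements, action, resource):
    """Default deny; explicit Deny overrides any Allow."""
    decision = False
    for stmt in statements:
        matches = action in stmt["Action"] and resource in stmt["Resource"]
        if not matches:
            continue
        if stmt["Effect"] == "Deny":
            return False          # explicit deny always wins
        if stmt["Effect"] == "Allow":
            decision = True       # keep scanning for a possible deny
    return decision

# Hypothetical policy statements for a user resource.
statements = [
    {"Effect": "Allow", "Action": ["iam:GetUser", "iam:UpdateUser"],
     "Resource": ["arn:aws:iam::123456789012:user/alice"]},
    {"Effect": "Deny", "Action": ["iam:DeleteUser"],
     "Resource": ["arn:aws:iam::123456789012:user/alice"]},
]

user_arn = "arn:aws:iam::123456789012:user/alice"
print(is_allowed(statements, "iam:GetUser", user_arn))     # True
print(is_allowed(statements, "iam:DeleteUser", user_arn))  # False (explicit deny)
print(is_allowed(statements, "iam:CreateUser", user_arn))  # False (default deny)
```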

    • What is IAM User in AWS?

      IAM user

      An IAM user is an entity that customers create in AWS. The IAM user represents the person or service who uses it to interact with AWS. A primary use for IAM users is to give people the ability to sign in to the AWS Management Console for interactive tasks and to make programmatic requests to AWS services using the API or CLI. A user in AWS consists of a name, a password to sign in to the AWS Management Console, and up to two access keys that can be used with the API or CLI. IAM user accounts are accounts that customers can create for the individual people and applications that need access to AWS services.

      Root users can create IAM users and assign them individual security credentials such as access keys, passwords, and multi-factor authentication devices, or request temporary security credentials to give users access to AWS services and resources.

      • IAM user represents the person or service who uses the IAM user to interact with AWS.
      • A primary use for IAM users is to give people the ability to sign in to the AWS Management Console for interactive tasks and to make programmatic requests to AWS services using the API or CLI.
      • Root users can create IAM users, attach group level policies or user level policies and share these IAM accounts with other entities.
        • Group level and user level policies restrict and authorize individual IAM users to AWS services under Root user account.
      • IAM users are individuals who have been granted access to an AWS account. Each IAM user has three main components:
        • A user name.
        • A password.
        • Permissions to access various resources.
      • Customers have persistent identities set up through the IAM service to represent individual people or applications.

      Power user

      The description of power user access given by AWS is “Provides full access to AWS services and resources, but does not allow management of Users and groups.” The power to manage users is the highest-privilege operation in AWS, so it is granted only through the administrative access policy.

      • Power users are just below the Root user and have all the privileges the Root user has with the exception of the ability to manage the IAM users.

      Roles and Temporary Security Tokens

      An AWS IAM role is similar to a user: it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. Roles can also be used to delegate access, granting users, applications, or services access to AWS resources.

      • Roles are used to grant specific privileges to specific entities for a set duration of time. These entities can be authenticated by AWS.
        • AWS provides these entities with a temporary security token from the AWS Security Token Service (STS), whose lifespan ranges from 5 minutes to 36 hours.
        • Customers can create roles in IAM and manage permissions to control which operations can be performed by the entity.
      • Customers can also define which entity is allowed to assume the role. In addition, they can use service-linked roles to delegate permissions to AWS services that create and manage AWS resources on their behalf.
      • Granting permissions to users from other AWS accounts, whether or not customers control those accounts, is known as cross-account access.
      • IAM users can temporarily assume a role to take on permissions for a specific task.
        • Temporary credentials are primarily used with IAM roles and automatically expire.
      • A role can be assigned to a federated user who signs in using an external identity provider.
      • IAM roles can be used for granting applications running on EC2 instances permissions to AWS API requests using instance profiles.
        • Only one role can be assigned to an EC2 instance at a time.
        • Using IAM roles for Amazon EC2 removes the need to store AWS credentials in a configuration file.
      • IAM Role Delegation has two policies:
        • Permissions policy – grants the user of the role the required permissions on a resource.
        • Trust policy – specifies the trusted accounts that are allowed to assume the role.
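      The two policy documents involved in role delegation can be sketched side by side. Both are assumptions for illustration: the account ID and bucket name are hypothetical placeholders:

```python
import json

# Trust policy: which principals are allowed to assume the role.
# The account ID 111122223333 is a hypothetical placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: what whoever assumes the role may do on which resource.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Keeping the two documents separate is what enables cross-account access: the trust policy names the foreign account, while the permissions policy stays scoped to the local resources.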

      An IAM role is very similar to a user, in that it is an identity with permission policies that determine what the identity can and cannot do in AWS. However, a role does not have any credentials (password or access keys) associated with it. Instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. An IAM user can assume a role to temporarily take on different permissions for a specific task. A role can be assigned to a federated user who signs in by using an external identity provider instead of IAM. AWS uses details passed by the identity provider to determine which role is mapped to the federated user.

      Groups

      An IAM group is a collection of IAM users. Groups let root users specify permissions for multiple users at once, which makes those permissions easier to manage. Any user in a group automatically has the permissions that are assigned to the group.

      • A group is not an identity and cannot be identified as a principal in an IAM policy.
      • Groups are collections of users and have policies attached to them (admin, developers, human resources, and so on).
      • A group can contain many users, and a user can belong to multiple groups.
      • Groups can’t be nested; they can contain only users, not other groups.
      • Use the principle of least privilege when assigning permissions.
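      The inheritance rule can be modeled simply: a user's effective permissions are the union of the policies attached to every group the user belongs to. Group names, user names, and actions below are hypothetical:

```python
# Illustrative model of group-based permission inheritance.
# All names and actions are hypothetical examples.
group_permissions = {
    "developers": {"ec2:DescribeInstances", "s3:GetObject"},
    "admins": {"iam:CreateUser", "iam:DeleteUser"},
}

user_groups = {
    "alice": ["developers", "admins"],
    "bob": ["developers"],
}

def effective_permissions(user):
    """Union of the permissions of every group the user belongs to."""
    perms = set()
    for group in user_groups.get(user, []):
        perms |= group_permissions[group]
    return perms

print(sorted(effective_permissions("bob")))   # only the developer actions
```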

      The “identity” aspect of AWS Identity and Access Management (IAM) helps customers with the question “Who is that user?”, often referred to as authentication. Instead of sharing their root user credentials with others, they can create individual IAM users within their account that correspond to users in their organization. IAM users are not separate accounts; they are users within customers’ accounts. Each user can have its own password for access to the AWS Management Console.

      Using the following elements, IAM provides the infrastructure necessary to control authentication and authorization for customers’ accounts. They are Principal, Request, Authentication, Authorization, Actions (Operations), and Resource.

      Principal

      A principal is an IAM entity or application that is allowed to interact with AWS resources, meaning it can make a request for an action or operation on an AWS resource. A principal can be permanent or temporary, and it can represent a human or an application. The first principal in a new account is the AWS account root user; as a best practice, an administrative IAM user should be created and used in its place, and can be allowed to assume roles for particular services.

      • There are three types of principals: root users, IAM users, and roles/temporary security tokens.

      Root User

      The user name and password customers use when they first create an AWS account belong to the root user account. This account holds one important right that no account created under IAM will have: the right to delete the entire AWS account, including all storage, all EC2 instances, containers, and everything else.

      • The account root user credentials are the email address used to create an account and a password. The root account has full administrative permissions and it cannot be restricted.
      • AWS recommends that customers not use the root user for their everyday tasks.
      • Best practice for root accounts:
        • Don’t use the root user credentials.
        • Don’t share the root user credentials.
        • Create an IAM user and assign administrative permissions as required.
        • Enable MFA.
    • What Is Amazon Cognito?

      Amazon Cognito provides authentication, authorization, and user management for customers’ web and mobile apps. Users can sign in directly with a user name and password, or through a third party such as Facebook, Amazon, Google, or Apple.

      The two main components of Amazon Cognito are user pools and identity pools. User pools are user directories that provide sign-up and sign-in options for app users. Identity pools enable customers to grant their users access to other AWS services. Identity pools and user pools can be used separately or together.

      User pools

      A user pool is a user directory in Amazon Cognito. With a user pool, users can sign in to a web or mobile app through Amazon Cognito, or federate through a third-party identity provider (IdP). The user pool manages the overhead of handling the tokens returned from social sign-in through Facebook, Google, Amazon, and Apple, and from OpenID Connect (OIDC) and SAML IdPs. Whether users sign in directly or through a third party, all members of the user pool have a directory profile that can be accessed through an SDK.

      User pools provide:

      • Sign-up and sign-in services.
      • A built-in, customizable web UI to sign in users.
      • Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple, as well as sign-in through SAML and OIDC identity providers.
      • User directory management and user profiles.
      • Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.
      • Customized workflows and user migration through AWS Lambda triggers.

      Identity pools

      With an identity pool, users can obtain temporary AWS credentials to access AWS services such as Amazon S3 and DynamoDB. Identity pools support anonymous guest users, as well as the following identity providers for authenticating users:

      • Amazon Cognito user pools
      • Social sign-in with Facebook, Google, Login with Amazon, and Sign in with Apple
      • OpenID Connect (OIDC) providers
      • SAML identity providers
      • Developer authenticated identities

      To save user profile information, an identity pool must be integrated with a user pool.


    • What is Amazon Fargate?

      AWS Fargate is a technology that AWS customers can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, they no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale their clusters, or optimize cluster packing.

      When customers run their Amazon ECS tasks and services with the Fargate launch type or a Fargate capacity provider, they package their application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task.
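      The CPU/memory specification described above can be sketched as a task definition. This is an assumption-laden illustration following the general shape of the ECS RegisterTaskDefinition API; the family name, image, role ARN, and sizes are hypothetical placeholders:

```python
# A minimal Fargate task definition sketch. Family, image, account ID, and
# sizes are hypothetical; Fargate only accepts certain CPU/memory pairings.
task_definition = {
    "family": "example-web",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",           # Fargate tasks require awsvpc networking
    "cpu": "256",                      # 0.25 vCPU
    "memory": "512",                   # 512 MiB, a valid pairing for 0.25 vCPU
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [{
        "name": "web",
        "image": "public.ecr.aws/nginx/nginx:latest",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
}

print(task_definition["cpu"], task_definition["memory"])
```

Note there is no instance type anywhere in the definition: the CPU and memory values are the whole capacity request, which is exactly the point of Fargate.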

      Benefits of Amazon Fargate

      • With Fargate, customers can focus on building and operating their applications, whether they run on ECS or EKS. They interact with and pay only for their containers, and avoid the operational overhead of scaling, patching, securing, and managing servers. Fargate ensures that the infrastructure customers’ containers run on is always up to date with the required patches.
      • Fargate launches and scales the compute to closely match the resource requirements customers specify for the container. With Fargate, there is no over-provisioning or paying for additional servers. Customers also get Spot and Compute Savings Plan pricing options with Fargate, just as with Amazon EC2 instances. Compared to On-Demand prices, Fargate Spot provides up to a 70% discount for interrupt-tolerant applications, and Compute Savings Plans offer up to a 50% discount on committed spend for persistent workloads.
      • Individual ECS tasks or EKS pods each run in their own dedicated kernel runtime environment and do not share CPU, memory, storage, or network resources with other tasks and pods. This ensures workload isolation and improved security for each task or pod.
      • With Fargate, customers get out-of-the-box observability through built-in integrations with other AWS services, including Amazon CloudWatch Container Insights. Fargate also lets them gather metrics and logs for monitoring their applications through an extensive selection of third-party tools with open interfaces.
    • What are AWS IAM best practices?

      Lock away the AWS root user access keys:- The access key for the AWS account root user gives full access to all resources for all AWS services, including billing information. It is important not to share the AWS account root user password or access keys with anyone.

      Use groups to assign permissions to IAM users:- Instead of defining permissions for individual IAM users, it is usually more convenient to create groups that relate to job functions (administrators, developers, accounting, etc.). Next, define the relevant permissions for each group. Finally, assign IAM users to those groups. All the users in an IAM group inherit the permissions assigned to the group, so changes for everyone in a group can be made in just one place. As people move around in a company, simply change which IAM group their IAM user belongs to.

      Use Access Levels to Review IAM Permissions:- To improve the security of customers’ AWS accounts, they should regularly review and monitor each of their IAM policies. Make sure that the policies grant the least privilege needed to perform only the necessary actions.

      • When AWS customers review a policy, they can view the policy summary, which includes a summary of the access level for each service within that policy. AWS categorizes each service action into one of five access levels (List, Read, Write, Permissions management, or Tagging) based on what each action does.
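      The five access levels can be approximated from an action's name alone. AWS derives the real classification from what each action actually does; the prefix heuristic below is only a rough illustration:

```python
# Rough illustration of the five IAM access levels. This prefix heuristic is
# an approximation for demonstration, not AWS's actual classification logic.
def access_level(action):
    name = action.split(":", 1)[1]          # strip the service prefix
    if "Policy" in name or "Permission" in name:
        return "Permissions management"
    if name.startswith("List"):
        return "List"
    if name.startswith(("Get", "Describe")):
        return "Read"
    if name.startswith(("Tag", "Untag")):
        return "Tagging"
    return "Write"                          # Create/Delete/Update and the rest

print(access_level("iam:ListUsers"))    # List
print(access_level("s3:GetObject"))     # Read
print(access_level("iam:CreateUser"))   # Write
print(access_level("iam:CreatePolicy")) # Permissions management
```

Grouping actions this way is what makes a policy summary reviewable at a glance: a policy full of Read/List actions is far less risky than one containing Permissions management actions.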

      Use Roles to Delegate Permissions:- Don’t share security credentials between accounts to allow users from another AWS account to access resources in their AWS account. Instead, use IAM roles. Customers can define a role that specifies what permissions the IAM users in the other account are allowed, and designate which AWS accounts have the IAM users that are allowed to assume the role.

      Rotate Credentials Regularly:- It is important to change root passwords and access keys regularly, and to make sure that all IAM users in the account do as well. That way, if a password or access key is compromised without the principal’s knowledge, the window during which the credentials can be used to access resources is limited. Customers can apply a password policy to their account to require all IAM users to rotate their passwords, and choose how often they must do so.

      Monitor Activity in the AWS Account:- By using logging features in AWS, customers can determine the actions users have taken in their account and the resources that were used. The log files show the time and date of actions, the source IP for an action, which actions failed due to inadequate permissions, and more.

      Create individual IAM users:- It is important not to use AWS account root user credentials to access AWS. Instead, create individual users for anyone who needs access to the AWS account.

      Grant Least Privilege:- When creating IAM policies, it is important to follow the standard security advice of granting least privilege, or granting only the permissions required to perform a task. Determine what users (and roles) need to do and then craft policies that allow them to perform only those tasks.

      • Start with a minimum set of permissions and grant additional permissions as necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them later.

      Configure a Strong Password Policy for the Users:- When letting users change their own passwords, require them to create strong passwords and to rotate them periodically. On the Account Settings page of the IAM console, customers can create a password policy for their account. The password policy can define requirements such as minimum length, whether non-alphabetic characters are required, how frequently passwords must be rotated, and so on.
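      A password policy of this kind is straightforward to express as code. This is a minimal sketch; the specific requirements below are illustrative assumptions, not AWS's default policy:

```python
import re

# Illustrative password policy. The thresholds are hypothetical examples,
# not the defaults of the IAM account password policy.
POLICY = {"min_length": 12, "require_digit": True, "require_symbol": True}

def meets_policy(password):
    """Check a candidate password against the policy above."""
    if len(password) < POLICY["min_length"]:
        return False
    if POLICY["require_digit"] and not re.search(r"\d", password):
        return False
    if POLICY["require_symbol"] and not re.search(r"[^A-Za-z0-9]", password):
        return False
    return True

print(meets_policy("short1!"))            # False: below the minimum length
print(meets_policy("longenoughpass1!"))   # True: length, digit, and symbol
```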

      Enable MFA for privileged users:- For extra security, it is recommended that customers require multi-factor authentication (MFA) for all users in their account. With MFA, users have a device that generates a response to an authentication challenge. Both the user’s credentials and the device-generated response are required to complete the sign-in process. If a user’s password or access keys are compromised, the account’s resources are still secure because of the additional authentication requirement.

      Do Not Share Access Keys:- Access keys provide programmatic access to AWS. Do not embed access keys within unencrypted code or share these security credentials between users in an AWS account. For applications that need access to AWS, configure the program to retrieve temporary security credentials using an IAM role. To give individual users programmatic access, create an IAM user with personal access keys.

      Remove Unnecessary Credentials:- Remove IAM user credentials (passwords and access keys) that are not needed. Passwords and access keys that have not been used recently might be good candidates for removal. Customers can find unused passwords or access keys using the console, using the CLI or API, or by downloading the credentials report.
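      The "not used recently" check that the credentials report enables can be sketched as a simple age comparison. The cutoff, dates, and key labels below are hypothetical:

```python
from datetime import datetime, timedelta

# Sketch: flag access keys unused for longer than a cutoff, mirroring the
# kind of review the IAM credentials report supports. All values are
# hypothetical placeholders.
MAX_AGE = timedelta(days=90)
NOW = datetime(2021, 6, 1)

keys = [
    {"id": "key-alpha", "last_used": datetime(2021, 5, 20)},   # recently used
    {"id": "key-beta", "last_used": datetime(2020, 11, 1)},    # long unused
]

stale = [k["id"] for k in keys if NOW - k["last_used"] > MAX_AGE]
print(stale)   # only the long-unused key is flagged for removal
```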
