Amazon AWS Tutorial

Amazon Web Services (AWS) helps you build and deploy highly scalable, fault-tolerant and secure applications. In this tutorial I will provide an overview of Amazon AWS and some commonly used AWS services. This tutorial is primarily meant for beginners or those with limited experience with AWS.

AWS Overview

When you think of AWS, think of Legos for the cloud. That’s right! Just like Legos can be used to build different shapes, AWS services can be used to build different application deployments. Here are some key capabilities offered by AWS.

  • AWS is built on the cloud computing model, which means it offers you all the fundamental capabilities of the cloud. Let’s look at some of the important ones.
    • On-demand resource provisioning: AWS allows you to provision resources on-demand based on your needs. So, you don’t need to create resources in advance or keep these lying around when not needed.
    • Pay-as-you-go model: You pay for the resource usage and not the underlying infrastructure.
    • Scalability: AWS allows you to scale your resources up or down (a.k.a. scaling or elasticity). You can also automate the scaling (a.k.a. auto scaling), for example based on resource utilization or on a schedule. For instance, if CPU utilization exceeds 75%, add another node; when it drops below 25%, remove one node.
  • AWS offers both – Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). IaaS lets you stand up and manage a basic infrastructure resource, such as a server or a load balancer. PaaS goes one level up in the stack and allows you to stand up and manage a platform on-demand, such as a database.
  • Your AWS journey starts with the creation of an AWS account. All the resources you create from there on belong to that account.
  • AWS offers a FREE tier to try out several AWS services. 
    https://aws.amazon.com/free/
  • AWS is available in multiple geographical regions. These are referred to as AWS Regions. Most AWS services are region-based. That is, the resources managed by that service are in a specific region. For example, when you deploy a server using Amazon’s EC2 service, it is deployed in a specific region. However, certain AWS services are global, which makes it easy to configure them once and use them across all regions.
    • An AWS Region comprises one or more Availability Zones (AZs). The AZs are spread out within the geographic area of the region. The idea is that if one of the AZs in a region goes down, such as due to a natural disaster, the other AZs can continue to provide services in the region. That’s why AZs are an essential aspect of high availability for application deployments and the underlying resources.
  • AWS offers various capabilities to make your deployment more secure. These range from restricting which users have access to which AWS services and which resources they can provision, to network-level controls such as restricting access to a specific port. We will discuss this further in a later section.
  • AWS offers a centralized management console. You log in to the main AWS console and from there you can navigate to the respective AWS service console.
  • AWS provides a rich command line interface (AWS CLI) and SDKs for programmatic access (see the short Python sketch after this list).
  • AWS offers competitive pricing and multiple pricing strategies. That is, in several cases, you may have more than one option to set up your deployment, and each option may have a different cost associated with it. For example, you could leverage EC2 Spot Instances for non-time-sensitive workloads instead of using on-demand EC2 instances.
  • AWS has a feature called Consolidated Billing that helps organizations with multiple AWS accounts manage billing in a consolidated manner. The individual accounts can continue to use AWS features based on their needs; only for billing do they roll up into a consolidated account. Even then, AWS provides capabilities to track resources and allocate costs to the various accounts.
  • AWS is a leader in the cloud computing space and is continuously evolving! It constantly releases new services and adds useful features to existing ones.
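
To make the region and programmatic-access points above concrete, here is a minimal sketch using the Python SDK (boto3). It is only a sketch: it assumes boto3 is installed and AWS credentials are configured locally, and the region name is just an example.

```python
import boto3

# Most service clients are scoped to a region; us-east-1 here is only an example.
ec2 = boto3.client("ec2", region_name="us-east-1")

# List the regions available to this account -- a simple first API call.
regions = ec2.describe_regions()["Regions"]
print([region["RegionName"] for region in regions])
```

The equivalent AWS CLI command is aws ec2 describe-regions.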

Commonly Used AWS Services

Let’s talk about some commonly used AWS services now.

Identity and Access Management (IAM)

IAM is a global service that’s at the heart of any AWS deployment. Right from the point of signing up for an AWS account to adding more users and granting access, IAM plays a key role. Here are some key capabilities offered by IAM.

  • Manage authentication (both the built-in authentication offered by AWS and external authentication, such as a corporate Active Directory integration)
  • Management of Users and Groups
  • Manage access control to AWS services and resources using IAM Policies
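
To make these capabilities a bit more concrete, here is a minimal boto3 sketch that creates a user and attaches an AWS managed policy. The user name is a made-up example; the policy ARN is the AWS managed read-only policy for EC2.

```python
import boto3

iam = boto3.client("iam")  # IAM is a global service

# Create a new IAM user (example user name).
iam.create_user(UserName="deploy-admin")

# Grant the user read-only access to EC2 via an AWS managed policy.
iam.attach_user_policy(
    UserName="deploy-admin",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)
```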

Virtual Private Cloud (VPC)

VPC is a region-based service that provides an isolated infrastructure. Think of this infrastructure as a virtual data center that has its own resources, completely separate from the resources of other VPCs even if those VPCs are in the same AWS account. VPC offers several capabilities that are the backbone of an application deployment, and other AWS services rely upon these.

  • VPC provides a complete network infrastructure (subnets, gateways, etc.).
  • The VPC console lets you manage network resources and configurations (such as, setting up route tables).
  • VPC also provides creation and management of advanced networking resources, such as, load balancers in a self-service manner. This is extremely useful for Architects and deployment admins to try out scenarios and tear down the infrastructure when done.
  • VPC offers multiple options to connect your corporate data center environment to AWS (such as, via a VPN).
  • VPC provides an advanced capability to enable private communication between VPCs, a.k.a. VPC Peering. This makes the deployment more secure and it also boosts performance, as the VPCs do not need to route their traffic via the Internet to talk to each other. An example of VPC Peering would be a VPC that hosts a shared component like a database and another VPC that hosts the application stack. By peering these two VPCs, the application can talk to the database privately and the database port does not need to be exposed publicly.
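
To show what the self-service network setup described above can look like, here is a minimal boto3 sketch that creates a VPC, a subnet and an internet gateway. The CIDR ranges and region are illustrative placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Create an isolated VPC with its own private address range.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Carve out one subnet inside the VPC.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Attach an internet gateway so public subnets can route to the Internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
```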

Elastic Compute Cloud (EC2)

EC2 is a region-based service that provides the compute infrastructure. Following are some key capabilities offered by EC2.

  • Creation and management of servers (a.k.a. EC2 Instances).
    • An EC2 instance has an Instance Type associated, which controls the CPU and memory configuration. Once an instance has been created, you can change the Instance Type to a higher value or a lower value. This is also referred to as vertical scaling.
    • An Amazon Machine Image (AMI) is used to specify the operating system and any further stack to be installed at the time of creation.
    • An EC2 instance uses one or more Elastic Block Store (EBS) volumes as its filesystem.
  • Connecting to an EC2 instance: EC2 supports shell (SSH) access for Linux/UNIX instances and RDP access for Windows instances.
  • Managing virtual firewalls (a.k.a. Security Groups) that control access to the ports on the EC2 instances.
  • Various lifecycle and other management actions, such as, start/stop instance, change security groups, etc.
  • Creation and management of load balancers and underlying configurations. For example, adding one or more EC2 instances to a load balancer.
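
Here is a minimal boto3 sketch of launching an instance along the lines described above. The AMI ID and key pair name are placeholders; substitute values from your own account and region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Launch one small instance. ImageId and KeyName below are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI (operating system image)
    InstanceType="t3.micro",          # controls the CPU/memory configuration
    KeyName="my-key-pair",            # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
)
print("Launched", response["Instances"][0]["InstanceId"])
```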

Relational Database Service (RDS)

RDS allows creation and management of several popular databases in the cloud. A database instance is also referred to as an RDS Instance. As you know now, this is an example of a PaaS offering. Here are some of the key capabilities of RDS.

  • The primary premise of RDS is that AWS takes care of the traditional database setup and maintenance while you use the database purely as a consumer. You don’t need database expertise in your team to set up and use RDS. At the same time, AWS does offer various capabilities that let you customize and fine-tune the database instance to your needs.
  • RDS supports various popular database engines (like MySQL, Postgres, Oracle, etc.). AWS also offers their own database offering – Amazon Aurora, which is a high performance database built for cloud and is compatible with MySQL and Postgres.
  • It also supports multiple versions of the database engines so that you can choose the most compatible one.
  • RDS offers multiple licensing models where applicable (such as, BYOL (Bring-Your-Own-License) and “License Included”).
  • It supports both public and private deployments. In general, you should prefer a private deployment.
  • RDS offers high availability options by deploying an RDS Instance across multiple Availability Zones. That way, if one of the AZs goes down, the database is still available via the other AZ.
  • AWS allows scheduling of RDS backup and maintenance and you can control these via the RDS Console.
  • An RDS Instance can be shared by multiple applications and services for cost saving purpose.
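
As a rough sketch of provisioning a managed database, here is a boto3 call that creates a small MySQL instance with Multi-AZ enabled and no public access. The identifier, credentials and sizes are illustrative placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # example region

# All identifiers and credentials below are placeholders; keep real
# passwords in a secrets store, not in code.
rds.create_db_instance(
    DBInstanceIdentifier="demo-mysql",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,               # storage in GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,                      # standby copy in a second Availability Zone
    PubliclyAccessible=False,          # keep the database private
)
```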

Simple Storage Service (S3)

S3 offers a highly available and cost-effective object storage service. Here are some key capabilities of S3.

  • S3 organizes objects into buckets. A bucket can be public or private.
  • It offers granular access control using bucket policies. For example, granting access to a private bucket to other AWS accounts, granting access to specific IP addresses, etc.
  • S3 provides additional capabilities like encryption, versioning and so on.
  • S3 offers a tiered storage model, which means you can choose more cost-optimal storage depending on your use case. For example, for less frequently accessed data you can use a cheaper storage class than the standard one. S3 also provides lifecycle policies to automatically move objects to a more optimal storage class based on age.
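
Here is a minimal boto3 sketch touching the capabilities above: create a bucket, upload an object, and add a lifecycle rule that moves older objects to a cheaper storage class. The bucket name and file names are placeholders (bucket names must be globally unique).

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # example region
bucket = "my-example-bucket-12345"                # placeholder bucket name

# In regions other than us-east-1, create_bucket also needs a LocationConstraint.
s3.create_bucket(Bucket=bucket)

# Upload a local file as an object under the reports/ prefix.
s3.upload_file("report.csv", bucket, "reports/report.csv")

# Move objects under reports/ to an infrequent-access storage class after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-reports",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```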

In this section we saw just a few of the many AWS services. But even from these you can see the value of AWS services and how they can help you set up a robust and scalable deployment.

Guidelines For Application Architects, Developers and DevOps

This section contains guidelines that Architects, Developers and DevOps team members may find particularly useful. These are high level guidelines to get you started.

Design Guidelines For Your Application Deployment In AWS

  • Evaluate your application design to reduce the application footprint and break it into appropriately scoped services. For example, instead of having components that run background jobs as part of the core application tier itself, explore whether these can be moved to a separate component/service that can be hosted on a separate node. There are several benefits to this.
    • It reduces the footprint of the application node.
    • It provides better decoupling between the application components.
    • Each of these components can then be scaled up/down based on the need. For example, if the background jobs need more capacity, then only that component needs to be scaled.
  • Leverage on-demand provisioning and see if you can use a more optimal and smaller resource configuration. As a general rule of thumb, prefer more, smaller resources over one big monolithic resource. That way, you can use the scaling capabilities of AWS to add resources when needed and take them out when done. This can give you significant cost savings.
  • Think of important application design aspects like multi-tenancy so that each tenant’s data can be segregated.

Keep a Cost Focus

Cost is an important consideration for any cloud deployment. Here are some basic guidelines for cost optimization.

  • Choose as small a resource size as practical.
  • Minimize the time you keep resources around; delete them when they are no longer needed.
  • Shut down resources when they are not in use to further optimize cost (see the sketch after this list).
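
As a sketch of the last point, here is one way to stop idle instances programmatically with boto3, for example from a nightly scheduled job. The Environment=dev tag used to find the instances is an assumption for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Find running instances tagged Environment=dev (example tag) ...
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

# ... and stop them so they stop accruing compute charges.
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
```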

Making Your Deployment More Secure

  • Use IAM policies to grant only the required access.
  • Minimize public facing resources.
  • Use Security Groups to restrict access to ports.
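
Building on the last point, here is a minimal boto3 sketch that adds ingress rules to a Security Group: HTTPS open broadly and SSH restricted to one address range. The group ID and CIDR blocks are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder security group ID
    IpPermissions=[
        # Allow HTTPS from anywhere (typical for a public web tier).
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        },
        # Restrict SSH to a specific corporate range (placeholder CIDR).
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
        },
    ],
)
```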

Making Highly Scalable and Fault Tolerant Deployments

  • Use load balancing capabilities to distribute load and provide high availability. For example, you can place multiple nodes behind a load balancer, which will take care of distributing traffic across them. In the event one or more nodes go down, the other nodes can continue to serve the application.
  • Use the multi-Availability Zone (a.k.a. multi-AZ) capabilities of AWS to distribute your components across multiple AZs. For example, if you have 3 web server nodes in 3 different AZs, even if 2 AZs go down for some reason, your application is still available via the node in the third AZ. Many applications use at least a 2-AZ deployment for these reasons.
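
As a sketch of attaching nodes to a load balancer, here is how instances living in different Availability Zones could be registered with an existing target group using boto3. The target group ARN and instance IDs are placeholders.

```python
import boto3

elb = boto3.client("elbv2", region_name="us-east-1")  # example region

# Register two instances (in different AZs) with an existing target group.
elb.register_targets(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/demo-tg/0123456789abcdef"  # placeholder ARN
    ),
    Targets=[
        {"Id": "i-0aaaaaaaaaaaaaaa1"},  # placeholder instance IDs
        {"Id": "i-0bbbbbbbbbbbbbbb2"},
    ],
)
```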

What Next?

  • If you are interested in enhancing your learning further
    • Take our FREE short course – Cloud Computing & Amazon Web Services (AWS) Overview, which provides a practical overview and step-by-step guide to get started with AWS.
    • For those of you who would like to get into the crux of application design, development and deployment, we have an advanced course – AWS Advanced For Developers, which gets into hands-on details of AWS services and various points covered in this tutorial.
  • You can find more information on AWS and their announcements @
    https://aws.amazon.com/new/

 

Hope you got to learn a bit about AWS. If you have any comments, please feel free to post them below.

Have a good day!
– Nitin

