Since its launch in 2006, Amazon Web Services (AWS) has become the leading provider of on-demand cloud computing services, holding roughly 32% of the global cloud infrastructure market at the end of 2018. Fortune 500 companies and well-known institutions - such as NASA, General Electric, Netflix, IMDb, and Airbnb - use AWS extensively, and the platform continues to grow at a breakneck pace.
AWS is a must-have skill for every developer looking to make it big in the cloud computing sphere. If you're looking for a job as an AWS developer, these commonly asked AWS interview questions will help you get there. Below you will find 50 AWS interview questions divided into two categories: Beginners and Experienced.
There is no better time than right now to learn AWS. Learn AWS with the best AWS tutorials.
AWS Interview Questions For Beginners
1. What is AWS?
Answer: AWS stands for Amazon Web Services. It is a platform that offers users secure cloud services, data storage, computing power, content delivery, and several other associated services.
2. What are the key components of AWS?
Answer: AWS consists of the following components:
- Route 53: DNS Web Service
- Simple E-mail Service: It allows sending email via a RESTful API call or the standard SMTP protocol.
- Identity and Access Management: It offers enhanced security and identity management for your AWS account.
- Simple Storage Service (S3): It is a storage service and one of the most commonly used AWS services.
- Elastic Compute Cloud (EC2): It provides on-demand computing resources for hosting applications, which is especially helpful when workloads are unpredictable.
- Elastic Block Store (EBS): It provides persistent storage volumes that attach to EC2 instances so that data persists beyond the life of a single Amazon EC2 instance.
- CloudWatch: It allows administrators to collect and view metrics for monitoring AWS resources. Furthermore, one can set up alarms to be notified in case of trouble.
3. What are the various AWS products built for offering cloud services?
Answer: There are primarily three types of cloud services for which AWS products are made. Accordingly, they are as follows:
- Computing: AWS offers Auto Scaling, EC2, Lightsail, Elastic Beanstalk, and Lambda for computing.
- Storage: AWS offers S3, Elastic File System, Elastic Block Storage, and Glacier for storage.
- Networking: AWS networking products include VPC, Route53, and Amazon CloudFront.
4. What is Auto Scaling in AWS?
Answer: The Auto Scaling feature of AWS automatically provisions and launches new instances whenever increased demand is detected. It lets the user increase or decrease resource capacity automatically in line with changing demand.
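The scaling decision can be pictured with a small pure-Python sketch (illustrative only, not an AWS API): the 70% and 30% CPU thresholds and the capacity bounds are made-up values.

```python
# Illustrative sketch of a threshold-based scaling decision.
# Thresholds and capacity limits are hypothetical, not AWS defaults.
def desired_capacity(current: int, cpu_percent: float,
                     minimum: int = 1, maximum: int = 10) -> int:
    """Scale out above 70% CPU, scale in below 30%, within [minimum, maximum]."""
    if cpu_percent > 70:
        return min(current + 1, maximum)
    if cpu_percent < 30:
        return max(current - 1, minimum)
    return current

print(desired_capacity(current=4, cpu_percent=85))  # 5 (scale out)
print(desired_capacity(current=4, cpu_percent=20))  # 3 (scale in)
```

Real Auto Scaling policies work the same way conceptually: a CloudWatch metric crosses a threshold, and the group adjusts its desired capacity within configured bounds.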
5. How does the region differ from the availability zone?
Answer: A Region is a separate geographical area, such as US West (N. California) or Asia Pacific (Mumbai). An Availability Zone is an isolated location within a Region; each Region contains multiple Availability Zones, and resources can be replicated across them for fault tolerance.
6. What do you understand by Geo-Targeting in CloudFront?
Answer: With Geo-Targeting, CloudFront enables the creation of customized content according to the needs and demands of specific geographical areas. In this way, businesses can show their personalized content to different audiences in different locations without changing their URLs.
7. Which tools are available in AWS to help you recognize that you are overpaying for AWS?
Answer: Four tools are available in AWS to help you determine whether you are paying more than necessary for AWS. They are as follows.
- AWS Budgets
- Cost Explorer
- Cost allocation tags
- Checking the Top Services table
8. What are the steps involved in CloudFormation?
Answer: CloudFormation involves four steps:
- Step 1: Create a CloudFormation template in YAML or JSON.
- Step 2: Save the code to an S3 bucket so it will serve as the repository.
- Step 3: Use AWS CloudFormation to reference the bucket and create a new stack from the template.
- Step 4: CloudFormation reads the file and determines which services are called, their order, their relationships with other services, and how to provision them.
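As an illustration of Step 1, here is a minimal CloudFormation template expressed as JSON; the logical resource name `MyBucket` is a hypothetical placeholder.

```python
import json

# A minimal CloudFormation template declaring a single S3 bucket.
# "MyBucket" is a made-up logical ID; CloudFormation generates the
# physical bucket name unless one is specified.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyBucket": {"Type": "AWS::S3::Bucket"}
    },
}

print(json.dumps(template, indent=2))
```

Saving this document to an S3 bucket (Step 2) and pointing CloudFormation at it would create a stack containing the bucket.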
9. What is S3 in AWS?
Answer: S3 stands for Simple Storage Service. It allows you to store and retrieve data from anywhere in the world at any time from the web. Payment for this service is based on a "Pay As You Go" model.
10. What is AMI?
Answer: AMI stands for Amazon Machine Image. An AMI is a template containing the information - an operating system, an application server, and applications - required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud. You can launch multiple instances simultaneously from as many different AMIs as you need.
11. What is the relationship between AMI and Instance?
Answer: AMIs are used to launch instances, and multiple instances can be launched from one AMI. The instance type defines the hardware of the host computer, such as its compute and memory capabilities. Once launched, an instance behaves like a traditional host, and you can interact with it as you would with any other computer.
12. Can we send a request to Amazon S3?
Answer: You can send a request to Amazon S3 by using the REST API or the AWS SDK wrapper libraries, which wrap the underlying Amazon S3 REST API.
13. What is EC2?
Answer: EC2 (Amazon Elastic Compute Cloud) is a web service that provides secure and scalable compute capacity in the cloud. For developers, it simplifies web-scale cloud computing. With Amazon EC2's Web service interface, you can configure and obtain capacity with minimal effort. The service lets you take complete control of your computing resources while running on Amazon's proven computing platform.
14. Can buckets be created in AWS accounts?
Answer: Yes, buckets can be created in AWS accounts. By default, up to 100 buckets can be created in each AWS account.
15. Define T2 Instance?
Answer: T2 instances are designed to provide a moderate baseline performance with the capability to burst to a higher level of performance when the workload requires it.
16. What are the different types of Instances?
Answer: There are different types of instances:
- Accelerated Computing Instance.
- Memory-Optimized Instance.
- Storage Optimized Instance.
- Compute Optimized Instance.
- General Purpose Instance.
17. Can we create Elastic IPs in AWS?
Answer: Yes, we can create Elastic IPs in AWS. By default, each AWS account is limited to five Elastic IP addresses per Region.
18. Does Amazon VPC support the property of broadcast or multicast?
Answer: Amazon VPC does not support the broadcast or multicast properties.
19. What is a default storage class in S3?
Answer: In S3, the default storage class is S3 Standard, which is designed for frequently accessed data.
20. What is the full form of VPC? Explain VPC?
Answer: VPC stands for Virtual Private Cloud. Using a VPC, you can customize your network configuration. It acts as a logically isolated section of the AWS cloud. Users of VPCs can define their own IP address range, security groups, subnets, and internet gateways.
21. What are the edge locations in AWS?
Answer: Edge locations are the sites where AWS caches content. When a user requests content, it is served from the nearest edge location, which reduces latency.
22. What are the roles in AWS?
Answer: In AWS, roles are used to grant permissions to entities that are trusted within the account. They function similarly to users but do not require a username and password to interact with other AWS resources.
23. What is Redshift in AWS?
Answer: Redshift is AWS's fast, fully managed data warehouse product, with robust capabilities for running data warehouse services in the cloud.
24. What is a Snowball in AWS?
Answer: Snowball is a data transport option in AWS. It uses secure physical appliances to transfer large amounts of data into and out of AWS, and it also reduces networking costs.
25. Define Subnet in AWS?
Answer: A subnet is a range of IP addresses carved out of a VPC's address block. By default, each VPC can have up to 200 subnets.
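The address arithmetic behind subnetting can be sketched with Python's standard `ipaddress` module; the 10.0.0.0/16 CIDR block below is a hypothetical VPC range.

```python
import ipaddress

# Carve a hypothetical /16 VPC CIDR block into /24 subnets, the same
# arithmetic applied when you define subnets inside a VPC. (Note that
# AWS reserves five addresses in each subnet for its own use.)
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))              # 256
print(subnets[0])                # 10.0.0.0/24
print(subnets[0].num_addresses)  # 256
```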
26. What is SimpleDB in AWS?
Answer: SimpleDB in AWS is a highly available NoSQL data store that offloads the work of administering databases. Data items are stored and queried via web services requests, and Amazon SimpleDB handles the rest.
27. What is SQS in AWS?
Answer: SQS stands for Simple Queue Service. It provides a distributed queueing service that acts as a mediator between two controllers, such as a message producer and a consumer.
28. What is AWS Lambda?
Answer: AWS Lambda is a computing service that allows users to run code in the AWS cloud without maintaining servers.
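A minimal sketch of what such code looks like: Lambda invokes a handler function with an event payload and a runtime context. The greeting logic below is a made-up example.

```python
# The entry point AWS Lambda invokes: handler(event, context).
# 'event' carries the invocation payload; 'context' holds runtime metadata
# (it is unused here, so None stands in for it when testing locally).
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

print(handler({"name": "AWS"}, None))  # {'statusCode': 200, 'body': 'Hello, AWS!'}
```

In a real deployment, Lambda calls this function for you; no server is provisioned or maintained.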
29. What is Amazon ElastiCache?
Answer: ElastiCache is a web service offered by Amazon that facilitates deploying, operating, and scaling an in-memory cache in the cloud.
30. What is Amazon EMR?
Answer: Amazon EMR is a managed cluster platform that simplifies running big data frameworks. Apache Hadoop, Apache Spark, Apache Hive, and various other components are included in Amazon EMR. These open-source frameworks help analyze vast amounts of data for data processing, analytics, and business intelligence workloads.
31. Explain the difference between stopping and terminating an instance.
Answer: In an EC2 instance, both stopping and terminating are States.
- Stopping - When an instance is stopped, it performs a normal shutdown and transitions to the stopped state. All of its Amazon EBS volumes remain attached, and the instance can be started again later. In the stopped state, the instance does not incur additional instance hours.
- Terminating – When an instance is terminated, it performs a normal shutdown and transitions to the terminated state. Unless the deleteOnTermination attribute is set to false, the attached Amazon EBS volumes are deleted. As the instance itself is deleted, it cannot be restarted later.
32. What are the main differences between EC2 and S3?
Answer: Below are the main differences between EC2 and S3.
| EC2 | S3 |
| --- | --- |
| Cloud web service | Data storage system |
| Used for hosting web applications | Used for storing any quantity of data |
| Functions as a huge computing machine | Accessed through a REST interface |
| Can run Linux or Windows and handle PHP, Python, Apache, and various other databases | Uses secure authentication keys such as HMAC-SHA1 |
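The HMAC-SHA1 authentication mentioned above refers to S3's legacy Signature Version 2 scheme (current SDKs use Signature Version 4). Here is a sketch of the signing step using only Python's standard library; the secret key and request details are made up.

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key: str, string_to_sign: str) -> str:
    """Base64-encoded HMAC-SHA1 of the canonical request string (legacy SigV2)."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Canonical string for a hypothetical GET request:
# verb, Content-MD5, Content-Type, Date, and the resource path.
string_to_sign = "GET\n\n\nTue, 27 Mar 2007 19:36:42 +0000\n/example-bucket/photo.jpg"
signature = sign_v2("fake-secret-key", string_to_sign)

print(len(signature))  # 28 characters: a base64-encoded 20-byte SHA-1 digest
```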
AWS Interview Questions For Experienced
33. Which instance type does one need for deploying a 4 node cluster of Hadoop in AWS?
Answer: The c4.8xlarge instance is appropriate for the master node, while i2.large instances suit the slave nodes. Alternatively, you can launch an Amazon EMR cluster that configures the servers automatically.
When using Amazon EMR, you do not need to install a Hadoop cluster or configure the instances manually. Simply dump the data to process into S3; EMR picks it up, processes it, and deposits the results back into S3.
34. How will you use the processor state control feature available on the c4.8xlarge instance?
Answer: The processor state control has two states, namely:
- C State – This represents the sleep (idle) state. The levels range from C0 (active) to C6, with C6 representing the deepest level of sleep.
- P State – This represents the performance state. The frequency varies from P0 (the highest performance) to P15 (the lowest possible frequency).
Processors have multiple cores, and each of them requires thermal headroom to perform at its peak. Thus, the cores need to be kept at an optimal temperature to perform at their best.
By putting a core into the sleep state, the overall temperature of the processor decreases. Thus, other cores can offer better performance. As a result, you can devise a strategy to gain a performance boost from the processor by putting some cores to sleep and others in a performance state.
On instances such as the c4.8xlarge, you can customize the processor's C and P states to match the workload.
35. List some of the best practices for enhancing security in Amazon EC2.
Answer: Here are some of the best practices for enhancing security in Amazon EC2:
- You should only allow trusted networks or hosts to access ports on your instance.
- Manage access to AWS resources with AWS Identity and Access Management (IAM)
- Disable password-based logins for instances launched from AMIs.
- Review rules frequently in security groups.
36. Can you differentiate between a Spot instance and an On-Demand instance?
Answer: Spot instances and On-Demand instances are both pricing models. Spot instances let customers purchase spare compute capacity with no upfront commitment, usually at rates lower than On-Demand prices.
The price of a Spot instance is called the spot price, and it fluctuates with the supply of and demand for spot capacity. If the spot price rises above the customer's specified maximum price, the EC2 instance is automatically shut down.
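The interruption rule described above can be expressed as a one-line check (illustrative only; the prices are made up):

```python
# Spot-bidding rule: the instance is reclaimed once the fluctuating
# spot price exceeds the customer's specified maximum price.
def spot_interrupted(spot_price: float, max_price: float) -> bool:
    return spot_price > max_price

print(spot_interrupted(spot_price=0.035, max_price=0.050))  # False (keeps running)
print(spot_interrupted(spot_price=0.065, max_price=0.050))  # True  (interrupted)
```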
37. How will you speed up data transfer in Amazon Snowball?
Answer: In Amazon Snowball, data transfer can be enhanced by:
- Copying from different workstations to the same snowball.
- Batching small files together to reduce the per-file encryption overhead.
- Eliminating unnecessary hops
- Running multiple copies at once.
38. Is it possible to use Amazon S3 with EC2 instances? Please elaborate.
Answer: Yes, you can use Amazon S3 with EC2 instances. Instances with root devices backed by local instance storage can use it. Amazon offers tools to load AMIs into Amazon S3 and move them between Amazon S3 and Amazon EC2 instances.
Amazon S3 gives AWS developers access to the same highly reliable, fast, and scalable data storage infrastructure that Amazon uses to operate its global network of websites and services.
39. What is the difference between Amazon RDS and Amazon DynamoDB?
Answer: Using Amazon RDS, you can manage relational databases. It lets you automate many relational database-related operations, including backing up, patching, and upgrading. Also, it only deals with structured data.
On the other hand, Amazon DynamoDB is a NoSQL database service. It only deals with unstructured data, unlike Amazon RDS.
40. What happens to the backups and DB snapshots if a DB instance is deleted?
Answer: When deleting a DB instance, you can create a final DB snapshot, which you can later use to restore the database. After the instance is deleted, Amazon RDS retains this final snapshot along with all other manually created DB snapshots. However, the automated backups are deleted along with the instance.
41. What AWS services will you choose to collect and process eCommerce data for real-time analysis?
Answer: Since it will be an unstructured data set, DynamoDB will be ideal for collecting eCommerce data. You can use Amazon Redshift to analyze eCommerce data in real-time.
42. Can you explain how elasticity differs from scalability?
Answer: Elasticity refers to a system's ability to handle an increase in workload by adding hardware resources as demand rises, and to release those resources as demand falls.
Scalability, on the other hand, refers to a system's ability to grow its hardware resources to handle increasing demand, either by upgrading hardware specifications (scaling up) or by adding processing nodes (scaling out).
43. How will you load data to Amazon Redshift from different data sources such as Amazon EC2, DynamoDB, and Amazon RDS?
Answer: There are two ways to load data into Amazon Redshift from different data sources, including:
- With AWS Data Pipeline, you can load data from a wide range of AWS data sources in a high-performance, fault-tolerant, and reliable manner. The user can specify a data source, perform required data transformations, and then run a pre-written import script to load data.
- Using the COPY command – Directly load data from Amazon DynamoDB, Amazon EMR, or another SSH-compatible host.
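As a hedged illustration of the COPY path, the snippet below composes a Redshift COPY statement for an S3 source; the table name, bucket, and IAM role ARN are placeholders.

```python
# Compose a Redshift COPY statement for loading CSV data from S3.
# All identifiers below are hypothetical placeholders.
def redshift_copy(table: str, s3_path: str, iam_role: str) -> str:
    return (f"COPY {table} FROM '{s3_path}' "
            f"IAM_ROLE '{iam_role}' FORMAT AS CSV;")

sql = redshift_copy("sales", "s3://example-bucket/sales/",
                    "arn:aws:iam::123456789012:role/RedshiftCopyRole")
print(sql)
```

The generated statement would be executed against the Redshift cluster with any SQL client; the role named in `IAM_ROLE` must be authorized to read the S3 path.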
44. What do you understand by Connection draining?
Answer: Connection draining re-routes traffic away from instances that are being updated or have failed a health check, directing it to other healthy instances. This ELB feature continuously monitors instance health.
45. Imagine a user has created an Auto Scaling group but the group fails to launch a single instance for nearly 24 hours due to some reason. In this case, what will happen to Auto Scaling?
Answer: Auto Scaling will suspend the scaling process. The Auto Scaling feature lets you suspend and resume one or more of the scaling processes belonging to an Auto Scaling group, which is extremely useful when a web application needs to be investigated for a configuration issue or some other reason.
46. What are the ideal cases for using the Classic Load Balancer and the Application Load Balancer?
Answer: Classic Load Balancer is the best option for simple load balancing of traffic across multiple EC2 instances.
Alternatively, the Application Load Balancer is suitable for container-based or microservices architectures, where traffic must be routed to multiple services or load must be distributed across multiple ports on the same Amazon EC2 instance.
47. How will you transfer an existing domain name registration to Amazon Route 53 without disrupting the extant web traffic?
Answer: Below are the steps to transfer an existing domain name registration to Amazon Route 53 without disrupting the extant web traffic.
- List the DNS records for the domain name. Most commonly, it is available as a zone file obtained from the DNS provider.
- Create a hosted zone for storing the DNS records for the domain name using the Route 53 Management Console or the simple web-services interface after receiving the DNS record data. As an optional step, you could update the nameservers associated with the hosted zone for the domain name.
- Follow the transfer process with the registrar with whom you registered the domain name. As soon as the registrar propagates the new name server delegations, DNS queries will start being answered.
48. Can you explain how the AWS Elastic Beanstalk applies updates?
Answer: AWS Elastic Beanstalk prepares a duplicate copy of the original instance and applies the update to the duplicate, routing traffic to it so that a failed update does not take the application down.
If the update fails, AWS Elastic Beanstalk switches traffic back to the original instance, which remains intact.
49. Please explain what happens if an application stops responding to requests in AWS Elastic Beanstalk.
Answer: Even when the underlying infrastructure appears healthy, Beanstalk can detect that the application is not responding on its custom health-check URL. The situation is logged as an environment event, which can then be inspected in detail and acted upon.
AWS Elastic Beanstalk apps have a built-in mechanism for avoiding underlying infrastructure failures. If an Amazon EC2 instance fails, Beanstalk auto-scales a new instance using the Auto Scaling feature.
50. What are Recovery Time Objective and Recovery Point Objective in AWS?
- Recovery Time Objective (RTO): The maximum acceptable delay between the interruption of service and its restoration. In essence, it is how long the service may remain unavailable.
- Recovery Point Objective (RPO): The maximum acceptable time since the last data recovery point. It represents the acceptable amount of data loss between the last recovery point and the interruption of service.
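The two objectives can be made concrete with a small worked example using made-up timestamps:

```python
from datetime import datetime, timedelta

# Worked example with hypothetical timestamps: how RTO and RPO are measured.
last_backup  = datetime(2023, 1, 1, 11, 30)   # last recovery point
outage_start = datetime(2023, 1, 1, 12, 0)    # service interrupted
restored_at  = datetime(2023, 1, 1, 13, 0)    # service restored

actual_downtime  = restored_at - outage_start   # compare against the RTO
actual_data_loss = outage_start - last_backup   # compare against the RPO

print(actual_downtime)   # 1:00:00
print(actual_data_loss)  # 0:30:00
```

If the RTO were 30 minutes, this outage would have breached it; if the RPO were one hour, the 30 minutes of lost data would be within target.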
If you have made it this far, then certainly you are willing to learn more about cloud computing. Here are some more resources related to cloud computing that we think will be useful to you.