We have prepared an excellent course on Amazon Web Services (AWS) interview preparation on Udemy.com.
AWS Interview Preparation Course on Udemy.com
Amazon Web Services (AWS) is one of the fastest-growing fields in the technology world. This course is designed to help you achieve your goals in the AWS field. Engineers with AWS knowledge often command higher salaries than candidates with similar qualifications but no AWS experience.
In this course, you will learn how to apply AWS technology in software design and development. I will explain which AWS tools you can use to build a highly scalable, resilient system.
What will I learn in this course?
You will also learn about the latest cloud architectures in this course.
Finally, the biggest benefit of this course is that you will be able to demand a higher salary in your next job interview.
It is good to learn AWS for its theoretical benefits. But if you do not know how to handle interview questions on AWS, you cannot convert your AWS knowledge into a higher salary.
We cover a wide range of topics in this course. We have questions on Simple Storage Service (S3), Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), CloudFront, DynamoDB, CloudWatch, ElastiCache, and Lambda.
From time to time, we add more topics to this course. Our aim is to keep you updated with the latest AWS interview questions.
What are the requirements?
- Basic knowledge of popular AWS products
- Familiarity with software design concepts
What am I going to get from this course?
- Confidently handle AWS Technical Interview Questions
- Learn Best practices of AWS products
- Apply for AWS positions in technology
- Gain deep knowledge of AWS design concepts
- Demand higher salary for AWS jobs
What is the target audience?
- Software Engineer
- Software Architect
- Development Manager
- DevOps Engineer
- QA Engineer
- Anyone applying for AWS Jobs
View the AWS Interview Preparation course on Udemy
Some of the possible connection issues with an EC2 instance are:
- Connection time out
- Permission denied due to host key not found
- Unprotected private key file
- Permission denied due to user key not recognized by server
- No supported authentication method available
- Server refused the key
In AWS Data Pipeline, an activity is an action that is initiated as part of the pipeline.
Some of the activities are:
- Elastic MapReduce (EMR)
- Hive jobs
- Data copies
- SQL queries
- Command-line scripts
In AWS Data Pipeline, we can define a Schedule. The Schedule defines when pipeline activities run and with what frequency.
All schedules have a start date and a frequency.
E.g. a schedule can run every day starting March 1, 2016, at 6am.
Schedules may also have an end date, after which the AWS Data Pipeline service will not execute any activity.
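The schedule described above can be sketched with plain Python datetimes. The helper function and the end date below are our own illustration, not part of the AWS Data Pipeline API:

```python
from datetime import datetime, timedelta

def occurrences(start, end, frequency):
    """Yield run times from start (inclusive) until end (exclusive)."""
    current = start
    while current < end:
        yield current
        current += frequency

# A schedule that runs every day starting Mar 1, 2016, at 6am.
# The end date (Mar 5, hypothetical) stops further runs, just as an
# AWS Data Pipeline schedule stops executing after its end date.
runs = list(occurrences(datetime(2016, 3, 1, 6, 0),
                        datetime(2016, 3, 5, 6, 0),
                        timedelta(days=1)))
print(len(runs))   # 4 daily runs: Mar 1, 2, 3, and 4
print(runs[0])     # 2016-03-01 06:00:00
```

The start date and frequency are mandatory in this sketch, mirroring the rule that all schedules have a start date and a frequency, while the end date is optional.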
Apache Hadoop is the main framework behind Amazon EMR. It is a distributed data processing engine.
Hadoop is an open-source, Java-based software framework. It supports data-intensive distributed applications running on large clusters of commodity hardware.
Hadoop is based on the MapReduce algorithm, in which data is divided into multiple small fragments of work. Each of these tasks can be executed on any node in the cluster.
In AWS EMR, Hadoop runs on hardware provided by the AWS cloud.
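The MapReduce idea of splitting data into small fragments can be illustrated with a toy word count in plain Python. This is a conceptual sketch only, not the Hadoop API; on a real cluster each fragment would be processed on a different node:

```python
from collections import Counter
from functools import reduce

# Map phase: each input fragment is counted independently,
# so the work can be spread across nodes.
fragments = ["aws emr runs hadoop", "hadoop runs mapreduce", "aws aws"]
mapped = [Counter(fragment.split()) for fragment in fragments]

# Reduce phase: the partial counts are merged into a final result.
totals = reduce(lambda a, b: a + b, mapped)
print(totals["aws"])  # 3
```

Because each map task touches only its own fragment, failed tasks can be re-run on any other node, which is what makes the model resilient on commodity hardware.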
AWS EMR has following cluster states:
- STARTING – The cluster provisions, starts, and configures EC2 instances
- BOOTSTRAPPING – The cluster is executing the bootstrap process
- RUNNING – A step for the cluster is currently being run
- WAITING – The cluster is currently active, but there are no steps to run
- TERMINATING – Shutdown of the cluster has started
- TERMINATED – The cluster was shut down without any errors
- TERMINATED_WITH_ERRORS – The cluster was shut down with errors
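The lifecycle above can be modeled as a simple transition table. The map below is an illustrative simplification we wrote for this article, not an official EMR API; consult the EMR documentation for the exact transitions:

```python
# Simplified EMR cluster state transitions (illustrative only).
TRANSITIONS = {
    "STARTING": {"BOOTSTRAPPING", "TERMINATED_WITH_ERRORS"},
    "BOOTSTRAPPING": {"RUNNING", "TERMINATED_WITH_ERRORS"},
    "RUNNING": {"WAITING", "TERMINATING"},
    "WAITING": {"RUNNING", "TERMINATING"},
    "TERMINATING": {"TERMINATED", "TERMINATED_WITH_ERRORS"},
    "TERMINATED": set(),        # terminal state
    "TERMINATED_WITH_ERRORS": set(),  # terminal state
}

def can_transition(current, target):
    """Return True if a cluster may move from current to target state."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("RUNNING", "WAITING"))     # True
print(can_transition("TERMINATED", "RUNNING"))  # False
```

Note that both TERMINATED and TERMINATED_WITH_ERRORS are terminal: once a cluster is shut down, it cannot return to an active state.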
A Classic Load Balancer is used for simple load balancing of traffic across multiple EC2 instances.
An Application Load Balancer is better suited for a microservices-based or container-based architecture. In these architectures, there is a need both to balance load and to route traffic to multiple services on the same EC2 instance.
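The routing behaviour that makes an Application Load Balancer suitable for microservices can be sketched as ordered path-based rules. The paths and target names below are hypothetical; real ALB listener rules are configured in AWS, not in application code:

```python
# Hypothetical path-based routing rules, checked in order,
# similar in spirit to ALB listener rules.
RULES = [
    ("/api/orders", "orders-service"),
    ("/api/users", "users-service"),
]
DEFAULT_TARGET = "web-service"

def route(path):
    """Pick a target group for a request path; fall back to the default."""
    for prefix, target in RULES:
        if path.startswith(prefix):
            return target
    return DEFAULT_TARGET

print(route("/api/orders/42"))  # orders-service
print(route("/index.html"))     # web-service
```

A Classic Load Balancer, by contrast, forwards every request to the same pool of instances, with no awareness of the request path.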
We can use following steps to scale an Amazon EC2 instance:
- Step 1: Start an EC2 instance that is larger in capacity than the one we are currently using.
- Step 2: Stop the new instance and detach its root volume.
- Step 3: Stop the current live instance and detach its root volume.
- Step 4: Note the unique device ID and attach that root volume to the new server.
- Step 5: Start the new EC2 instance again.
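The volume-swap steps above can be sketched as a small simulation. The dictionaries, instance IDs, and volume IDs are invented for illustration; a real migration would use the EC2 console or API to stop instances and detach/attach volumes:

```python
# Toy model of the root-volume swap described above (all IDs are made up).
small = {"id": "i-old", "state": "running", "root_volume": "vol-123"}
large = {"id": "i-new", "state": "running", "root_volume": "vol-456"}

# Step 2: stop the new (larger) instance and detach its root volume.
large["state"] = "stopped"
large["root_volume"] = None

# Step 3: stop the current live instance and detach its root volume.
small["state"] = "stopped"
root = small.pop("root_volume")

# Step 4: attach the saved root volume to the new server.
large["root_volume"] = root

# Step 5: start the new, larger instance again.
large["state"] = "running"
print(large)  # the larger instance now boots from the original root volume
```

The key point is that the data and operating system live on the root volume, so moving the volume moves the workload onto the larger hardware.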
AWS Data Pipeline is mainly used for data-driven workflows, which are popular in Big Data systems.
AWS Data Pipeline can easily copy data between different data stores, and it can execute data transformations. Little programming knowledge is required to create such data flows.
Amazon Simple Workflow Service (SWF) is mainly used for process automation. It can easily coordinate work across distributed application components.
We can handle media processing, backend flows, analytics pipelines, etc. with SWF, so it is not limited to data-driven flows.
In AWS, every Region is an independent environment. Within a Region there can be multiple Availability Zones.
Every Availability Zone is an isolated area, but low-latency links connect the Availability Zones within a Region.
An endpoint is just an entry point for a web service, written in the form of a URL.
E.g. https://dynamodb.us-east-2.amazonaws.com is an endpoint for Amazon DynamoDB service.
Most AWS services offer an option to select a regional endpoint for incoming requests. But some AWS services, such as IAM, do not support Regions, so their endpoints do not include a region.
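The regional endpoint pattern can be illustrated with a small helper. The URL format follows the DynamoDB example above; treat the helper itself as an assumption, since not every AWS service follows this exact pattern:

```python
def regional_endpoint(service, region):
    """Build a regional AWS endpoint URL (pattern from the DynamoDB example)."""
    return f"https://{service}.{region}.amazonaws.com"

print(regional_endpoint("dynamodb", "us-east-2"))
# https://dynamodb.us-east-2.amazonaws.com
```

For a non-regional service like IAM, there is a single global endpoint instead, so the region segment is absent from the URL.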