We have prepared an excellent course on Amazon Web Services (AWS) interview preparation on Udemy.com.
AWS Interview Preparation Course on Udemy.com
Amazon Web Services (AWS) is one of the fastest-growing fields in the technology world. This course is designed to help you achieve your goals in the AWS field. Engineers with AWS knowledge often command higher salaries than peers with similar qualifications but no AWS experience.
In this course, you will learn how to apply AWS technology in software design and development. I will explain which AWS tools you can use to build a highly scalable, resilient system.
What will I learn in this course?
You will also learn the latest cloud architecture in this course.
Finally, the biggest benefit of this course is that you will be able to negotiate a higher salary in your next job interview.
It is good to learn AWS for its theoretical benefits. But if you do not know how to handle interview questions on AWS, you cannot convert your AWS knowledge into a higher salary.
We cover a wide range of topics in this course. We have questions on Simple Storage Service (S3), Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), CloudFront, DynamoDB, CloudWatch, ElastiCache, and Lambda.
From time to time, we add more topics to this course. Our aim is to keep you updated with the latest AWS interview questions.
What are the requirements?
- Basic Knowledge of popular AWS products
- Familiar with Software design concepts
What am I going to get from this course?
- Confidently handle AWS Technical Interview Questions
- Learn Best practices of AWS products
- Apply for AWS positions in technology
- Gain deep knowledge of AWS design concepts
- Demand higher salary for AWS jobs
What is the target audience?
- Software Engineer
- Software Architect
- Development Manager
- DevOps Engineer
- QA Engineer
- Anyone applying for AWS Jobs
View the AWS Interview Preparation course on Udemy
Amazon Kinesis Streams helps in creating applications that deal with streaming data. Kinesis Streams can ingest data at rates of up to terabytes per hour, from thousands of sources. We can also use Kinesis to produce data for use by other Amazon services. Some of the main use cases for Amazon Kinesis Streams are as follows:
- Real-time Analytics: During real-time events like a Black Friday sale or a major game, we get a large amount of data in a short period of time. Amazon Kinesis Streams can be used to perform real-time analysis on this data and put the results to use very quickly. Prior to Kinesis, this kind of analysis could take days; now we can start using the results within a few minutes.
- Gaming Data: In online games, thousands of players generate a large amount of data. With Kinesis, we can take the streams of data generated by an online game and use them to implement dynamic features based on the actions and behavior of players.
- Log and Event Data: We can use Amazon Kinesis to process the large amount of log data generated by different devices, and build live dashboards, alarms, and triggers based on this streaming data.
- Mobile Applications: In mobile applications, a wide variety of data is available due to the large number of parameters, like the location of the device, the type of device, the time of day, etc. We can use Amazon Kinesis Streams to process the data generated by a mobile app, and the output of that processing can be fed back to the same app to enhance the user experience in real time.
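The way records are routed inside a stream can be sketched locally. The snippet below mimics how Kinesis assigns a record to a shard: the partition key is MD5-hashed into a 128-bit number and matched against a shard's hash-key range. The key names and shard count are illustrative; a real producer would send records with `put_record` through an AWS SDK such as boto3.

```python
import hashlib

def shard_for_key(partition_key: str, num_shards: int) -> int:
    """Map a partition key to a shard index the way Kinesis does:
    MD5-hash the key into a 128-bit integer, then pick the shard
    whose hash-key range contains it (equal-sized ranges here)."""
    key_hash = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // num_shards
    return min(key_hash // range_size, num_shards - 1)

# Records with the same partition key always land on the same shard,
# which preserves per-key ordering within the stream.
events = [("player-42", {"score": 10}), ("player-7", {"score": 3}),
          ("player-42", {"score": 12})]
for key, payload in events:
    print(key, "-> shard", shard_for_key(key, num_shards=4))
```

Because the mapping is deterministic, all events for `player-42` above are guaranteed to arrive on one shard in order.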
Amazon SQS stands for Simple Queue Service, whereas Amazon SNS stands for Simple Notification Service.
SQS is used for implementing message-queue solutions in an application. We can decouple applications in the cloud by using SQS. Since all messages are stored redundantly in SQS, it minimizes the chance of losing any message.
SNS is used for implementing push notifications to a large number of subscribers. With SNS we can deliver messages to Amazon SQS, AWS Lambda, or any HTTP endpoint. Amazon SNS is widely used for sending messages to mobile devices as well; it can even send SMS messages to cell phones.
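The fan-out behavior described above can be modeled locally. This is a minimal sketch of the SNS pattern, not the real service: a topic pushes every published message to all of its subscribers, with one plain list standing in for an SQS queue and another for a Lambda function's invocations.

```python
from typing import Any, Callable, Dict, List

class Topic:
    """A minimal local model of SNS-style fan-out: every message
    published to the topic is pushed to all subscribed endpoints."""
    def __init__(self) -> None:
        self._subscribers: List[Callable[[Dict[str, Any]], None]] = []

    def subscribe(self, endpoint: Callable[[Dict[str, Any]], None]) -> None:
        self._subscribers.append(endpoint)

    def publish(self, message: Dict[str, Any]) -> None:
        for endpoint in self._subscribers:
            endpoint(message)

# Stand-ins for an SQS queue and a Lambda function (illustrative).
queue: List[Dict[str, Any]] = []
invocations: List[Dict[str, Any]] = []

topic = Topic()
topic.subscribe(queue.append)        # "SQS queue" endpoint
topic.subscribe(invocations.append)  # "Lambda" endpoint

topic.publish({"event": "order_placed", "order_id": 123})
```

After the single `publish`, both subscribers have received the same message, which is the decoupling SNS provides: the publisher never needs to know who is listening.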
Amazon DynamoDB supports both document-based as well as key-value NoSQL data models. Due to this, the APIs in DynamoDB are generic enough to serve both types.
Some of the main APIs available in DynamoDB are GetItem, PutItem, UpdateItem, DeleteItem, Query, Scan, BatchGetItem, and BatchWriteItem.
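To illustrate the shape of these APIs, here is a toy in-memory stand-in for a DynamoDB table with a composite (partition key, sort key) primary key. The `pk`/`sk` attribute names and the data are invented for illustration; real code would call these APIs through an AWS SDK.

```python
class ToyTable:
    """A toy in-memory model of a DynamoDB table, showing the shape
    of the GetItem, PutItem, and Query APIs. Not real DynamoDB."""
    def __init__(self):
        self._items = {}  # (partition key, sort key) -> item dict

    def put_item(self, item):
        """PutItem: insert or fully replace the item with this key."""
        self._items[(item["pk"], item["sk"])] = item

    def get_item(self, pk, sk):
        """GetItem: fetch a single item by its full primary key."""
        return self._items.get((pk, sk))

    def query(self, pk):
        """Query: return all items sharing a partition key, in sort-key order."""
        return sorted((item for (p, _), item in self._items.items() if p == pk),
                      key=lambda item: item["sk"])

table = ToyTable()
table.put_item({"pk": "user#1", "sk": "order#2", "total": 30})
table.put_item({"pk": "user#1", "sk": "order#1", "total": 15})
table.put_item({"pk": "user#2", "sk": "order#1", "total": 99})
```

Note how `get_item` needs the full key while `query` retrieves all of one user's orders at once; this key design is what makes both document-style and key-value access patterns work.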
Amazon DynamoDB is used for storing structured data. The data in DynamoDB is also indexed by a primary key for fast access. Reads and writes in DynamoDB have very low latency due to the use of SSDs.
Amazon S3 is mainly used for storing unstructured data as binary large objects. It does not have a fast index like DynamoDB, so we should use Amazon S3 for storing objects with infrequent access requirements.
Another consideration is the size of the data. In DynamoDB, the maximum size of an item is 400 kilobytes, whereas Amazon S3 supports objects as large as 5 terabytes.
Therefore, DynamoDB is more suitable for storing small objects with frequent access, and S3 is ideal for storing very large objects with infrequent access.
Amazon ElastiCache is mainly used for improving the performance of web applications by caching frequently accessed information. The ElastiCache web service provides very fast access to this information by using in-memory caching.
Behind the scenes, ElastiCache supports open-source caching engines like Memcached and Redis.
We do not have to manage separate caching servers with ElastiCache. We can just add critical pieces of data in ElastiCache to provide very low latency access to applications that need this data very frequently.
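The cache-aside pattern this describes can be sketched as follows, with plain dictionaries standing in for the backing database and the ElastiCache node. The keys and data are illustrative; real code would use a Redis or Memcached client.

```python
# Stand-ins for a slow database and an in-memory cache node.
database = {"user:1": {"name": "Alice"}, "user:2": {"name": "Bob"}}
cache = {}  # in production this would be a Redis/Memcached client

def get_user(key):
    """Cache-aside read: try the cache first, fall back to the
    database on a miss and populate the cache for next time."""
    if key in cache:
        return cache[key]          # fast in-memory path
    value = database[key]          # slow path: hit the database
    cache[key] = value             # fill the cache for later reads
    return value

get_user("user:1")  # miss: reads the database, fills the cache
get_user("user:1")  # hit: served from the cache
```

Only the first read of each key touches the database; every later read of that key is served from memory, which is where the latency win comes from.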
The basic Data Model in Amazon DynamoDB consists of following components:
- Table: In DynamoDB, a Table is a collection of data items. It is similar to a table in a relational database. A Table can hold a virtually unlimited number of items, and it must have one primary key.
- Item: An Item in DynamoDB is made up of a primary key or composite key and a variable number of attributes. The number of attributes in an Item is not bounded by a limit, but the total size of an Item can be at most 400 kilobytes.
- Attribute: In DynamoDB, we can associate an Attribute with an Item. We can set a name as well as one or more values in an Attribute. The total size of an Attribute's data is bounded by the 400-kilobyte item limit.
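Since the 400-kilobyte item limit counts attribute names together with attribute values, a rough local size estimate can be made before writing an item. This is an approximation for illustration, not DynamoDB's exact accounting, and the item shown is invented.

```python
import json

DYNAMODB_MAX_ITEM_BYTES = 400 * 1024  # the 400 KB item-size limit

def approx_item_size(item) -> int:
    """Rough estimate of an item's size: UTF-8 bytes of each
    attribute name plus a JSON encoding of its value. DynamoDB's
    real accounting differs slightly per type."""
    return sum(len(name.encode("utf-8")) + len(json.dumps(value).encode("utf-8"))
               for name, value in item.items())

item = {"pk": "user#1", "name": "Alice", "tags": ["aws", "dynamodb"]}
print(approx_item_size(item), "bytes")
```

A check like this can catch oversized items before a write is rejected; anything near the limit is usually better stored in S3 with a pointer kept in DynamoDB.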
In AWS, we can create applications based on AWS Lambda. These applications are composed of functions that are triggered by events. AWS executes these functions in the cloud, and we do not have to provision or buy any instances or servers to run them. An application created on AWS Lambda is called a Serverless application in AWS.
We can use the AWS Serverless Application Model (AWS SAM) to deploy and run a serverless application. AWS SAM is not a server or a piece of software; it is a specification that has to be followed for creating a serverless application.
Once we create our serverless application, we can use CodePipeline to release and deploy it in AWS. CodePipeline is built on the Continuous Integration / Continuous Deployment (CI/CD) concept.
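As a sketch of what following the SAM specification looks like, a minimal template might resemble the following (the function name, handler path, and runtime are illustrative):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # marks this as a SAM template
Resources:
  HelloFunction:                        # illustrative function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler              # illustrative module.function
      Runtime: python3.12
      Events:
        HelloApi:
          Type: Api                     # expose the function over HTTP
          Properties:
            Path: /hello
            Method: get
```

The `Transform` line is what tells CloudFormation to expand the concise SAM resource types into full serverless infrastructure at deploy time.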
AWS Lambda is a service from Amazon for running a specific piece of code in the Amazon cloud without provisioning any servers. So there is no effort involved in server administration.
In AWS Lambda, we are charged only while our code is running. Therefore, it is a very cost-effective solution for running code.
AWS Lambda can automatically scale our application when the number of requests to run the code increases, so we do not have to worry about the scalability of an application that uses AWS Lambda.
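A minimal Lambda handler in Python might look like the following. The event shape mimics what an API Gateway proxy integration would pass; the payload is illustrative, and locally we can invoke the function directly with a sample event.

```python
import json

def handler(event, context):
    """AWS invokes this function once per event; we are billed only
    for the time it actually runs. `context` is unused here."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation with a sample API Gateway-style event.
print(handler({"queryStringParameters": {"name": "AWS"}}, None))
```

Because the handler is just a function, it is easy to unit-test locally before deploying; scaling is then simply AWS running many copies of it concurrently as events arrive.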