slide 1: How to Prepare for AWS-DevOps Certification
AWS DOP-C01 Certification Made Easy with VMExam.com.
slide 2: DOP-C01 AWS-DevOps Certification Details
Exam Code: DOP-C01
Full Exam Name: AWS Certified DevOps Engineer - Professional
No. of Questions: 77
Online Practice Exam: AWS Certified DevOps Engineer - Professional Practice Test
Sample Questions: AWS DOP-C01 Sample Questions
Passing Score: 75
Time Limit: 180 minutes
Exam Fees: 300 USD
Become successful with VMExam.com
slide 3: AWS DOP-C01 Study Guide
• Perform enough practice with the related AWS-DevOps practice tests on VMExam.com.
• Understand the exam topics very well.
• Identify your weak areas from the practice tests and do more practice with VMExam.com.
slide 4: AWS-DevOps Certification Syllabus
Syllabus Topic | Weight
SDLC Automation | 22%
Configuration Management and Infrastructure as Code | 19%
Monitoring and Logging | 15%
Policies and Standards Automation | 10%
Incident and Event Response | 18%
High Availability, Fault Tolerance, and Disaster Recovery | 16%
slide 5: AWS-DevOps Training Details
Training:
DevOps Engineering on AWS
slide 6: AWS DOP-C01 Sample Questions
slide 7: Que.01: As part of your continuous deployment process, your application undergoes an I/O load performance test before it is deployed to production using new AMIs. The application uses one Amazon EBS PIOPS volume per instance and requires consistent I/O performance.
Which of the following must be carried out to ensure that I/O load performance tests yield the correct results in a repeatable manner?
Options:
a) Ensure that the I/O block sizes for the test are randomly selected.
b) Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
c) Ensure that snapshots of the Amazon EBS volumes are created as a backup.
d) Ensure that the Amazon EBS volume is encrypted.
e) Ensure that the Amazon EBS volume has been pre-warmed by creating a snapshot of the volume before the test.
slide 8: Answer
b) Ensure that the Amazon EBS volumes have been pre-warmed by reading all the blocks before the test.
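The pre-warming in answer b (AWS now calls this volume initialization, and it mainly applies to volumes restored from snapshots) means reading every block once before benchmarking. A minimal sketch, assuming the volume is attached as /dev/xvdf (a placeholder device name):

```shell
# Initialize (pre-warm) an EBS volume by reading every block before the test.
# Adjust /dev/xvdf to the actual device name on your instance.

# Option 1: dd - a simple sequential read of the whole device.
sudo dd if=/dev/xvdf of=/dev/null bs=1M status=progress

# Option 2: fio - parallel direct reads, usually faster on large volumes.
sudo fio --filename=/dev/xvdf --rw=read --bs=1M --iodepth=32 \
         --ioengine=libaio --direct=1 --name=volume-initialize
```

Either command touches all blocks, so subsequent I/O tests measure steady-state volume performance rather than first-read latency.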
slide 9: Que.02: Your team is responsible for an AWS Elastic Beanstalk application. The business requires that you move to a continuous deployment model, releasing updates to the application multiple times per day with zero downtime.
What should you do to enable this and still be able to roll back almost immediately in an emergency to the previous version?
Options:
a) Enable rolling updates in the Elastic Beanstalk environment, setting an appropriate pause time for application startup.
b) Develop the application to poll for a new application version in your code repository; download and install it to each running Elastic Beanstalk instance.
c) Create a second Elastic Beanstalk environment running the new application version, and swap the environment CNAMEs.
d) Create a second Elastic Beanstalk environment with the new application version, and configure the old environment to redirect clients to the new environment using the HTTP 301 response code.
slide 10: Answer
c) Create a second Elastic Beanstalk environment running the new application version and swap the environment CNAMEs.
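The CNAME swap in answer c is Elastic Beanstalk's blue/green deployment pattern, and it maps to a single CLI call. A sketch, with the environment names as placeholders:

```shell
# Blue/green deployment with Elastic Beanstalk: deploy the new version to a
# second environment, then swap the environment URLs (CNAMEs) atomically.
# "my-app-blue" / "my-app-green" are placeholder environment names.
aws elasticbeanstalk swap-environment-cnames \
    --source-environment-name my-app-blue \
    --destination-environment-name my-app-green
```

Rolling back in an emergency is just running the same command again, which is why this option satisfies the "almost immediately" requirement.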
slide 11: Que.03: Your application has a single Amazon EC2 instance that processes orders with a third-party supplier. Orders are retrieved from an Amazon SQS queue and processed in batches every five minutes.
There is a business requirement that delays in processing should be no more than one hour. Approximately three times a week, the application fails and orders stop being processed, requiring a manual restart.
Which steps should you take to make this more resilient in a cost-effective way? (Choose two.)
Options:
a) Create a second "watchdog" instance configured to monitor the processing instance and restart it if a failure is detected.
b) Create an Auto Scaling launch configuration to launch instances configured to perform processing. Create an Auto Scaling group to use the launch configuration with a minimum and maximum of one.
c) Create an Auto Scaling launch configuration to launch instances configured to perform processing. Create an Auto Scaling group to use the launch configuration with a minimum of two and a maximum of ten, and to scale based on the size of the Amazon SQS queue.
d) Create a load balancer and register your instance with Elastic Load Balancing. Set the Elastic Load Balancing health check to call an HTTP endpoint in your application that executes the processing.
e) Modify the processing application to send a custom CloudWatch metric with a dimension of InstanceId. Create a CloudWatch alarm configured, when the metric is in an INSUFFICIENT_DATA state for 10 minutes, to take an Amazon EC2 action to terminate the instance.
slide 12: Answer
b) Create an Auto Scaling launch configuration to launch instances configured to perform processing. Create an Auto Scaling group to use the launch configuration with a minimum and maximum of one.
e) Modify the processing application to send a custom CloudWatch metric with a dimension of InstanceId. Create a CloudWatch alarm configured, when the metric is in an INSUFFICIENT_DATA state for 10 minutes, to take an Amazon EC2 action to terminate the instance.
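Answers b and e combine into a self-healing worker: a min/max-of-one Auto Scaling group replaces the instance whenever it is terminated, and a heartbeat alarm terminates it when the custom metric stops reporting. A sketch of the CLI calls involved; all names, the AMI ID, subnet ID, and instance ID are placeholders, and the Custom/OrderProcessor namespace and Heartbeat metric are assumed names:

```shell
# Self-healing worker: an Auto Scaling group of exactly one instance.
aws autoscaling create-launch-configuration \
    --launch-configuration-name order-processor-lc \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.small

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name order-processor-asg \
    --launch-configuration-name order-processor-lc \
    --min-size 1 --max-size 1 \
    --vpc-zone-identifier subnet-0123456789abcdef0

# Heartbeat alarm: if the application stops publishing its custom metric,
# the alarm's INSUFFICIENT_DATA action terminates the instance after
# 10 minutes (2 x 5-minute periods), and the ASG launches a replacement.
aws cloudwatch put-metric-alarm \
    --alarm-name order-processor-heartbeat \
    --namespace Custom/OrderProcessor --metric-name Heartbeat \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Sum --period 300 --evaluation-periods 2 --threshold 1 \
    --comparison-operator LessThanThreshold \
    --insufficient-data-actions arn:aws:automate:us-east-1:ec2:terminate
```

The `arn:aws:automate:<region>:ec2:terminate` action is the built-in EC2 terminate action for CloudWatch alarms; the application itself must publish the Heartbeat metric (e.g. once per batch) for this to work.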
slide 13: Que.04: After reviewing the last quarter's monthly bills, management has noticed an increase in the overall bill from Amazon. After researching this increase in cost, you discovered that one of your new services is making a lot of GET Bucket API calls to Amazon S3 to build a metadata cache of all objects in the application's bucket.
Your boss has asked you to come up with a new cost-effective way to help reduce the number of these GET Bucket API calls. What process should you use to help mitigate the cost?
Options:
a) Update your Amazon S3 buckets' lifecycle policies to automatically push a list of objects to a new bucket, and use this list to view objects associated with the application's bucket.
b) Create a new DynamoDB table. Use the new DynamoDB table to store all metadata about all objects uploaded to Amazon S3. Any time a new object is uploaded, update the application's internal Amazon S3 object metadata cache from DynamoDB.
c) Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
d) Upload all images to Amazon SQS, set up SQS lifecycles to move all images to Amazon S3, and initiate an Amazon SNS notification to your application to update the application's internal Amazon S3 object metadata cache.
e) Upload all images to an ElastiCache file cache server. Update your application to read all file metadata from the ElastiCache server, and configure the ElastiCache policies to push all files to Amazon S3 for long-term storage.
slide 14: Answer
c) Using Amazon SNS, create a notification on any new Amazon S3 objects that automatically updates a new DynamoDB table to store all metadata about the new object. Subscribe the application to the Amazon SNS topic to update its internal Amazon S3 object metadata cache from the DynamoDB table.
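The event-driven approach in answer c starts with an S3 event notification that publishes to an SNS topic on every object creation. A sketch of that step; the bucket name, account ID, and topic ARN are placeholders, and the SNS topic's policy must already allow S3 to publish to it:

```shell
# Publish an SNS message for every new object in the bucket, so subscribers
# (the metadata-cache updater) can write the object's metadata to DynamoDB
# instead of the application listing the bucket repeatedly.
aws s3api put-bucket-notification-configuration \
    --bucket my-app-bucket \
    --notification-configuration '{
      "TopicConfigurations": [{
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:new-object-topic",
        "Events": ["s3:ObjectCreated:*"]
      }]
    }'
```

This replaces the expensive polling pattern (repeated GET Bucket / ListObjects calls) with push notifications, which is where the cost saving comes from.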
slide 15: Que.05: Your current log analysis application takes more than four hours to generate a report of the top 10 users of your web application.
You have been asked to implement a system that can report this information in real time, ensure that the report is always up to date, and handle increases in the number of requests to your web application.
Choose the option that is cost-effective and can fulfill the requirements.
Options:
a) Publish your data to CloudWatch Logs and configure your application to autoscale to handle the load on demand.
b) Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
c) Configure an Auto Scaling group to increase the size of your Amazon EMR cluster.
d) Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
e) Create a Multi-AZ Amazon RDS MySQL cluster, post the logging data to MySQL, and run a MapReduce job to retrieve the required information on user counts.
slide 16: Answer
d) Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
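The producer side of answer d is simply writing each log event as a Kinesis record; consumers (the log-processing application) read from the stream and keep the top-10 report current. A sketch, with the stream name and payload as placeholder examples (the `--cli-binary-format` flag assumes AWS CLI v2):

```shell
# Write one log event to a Kinesis data stream. The partition key controls
# shard distribution; records for the same key stay ordered within a shard.
aws kinesis put-record \
    --stream-name web-app-logs \
    --partition-key user-42 \
    --cli-binary-format raw-in-base64-out \
    --data '{"user":"user-42","path":"/checkout","ts":1700000000}'
```

Kinesis scales by adding shards as request volume grows, which is what lets this design satisfy both the real-time and the load-growth requirements.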
slide 17: AWS-DevOps Certification Guide
• AWS certification is becoming increasingly important for professional career growth.
• Try our AWS-DevOps mock test.
slide 18: More Info on AWS Certification
Visit www.vmexam.com