We provide real DOP-C01 exam questions and answers braindumps in two formats: downloadable PDF and practice tests. Pass the Amazon-Web-Services DOP-C01 exam quickly and easily. The DOP-C01 PDF format is suitable for reading and printing, so you can print it and practice as many times as you like. With the help of our Amazon-Web-Services DOP-C01 PDF and VCE dumps and material, you can easily pass the DOP-C01 exam.
Free demo questions for Amazon-Web-Services DOP-C01 Exam Dumps Below:
NEW QUESTION 1
Your development team is using access keys to develop an application that has access to S3 and DynamoDB. A new security policy has outlined that the credentials should not be older than 2 months, and should be rotated. How can you achieve this?
- A. Use the application to rotate the keys every 2 months via the SDK
- B. Use a script which will query the date the keys were created. If older than 2 months, delete them and recreate new keys
- C. Delete the user associated with the keys after every 2 months. Then recreate the user again.
- D. Delete the IAM Role associated with the keys after every 2 months. Then recreate the IAM Role again.
Answer: B
Explanation:
One can use the CLI command list-access-keys to get the access keys. This command also returns the "CreateDate" of the keys. If the CreateDate is older than 2 months, then the keys can be deleted and recreated.
The list-access-keys CLI command returns information about the access key IDs associated with the specified IAM user. If there are none, the action returns an empty list.
For more information on the CLI command, please refer to the below link: http://docs.aws.amazon.com/cli/latest/reference/iam/list-access-keys.html
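For illustration, here is a minimal boto3 sketch of the approach in option B. The user name, the 60-day cutoff, and the delete-and-recreate step are assumptions for the example, not part of the exam answer.

```python
import boto3
from datetime import datetime, timedelta, timezone

iam = boto3.client("iam")
USER = "dev-app-user"  # placeholder user name
cutoff = datetime.now(timezone.utc) - timedelta(days=60)  # roughly 2 months

for key in iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]:
    if key["CreateDate"] < cutoff:
        # Key is older than the policy allows: delete it and create a fresh one.
        iam.delete_access_key(UserName=USER, AccessKeyId=key["AccessKeyId"])
        new_key = iam.create_access_key(UserName=USER)["AccessKey"]
        print("Rotated", key["AccessKeyId"], "->", new_key["AccessKeyId"])
```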
NEW QUESTION 2
You need to deploy an AWS stack in a repeatable manner across multiple environments. You have selected CloudFormation as the right tool to accomplish this, but have found that a resource type you need to create and model is not supported by CloudFormation. How should you overcome this challenge?
- A. Use a CloudFormation Custom Resource Template by selecting an API call to proxy for create, update, and delete actions. CloudFormation will use the AWS SDK, CLI, or API method of your choosing as the state transition function for the resource type you are modeling.
- B. Submit a ticket to the AWS Forums. AWS extends CloudFormation Resource Types by releasing tooling to the AWS Labs organization on GitHub. Their response time is usually 1 day, and they complete requests within a week or two.
- C. Instead of depending on CloudFormation, use Chef, Puppet, or Ansible to author Heat templates, which are declarative stack resource definitions that operate over the OpenStack hypervisor and cloud environment.
- D. Create a CloudFormation Custom Resource Type by implementing create, update, and delete functionality, either by subscribing a Custom Resource Provider to an SNS topic, or by implementing the logic in AWS Lambda.
Answer: D
Explanation:
Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update (if you changed the custom resource), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types. You can include those resources by using custom resources. That way you can still manage all your related resources in a single stack.
Use the AWS::CloudFormation::CustomResource or Custom::String resource type to define custom resources in your templates. Custom resources require one property: the service token, which specifies where AWS CloudFormation sends requests to, such as an Amazon SNS topic.
For more information on Custom Resources in CloudFormation, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
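As a rough illustration of the Lambda-backed variant described in option D, the handler below follows the custom resource request/response protocol (PUT a JSON document to the pre-signed ResponseURL); the actual resource logic is a placeholder.

```python
import json
import urllib.request

def handler(event, context):
    """Minimal CloudFormation custom resource handler (sketch)."""
    # Placeholder for the real create/update/delete logic for the
    # unsupported resource type; event["RequestType"] tells you which.
    physical_id = event.get("PhysicalResourceId", "my-custom-resource")

    body = json.dumps({
        "Status": "SUCCESS",                 # or "FAILED"
        "Reason": "See CloudWatch Logs",
        "PhysicalResourceId": physical_id,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": {"ExampleOutput": "value"},  # readable via Fn::GetAtt
    }).encode("utf-8")

    # CloudFormation waits for this signed-URL callback before proceeding.
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    req.add_header("Content-Type", "")
    urllib.request.urlopen(req)
```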
NEW QUESTION 3
You need to run a very large batch data processing job one time per day. The source data exists entirely in S3, and the output of the processing job should also be written to S3 when finished. If you need to version control this processing job and all setup and teardown logic for the system, what approach should you use?
- A. Model an AWS EMR job in AWS Elastic Beanstalk.
- B. Model an AWS EMR job in AWS CloudFormation.
- C. Model an AWS EMR job in AWS OpsWorks.
- D. Model an AWS EMR job in AWS CLI Composer.
Answer: B
Explanation:
With AWS CloudFormation, you can update the properties for resources in your existing stacks. These changes can range from simple configuration changes, such as updating the alarm threshold on a CloudWatch alarm, to more complex changes, such as updating the Amazon Machine Image (AMI) running on an Amazon EC2 instance. Many of the AWS resources in a template can be updated, and we continue to add support for more.
For more information on CloudFormation version control, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/updating.stacks.walkthrough.html
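A hedged sketch of how a version-controlled template could be pushed as a stack update with boto3; the stack name and template file are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation")
STACK = "daily-emr-batch-job"  # placeholder stack name

# The template lives in version control; applying a new revision is a stack update.
with open("emr-job.template.yaml") as f:
    cfn.update_stack(
        StackName=STACK,
        TemplateBody=f.read(),
        Capabilities=["CAPABILITY_IAM"],
    )

cfn.get_waiter("stack_update_complete").wait(StackName=STACK)
```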
NEW QUESTION 4
Which of the following are Lifecycle events available in OpsWorks? Choose 3 answers from the options below
- A. Setup
- B. Decommission
- C. Deploy
- D. Shutdown
Answer: ACD
Explanation:
OpsWorks runs a set of lifecycle events on each instance in a layer: Setup, Configure, Deploy, Undeploy, and Shutdown. Of the options listed, Setup, Deploy, and Shutdown are valid lifecycle events.
For more information on Lifecycle events, please refer to the below URL:
• http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-events.html
NEW QUESTION 5
You have an Auto Scaling group with an Elastic Load Balancer. You decide to suspend the Auto Scaling AddToLoadBalancer for a short period of time. What will happen to the instances launched during the suspension period?
- A. The instances will be registered with the ELB once the process has resumed
- B. Auto Scaling will not launch the instances during this period because of the suspension
- C. The instances will not be registered with the ELB. You must manually register them when the process is resumed
- D. It is not possible to suspend the AddToLoadBalancer process
Answer: C
Explanation:
If you suspend AddToLoadBalancer, Auto Scaling launches the instances but does not add them to the load balancer or target group. If you resume the AddToLoadBalancer process, Auto Scaling resumes adding instances to the load balancer or target group when they are launched. However, Auto Scaling does not add the instances that were launched while this process was suspended. You must register those instances manually.
For more information on the Suspension and Resumption process, please visit the below URL: http://docs.aws.amazon.com/autoscaling/latest/userguide/as-suspend-resume-processes.html
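A minimal boto3 sketch of suspending and resuming AddToLoadBalancer, and of the manual registration required for instances launched during the suspension; the group, load balancer, and instance IDs are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")
elb = boto3.client("elb")  # Classic Load Balancer API

# Stop Auto Scaling from registering newly launched instances with the ELB.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["AddToLoadBalancer"],
)

# ... maintenance window ...

autoscaling.resume_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["AddToLoadBalancer"],
)

# Instances launched while the process was suspended must be registered manually.
elb.register_instances_with_load_balancer(
    LoadBalancerName="web-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```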
NEW QUESTION 6
Your finance supervisor has set a budget of 2000 USD for the resources in AWS. Which of the following is the simplest way to ensure that you know when this threshold is being reached?
- A. Use CloudWatch Events to notify you when you reach the threshold value
- B. Use a CloudWatch billing alarm to notify you when you reach the threshold value
- C. Use CloudWatch Logs to notify you when you reach the threshold value
- D. Use SQS queues to notify you when you reach the threshold value
Answer: B
Explanation:
The AWS documentation mentions:
You can monitor your AWS costs by using CloudWatch. With CloudWatch, you can create billing alerts that notify you when your usage of your services exceeds thresholds that you define. You specify these threshold amounts when you create the billing alerts. When your usage exceeds these amounts, AWS sends you an email notification. You can also sign up to receive notifications when AWS prices change.
For more information on billing alarms, please refer to the below URL:
• http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/monitor-charges.html
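A sketch of creating such a billing alarm with boto3 (billing metrics are only published in us-east-1). The SNS topic ARN is a placeholder, and billing alerts must already be enabled on the account.

```python
import boto3

# Billing metrics live only in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-2000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,            # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=2000.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
)
```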
NEW QUESTION 7
You have the following application to be set up in AWS:
1) A web tier hosted on EC2 Instances
2) Session data to be written to DynamoDB
3) Log files to be written to Microsoft SQL Server
How can you allow an application to write data to a DynamoDB table?
- A. Add an IAM user to a running EC2 instance.
- B. Add an IAM user that allows write access to the DynamoDB table.
- C. Create an IAM role that allows read access to the DynamoDB table.
- D. Create an IAM role that allows write access to the DynamoDB table.
Answer: D
Explanation:
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
For more information on IAM Roles please refer to the below link:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
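To show the effect of option D, the snippet below writes a session item without any hard-coded credentials; on an instance launched with a write-capable IAM role, boto3 picks up the role's temporary credentials automatically. The table name and item shape are assumptions.

```python
import boto3

# No access keys anywhere: on EC2, boto3 resolves credentials from the
# instance profile (IAM role) attached to the instance.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
sessions = dynamodb.Table("web-sessions")  # placeholder table name

sessions.put_item(Item={
    "session_id": "abc123",
    "user": "jdoe",
    "expires_at": 1700000000,
})
```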
NEW QUESTION 8
You have a web application composed of an Auto Scaling group of web servers behind a load balancer, and create a new AMI for each application version for deployment. You have a new version to release, and you want to use the Blue-Green deployment technique to migrate users over in a controlled manner while the size of the fleet remains constant over a period of 6 hours, to ensure that the new version is performing well. What option should you choose to enable this technique while being able to roll back easily? Choose 2 answers from the options given below. Each answer presents part of the solution
- A. Create an Auto Scaling launch configuration with the new AMI; configure the Auto Scaling group to use the new launch configuration and to register instances with a new load balancer
- B. Create an Auto Scaling launch configuration with the new AMI; configure the Auto Scaling group to use the new launch configuration and to register instances with the existing load balancer
- C. Use Amazon Route53 weighted Round Robin to vary the proportion of requests sent to the load balancers
- D. Configure Elastic Load Balancing to vary the proportion of requests sent to instances running the two application versions.
Answer: AC
Explanation:
The AWS documentation gives this example of a Blue Green deployment
You can shift traffic all at once or you can do a weighted distribution. With Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. A weighted distribution provides the ability to perform canary analysis where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing, for example.
For more information on Blue Green deployments, please refer to the below link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
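A sketch of the Route 53 weighted record sets referred to in option C, shifting a small share of traffic to the green load balancer; the hosted zone, record names, and ELB DNS names are placeholders.

```python
import boto3

route53 = boto3.client("route53")

def set_weights(blue_weight, green_weight):
    """Upsert two weighted CNAMEs pointing at the blue and green ELBs."""
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890EXAMPLE",  # placeholder
        ChangeBatch={
            "Comment": "blue/green traffic shift",
            "Changes": [
                {"Action": "UPSERT", "ResourceRecordSet": {
                    "Name": "app.example.com.", "Type": "CNAME",
                    "SetIdentifier": "blue", "Weight": blue_weight, "TTL": 60,
                    "ResourceRecords": [{"Value": "blue-elb.us-east-1.elb.amazonaws.com"}]}},
                {"Action": "UPSERT", "ResourceRecordSet": {
                    "Name": "app.example.com.", "Type": "CNAME",
                    "SetIdentifier": "green", "Weight": green_weight, "TTL": 60,
                    "ResourceRecords": [{"Value": "green-elb.us-east-1.elb.amazonaws.com"}]}},
            ],
        },
    )

set_weights(90, 10)   # canary: 10% of requests go to the new version
# ... monitor, then gradually move to set_weights(0, 100), or revert to roll back.
```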
NEW QUESTION 9
You have a complex system that involves networking, IAM policies, and multiple three-tier applications. You are still receiving requirements for the new system, so you don't yet know how many AWS components will be present in the final design. You want to start using AWS CloudFormation to define these AWS resources so that you can automate and version-control your infrastructure. How would you use AWS CloudFormation to provide agile new environments for your customers in a cost-effective, reliable manner?
- A. Manually create one template to encompass all the resources that you need for the system, so you only have a single template to version-control.
- B. Create multiple separate templates for each logical part of the system, create nested stacks in AWS CloudFormation, and maintain several templates to version-control.
- C. Create multiple separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon Elastic Compute Cloud (EC2) instance running the SDK for finer granularity of control.
- D. Manually construct the networking layer using Amazon Virtual Private Cloud (VPC) because this does not change often, and then use AWS CloudFormation to define all other ephemeral resources.
Answer: B
Explanation:
As your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. Nested stacks are stacks that create other stacks. To create nested stacks, use the AWS::CloudFormation::Stack resource in your template to reference other templates.
For more information on CloudFormation best practices please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/best-practices.html
NEW QUESTION 10
You are currently using Elastic Beanstalk to host your production environment. You need to roll out updates to your application hosted on this environment. This is a critical application, which is why there is a requirement that any rollback, if required, should be carried out with the least amount of downtime. Which of the following deployment strategies would ideally help achieve this purpose?
- A. Create a CloudFormation template with the same resources as those in the Elastic Beanstalk environment. If the deployment fails, deploy the CloudFormation template.
- B. Use Rolling updates in Elastic Beanstalk so that if the deployment fails, the rolling updates feature would roll back to the last deployment.
- C. Create another parallel environment in Elastic Beanstalk. Use the Swap URL feature.
- D. Create another parallel environment in Elastic Beanstalk. Create a new Route53 domain name for the new environment and release that URL to the users.
Answer: C
Explanation:
Since the requirement is to have the least amount of downtime, the ideal way is to create a blue-green deployment environment and then use the Swap URL feature to swap environments for the new deployment, and then swap back in case the deployment fails.
The AWS Documentation mentions the following on the Swap URL feature of Elastic Beanstalk:
Because Elastic Beanstalk performs an in-place update when you update your application versions, your application may become unavailable to users for a short period of time. It is possible to avoid this downtime by performing a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly.
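The swap itself is a single API call; a boto3 sketch, assuming the blue and green environment names shown here.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# After the new version has been deployed to "myapp-green" and verified,
# exchange the environments' CNAMEs so traffic moves to the new version.
eb.swap_environment_cnames(
    SourceEnvironmentName="myapp-blue",      # placeholder names
    DestinationEnvironmentName="myapp-green",
)
# Rolling back is simply another swap in the opposite direction.
```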
NEW QUESTION 11
You have a legacy application running that uses an m4.large instance size and cannot scale with Auto Scaling, but only has peak performance 5% of the time. This is a huge waste of resources and money so your Senior Technical Manager has set you the task of trying to reduce costs while still keeping the legacy application running as it should. Which of the following would best accomplish the task your manager has set you? Choose the correct answer from the options below
- A. Use a T2 burstable performance instance.
- B. Use a C4.large instance with enhanced networking.
- C. Use two t2.nano instances that have Single Root I/O Virtualization.
- D. Use t2.nano instance and add spot instances when they are required.
Answer: A
Explanation:
The AWS documentation clearly indicates using T2 EC2 instance types for those instances which don't use CPU that often.
T2
T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline.
T2 Unlimited instances can sustain high CPU performance for as long as a workload needs it. For most general-purpose workloads, T2 Unlimited instances will provide ample performance without any additional charges. If the instance needs to run at higher CPU utilization for a prolonged period, it can also do so at a flat additional charge of 5 cents per vCPU-hour.
The baseline performance and ability to burst are governed by CPU Credits. T2 instances receive CPU Credits continuously at a set rate depending on the instance size, accumulating CPU Credits when they are idle, and consuming CPU credits when they are active. T2 instances are a good choice for a variety of general-purpose workloads including micro-services, low-latency interactive applications, small and medium databases, virtual desktops, development, build and stage environments, code repositories, and product prototypes. For more information see Burstable Performance Instances.
For more information on EC2 instance types please see the below link: https://aws.amazon.com/ec2/instance-types/
NEW QUESTION 12
You are building a mobile app for consumers to post cat pictures online. You will be storing the images in AWS S3. You want to run the system very cheaply and simply. Which one of these options allows you to build a photo sharing application with the right authentication/authorization implementation?
- A. Build the application out using AWS Cognito and web identity federation to allow users to log in using Facebook or Google Accounts. Once they are logged in, the secret token passed to that user is used to directly access resources on AWS, like AWS S3.
- B. Use JWT or SAML compliant systems to build authorization policies. Users log in with a username and password, and are given a token they can use indefinitely to make calls against the photo infrastructure.
- C. Use AWS API Gateway with a constantly rotating API Key to allow access from the client-side. Construct a custom build of the SDK and include S3 access in it.
- D. Create an AWS oAuth Service Domain and grant public signup and access to the domain. During setup, add at least one major social media site as a trusted Identity Provider for users.
Answer: A
Explanation:
Amazon Cognito lets you easily add user sign-up and sign-in and manage permissions for your mobile and web apps. You can create your own user directory within Amazon Cognito. You can also choose to authenticate users through social identity providers such as Facebook, Twitter, or Amazon; with SAML identity solutions; or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users' devices, allowing your applications to work even when the devices are offline. You can then synchronize data across users' devices so that their app experience remains consistent regardless of the device they use.
For more information on AWS Cognito, please visit the below URL:
• http://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
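A rough sketch of the flow in option A with boto3: exchange a social provider token for temporary AWS credentials through a Cognito identity pool, then call S3 with those credentials. The identity pool ID, bucket, and token are placeholders.

```python
import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")
facebook_token = "<token returned by the Facebook login SDK>"  # placeholder

identity = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",  # placeholder
    Logins={"graph.facebook.com": facebook_token},
)
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins={"graph.facebook.com": facebook_token},
)["Credentials"]

# Temporary, scoped credentials - no long-lived keys shipped inside the app.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
s3.upload_file("cat.jpg", "cat-photos-bucket", "uploads/cat.jpg")  # placeholder bucket
```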
NEW QUESTION 13
Which of the following environment types are available in Elastic Beanstalk? Choose 2 answers from the options given below
- A. Single Instance
- B. Multi-Instance
- C. Load Balancing Autoscaling
- D. SQS, Autoscaling
Answer: AC
Explanation:
The AWS Documentation mentions
In Elastic Beanstalk, you can create a load-balancing, autoscaling environment or a single-instance environment. The type of environment that you require depends
on the application that you deploy.
When you go to the Configuration page for your environment, you will be able to see the environment type there.
NEW QUESTION 14
One of your instances is reporting an unhealthy system status check. However, this is not something you should have to monitor and repair on your own. How might you automate the repair of the system status check failure in an AWS environment? Choose the correct answer from the options given below
- A. Create CloudWatch alarms for StatusCheckFailed_System metrics and select the EC2 action to recover the instance
- B. Write a script that queries the EC2 API for each instance status check
- C. Write a script that periodically shuts down and starts instances based on certain stats.
- D. Implement a third party monitoring tool.
Answer: A
Explanation:
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the stop or terminate actions to help you save money when you no longer need an instance to be running. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
For more information on using alarm actions, please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
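A sketch of option A with boto3: an alarm on StatusCheckFailed_System whose action is the EC2 recover action; the instance ID and the region in the ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="auto-recover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    # The built-in EC2 recover action moves the instance onto healthy hardware.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```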
NEW QUESTION 15
You need to perform ad-hoc business analytics queries on well-structured data. Data comes in
constantly at a high velocity. Your business intelligence team can understand SQL.
What AWS service(s) should you look to first?
- A. Kinesis Firehose + RDS
- B. Kinesis Firehose+RedShift
- C. EMR using Hive
- D. EMR running Apache Spark
Answer: B
Explanation:
Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon Kinesis Analytics, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service, enabling near real-time analytics with existing business intelligence tools and dashboards you're already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.
For more information on Kinesis firehose, please visit the below URL:
• https://aws.amazon.com/kinesis/firehose/
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. For more information on Redshift, please visit the below URL:
http://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
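A sketch of the producer side of option B: records are pushed to a Firehose delivery stream that has been configured (separately) to load into Redshift; the stream name and record shape are placeholders.

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

record = {"user_id": 42, "event": "page_view", "ts": "2018-01-01T00:00:00Z"}

# Firehose buffers, optionally transforms, and COPYs the data into Redshift.
firehose.put_record(
    DeliveryStreamName="clickstream-to-redshift",  # placeholder stream name
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
```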
NEW QUESTION 16
You have an I/O and network-intensive application running on multiple Amazon EC2 instances that cannot handle a large ongoing increase in traffic. The Amazon EC2 instances are using two Amazon EBS PIOPS volumes each, and each instance is identical.
Which of the following approaches should be taken in order to reduce load on the instances with the least disruption to the application?
- A. Create an AMI from each instance, and set up Auto Scaling groups with a larger instance type that has enhanced networking enabled and is Amazon EBS-optimized.
- B. Stop each instance and change each instance to a larger Amazon EC2 instance type that has enhanced networking enabled and is Amazon EBS-optimized. Ensure that RAID striping is also set up on each instance.
- C. Add an instance-store volume for each running Amazon EC2 instance and implement RAID striping to improve I/O performance.
- D. Add an Amazon EBS volume for each running Amazon EC2 instance and implement RAID striping to improve I/O performance.
- E. Create an AMI from an instance, and set up an Auto Scaling group with an instance type that has enhanced networking enabled and is Amazon EBS-optimized.
Answer: E
Explanation:
The AWS Documentation mentions the following on AMIs:
An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.
For more information on AMIs, please visit the link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
NEW QUESTION 17
Your company has a set of EC2 Instances that access data objects stored in an S3 bucket. Your IT Security department is concerned about the security of this architecture and wants you to implement the following:
1) Ensure that the EC2 Instances securely access the data objects stored in the S3 bucket
2) Ensure that the integrity of the objects stored in S3 is maintained.
Which of the following would help fulfill the requirements of the IT Security department? Choose 2 answers from the options given below
- A. Create an IAM user and ensure the EC2 Instances use the IAM user credentials to access the data in the bucket.
- B. Create an IAM Role and ensure the EC2 Instances use the IAM Role to access the data in the bucket.
- C. Use S3 Cross Region replication to replicate the objects so that the integrity of data is maintained.
- D. Use an S3 bucket policy that ensures that MFA Delete is set on the objects in the bucket
Answer: BD
Explanation:
The AWS Documentation mentions the following:
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles.
For more information on IAM Roles, please refer to the below link:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
MFA Delete can be used to add another layer of security to S3 Objects to prevent accidental deletion of objects. For more information on MFA Delete, please refer to the below link:
• https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
NEW QUESTION 18
You are working for a company that has an on-premises infrastructure. There is now a decision to move to AWS. The plan is to move the development environment first. There are a lot of custom-built applications that need to be deployed for the development community. Which of the following can help to implement the applications for the development team?
Choose 2 answers from the options below.
- A. Create Docker containers for the custom application components.
- B. Use OpsWorks to deploy the Docker containers.
- C. Use Elastic Beanstalk to deploy the Docker containers.
- D. Use CloudFormation to deploy the Docker containers.
Answer: AC
Explanation:
The AWS documentation states the following for Docker containers on Elastic Beanstalk:
Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren't supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run.
For more information on docker containers and Elastic beanstalk, please visit the below URL http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html
NEW QUESTION 19
You are planning on using encrypted snapshots in the design of your AWS Infrastructure. Which of the following statements is true with regards to EBS Encryption?
- A. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
- B. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot creates an encrypted volume when specified / requested.
- C. Snapshotting an encrypted volume makes an encrypted snapshot; restoring an encrypted snapshot always creates an encrypted volume.
- D. Snapshotting an encrypted volume makes an encrypted snapshot when specified / requested; restoring an encrypted snapshot always creates an encrypted volume.
Answer: C
Explanation:
Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the need for you to build, maintain, and secure your own key management infrastructure. When you create an encrypted EBS volume and attach it to a supported instance type, the following types of data are encrypted:
• Data at rest inside the volume
• All data moving between the volume and the instance
• All snapshots created from the volume
Snapshots that are taken from encrypted volumes are automatically encrypted. Volumes that are created from encrypted snapshots are also automatically encrypted.
For more information on EBS encryption, please visit the below URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
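A small boto3 sketch that demonstrates the behaviour in answer C: snapshotting an encrypted volume and restoring from that snapshot both yield encrypted resources without any extra flags; the volume ID and Availability Zone are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Snapshot an (already encrypted) volume - the snapshot is encrypted automatically.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",  # placeholder
                           Description="nightly backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Restore from the encrypted snapshot - the new volume is always encrypted.
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                        AvailabilityZone="us-east-1a")
print(snap["Encrypted"], vol["Encrypted"])  # both True
```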
NEW QUESTION 20
You have an ELB setup in AWS with EC2 instances running behind it. You have been requested to monitor the incoming connections to the ELB. Which of the below options can suffice this requirement?
- A. Use AWS CloudTrail with your load balancer
- B. Enable access logs on the load balancer
- C. Use a CloudWatch Logs Agent
- D. Create a custom metric CloudWatch filter on your load balancer
Answer: B
Explanation:
Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues.
Option A is invalid because this service will monitor all AWS services. Options C and D are invalid since ELB already provides a logging feature.
For more information on ELB access logs, please refer to the below document link from AWS: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
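Enabling access logs on a Classic Load Balancer is a single attribute change; a boto3 sketch with placeholder load balancer and bucket names (the bucket policy must separately allow ELB to write to it).

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

elb.modify_load_balancer_attributes(
    LoadBalancerName="web-elb",              # placeholder
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",  # placeholder bucket
            "S3BucketPrefix": "prod/web-elb",
            "EmitInterval": 5,               # minutes; 5 or 60 are allowed
        }
    },
)
```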
NEW QUESTION 21
You have decided that you need to change the instance type of your production instances which are running as part of an Auto Scaling group. The entire architecture is deployed using a CloudFormation Template. You currently have 4 instances in Production. You cannot have any interruption in service and need to ensure 2 instances are always running during the update. Which of the options listed below can be used for this?
- A. AutoScalingRollingUpdate
- B. AutoScalingScheduledAction
- C. AutoScalingReplacingUpdate
- D. AutoScalinglntegrationUpdate
Answer: A
Explanation:
The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified. For more information on Auto Scaling updates, please refer to the below link: https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
NEW QUESTION 22
Your company is concerned with EBS volume backup on Amazon EC2 and wants to ensure they have proper backups and that the data is durable. What solution would you implement and why? Choose the correct answer from the options below
- A. Configure Amazon Storage Gateway with EBS volumes as the data source and store the backups on premise through the storage gateway
- B. Write a cron job on the server that compresses the data that needs to be backed up using gzip compression, then use the AWS CLI to copy the data into an S3 bucket for durability
- C. Use a lifecycle policy to back up EBS volumes stored on Amazon S3 for durability
- D. Write a cron job that uses the AWS CLI to take a snapshot of production EBS volumes. The data is durable because EBS snapshots are stored on the Amazon S3 standard storage class
Answer: D
Explanation:
You can take snapshots of EBS volumes, and to automate the process you can use the CLI. The snapshots are automatically stored on S3 for durability.
For more information on EBS snapshots, please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html
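A sketch of the cron-driven script in option D, using boto3 rather than the raw CLI; the tag used to select volumes is an assumption.

```python
import boto3

ec2 = boto3.client("ec2")

# Run daily from cron: snapshot every volume tagged Backup=true.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]  # placeholder tag
)["Volumes"]

for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description="automated daily backup",
    )
    print("Created", snap["SnapshotId"], "for", vol["VolumeId"])
```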
NEW QUESTION 23
......
Recommend!! Get the Full DOP-C01 dumps in VCE and PDF From DumpSolutions.com, Welcome to Download: https://www.dumpsolutions.com/DOP-C01-dumps/ (New 116 Q&As Version)