Amazon DBS-C01 Exam

Certification Provider: Amazon
Exam Name: AWS Certified Database - Specialty
Number of questions in our database: 322
Exam Version: Mar. 23, 2024
DBS-C01 Exam Official Topics:
  • Topic 1: Determine access control and authentication mechanisms/ Determine strategies for disaster recovery and high availability
  • Topic 2: Manage the operational environment of a database solution/ Design database solutions for performance, compliance, and scalability
  • Topic 3: Recognize potential security vulnerabilities within database solutions/ Workload-Specific Database Design
  • Topic 4: Determine monitoring and alerting strategies/ Troubleshoot and resolve common database issues
  • Topic 5: Determine data preparation and migration strategies/ Automate database solution deployments
  • Topic 6: Select appropriate database services for specific types of data and workloads/ Optimize database performance
  • Topic 7: Compare the costs of database solutions/ Determine maintenance tasks and processes/ Determine backup and restore strategies
  • Topic 8: Encrypt data at rest and in transit/ Execute and validate data migration/ Monitoring and Troubleshooting
  • Topic 9: Evaluate auditing solutions/ Deployment and Migration/ Management and Operations/ Database Security

Free Amazon DBS-C01 Actual Exam Questions

The questions for DBS-C01 were last updated on Mar. 23, 2024

Question #1

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted.

Which actions should the database specialist take to meet these requirements? (Select TWO.)

Correct Answer: B, C
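The answer choices themselves are not reproduced on this page. As a rough illustration of the two fixes the scenario calls for, here is a minimal boto3 sketch, assuming hypothetical cluster and parameter group names: storage encryption can only be chosen when a DocumentDB cluster is created (an existing unencrypted cluster cannot be encrypted in place), and TLS is enforced through the cluster parameter group.

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

# Encryption at rest: StorageEncrypted can only be set at creation time,
# so a new, encrypted cluster is created and the data is migrated to it.
docdb.create_db_cluster(
    DBClusterIdentifier="app-docdb-encrypted",       # hypothetical name
    Engine="docdb",
    MasterUsername="dbadmin",
    MasterUserPassword="REPLACE_WITH_SECRET",        # placeholder credential
    StorageEncrypted=True,                           # uses the default AWS KMS key
)

# Encryption in transit: enforce TLS via a (non-default) cluster parameter group.
docdb.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="app-docdb-params",  # hypothetical group
    Parameters=[{
        "ParameterName": "tls",
        "ParameterValue": "enabled",
        "ApplyMethod": "pending-reboot",             # tls is a static parameter
    }],
)
```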

Question #3

A database specialist is launching a test graph database using Amazon Neptune for the first time. The database specialist needs to insert millions of rows of test observations from a .csv file that is stored in Amazon S3. The database specialist has been using a series of API calls to upload the data to the Neptune DB instance.

Which combination of steps would allow the database specialist to upload the data faster? (Choose three.)

Correct Answer: B, E, F

Explanation from Amazon documents:

To upload data faster to a Neptune DB instance from a .csv file stored in Amazon S3, the database specialist should use the Neptune Bulk Loader, a feature that loads data from external files directly into a Neptune DB instance. The Neptune Bulk Loader is faster and has less overhead than individual API calls, such as SPARQL INSERT statements or Gremlin addV and addE steps. The Neptune Bulk Loader supports both RDF and Gremlin data formats.

To use the Neptune Bulk Loader, the database specialist needs to do the following:

Ensure the vertices and edges are specified in different .csv files with proper header column formatting. This is required for the Gremlin data format, which uses two .csv files: one for vertices and one for edges. The first row of each file must contain the column names, which must match the property names of the graph elements. The files must also have a column named ~id for vertices and ~from and ~to for edges, which specify the unique identifiers of the graph elements.
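For illustration, a minimal sketch of what those headers can look like, written with Python's csv module (file names, labels, and properties are hypothetical):

```python
import csv

# Vertices: ~id and ~label columns, plus typed property columns.
with open("vertices.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["~id", "~label", "name:String", "age:Int"])
    w.writerow(["v1", "person", "alice", "34"])
    w.writerow(["v2", "person", "bob", "29"])

# Edges: ~id, ~from, ~to, and ~label columns referencing the vertex ~id values.
with open("edges.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["~id", "~from", "~to", "~label"])
    w.writerow(["e1", "v1", "v2", "knows"])
```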

Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket. This is required for the Neptune DB instance to read the data from the S3 bucket. The IAM role must have a trust policy that allows Neptune to assume the role, and a permissions policy that allows access to the S3 bucket and objects.
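A hedged boto3 sketch of such a role (role name and bucket are hypothetical; Neptune assumes roles through the rds.amazonaws.com service principal, and the role must also be attached to the cluster, e.g. with the add-role-to-db-cluster operation):

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: allow Neptune (via the rds.amazonaws.com principal) to assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "rds.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="NeptuneLoadFromS3",
                AssumeRolePolicyDocument=json.dumps(trust))

# Permissions policy: read-only access to the bucket holding the .csv files.
perms = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-bucket",       # hypothetical bucket
                     "arn:aws:s3:::example-bucket/*"],
    }],
}
iam.put_role_policy(RoleName="NeptuneLoadFromS3",
                    PolicyName="neptune-s3-read",
                    PolicyDocument=json.dumps(perms))
```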

Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint. This is required for the Neptune DB instance to communicate with the S3 bucket without going through the public internet. The S3 VPC endpoint must be in the same VPC as the Neptune DB instance. The HTTP POST request must specify the source parameter as the S3 URI of the .csv file, and optionally other parameters such as format, failOnError, parallelism, etc.
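A minimal sketch of that POST using Python's standard library, assuming a hypothetical cluster endpoint, bucket, and role ARN (the request must be issued from inside the VPC, e.g. from an EC2 host):

```python
import json
import urllib.request

payload = {
    "source": "s3://example-bucket/neptune-load/",                  # hypothetical S3 prefix
    "format": "csv",
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # hypothetical role
    "region": "us-east-1",
    "failOnError": "FALSE",
    "parallelism": "MEDIUM",
}

req = urllib.request.Request(
    "https://my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader",
    data=json.dumps(payload).encode(),          # data= makes this a POST
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The response includes a loadId that can be polled for load status.
    print(resp.read().decode())
```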

Therefore, options B, E, and F are the correct steps to upload the data faster. Option A is not necessary because Amazon Cognito is not used to authenticate the Neptune DB instance to the S3 bucket. Option C is not suitable because AWS DMS is not designed for loading graph data into Neptune. Option D is not efficient because curling the S3 URI and running the addVertex or addEdge commands would be slower and more costly than using the Neptune Bulk Loader.


Question #4

An online bookstore uses Amazon Aurora MySQL as its backend database. After the online bookstore added a popular book to the online catalog, customers began reporting intermittent timeouts on the checkout page. A database specialist determined that increased load was causing locking contention on the database. The database specialist wants to automatically detect and diagnose database performance issues and to resolve bottlenecks faster.

Which solution will meet these requirements?

Correct Answer: A

Explanation from Amazon documents:

Performance Insights is a feature of Amazon Aurora MySQL that helps you quickly assess the load on your database and determine when and where to take action. Performance Insights displays a dashboard that shows the database load in terms of average active sessions (AAS), which is the average number of sessions that are actively running SQL statements at any given time. Performance Insights also shows the top SQL statements, waits, hosts, and users that are contributing to the database load.
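Performance Insights is enabled per DB instance; a minimal boto3 sketch, with a hypothetical Aurora instance identifier:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Turn on Performance Insights for one Aurora MySQL instance in the cluster;
# 7 days of retention falls within the free tier.
rds.modify_db_instance(
    DBInstanceIdentifier="bookstore-aurora-instance-1",  # hypothetical instance
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,
    ApplyImmediately=True,
)
```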

Amazon DevOps Guru is a fully managed service that helps you improve the operational performance and availability of your applications by detecting operational issues and recommending specific actions for remediation. Amazon DevOps Guru applies machine learning to automatically analyze data such as application metrics, logs, events, and traces for behaviors that deviate from normal operating patterns. Amazon DevOps Guru supports Amazon RDS as a resource type and can monitor the performance and availability of your RDS databases.
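DevOps Guru coverage is defined by a resource collection; one way to bring the database's stack under analysis, sketched with boto3 and a hypothetical CloudFormation stack name:

```python
import boto3

guru = boto3.client("devops-guru", region_name="us-east-1")

# Add the stack containing the Aurora cluster to the DevOps Guru
# resource collection so its RDS resources are monitored.
guru.update_resource_collection(
    Action="ADD",
    ResourceCollection={
        "CloudFormation": {"StackNames": ["bookstore-db-stack"]}  # hypothetical stack
    },
)
```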

By turning on Performance Insights for the Aurora MySQL database and configuring and turning on Amazon DevOps Guru for RDS, the database specialist can automatically detect and diagnose database performance issues and resolve bottlenecks faster. This solution will allow the database specialist to monitor the database load and identify the root causes of performance problems using Performance Insights, and receive actionable insights and recommendations from Amazon DevOps Guru to improve the operational performance and availability of the database.

Therefore, option A is the correct solution to meet the requirements. Option B is not sufficient because creating a CPU usage alarm will only notify the database specialist when the CPU utilization is high, but it will not help diagnose or resolve the database performance issues. Option C is not efficient because using the Amazon RDS query editor to get the process ID of the query that is causing the database to lock and running a command to end the process will require manual intervention and may cause data loss or inconsistency. Option D is not efficient because using the SELECT INTO OUTFILE S3 statement to query data from the database and saving the data directly to an Amazon S3 bucket will incur additional time and cost, and using Amazon Athena to analyze the files for long-running queries will not help prevent or resolve locking contention on the database.


Question #5

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis.

How should a database specialist automate the process of backing up the cluster data in compliance with these policies?

Correct Answer: B

According to the Amazon Redshift documentation, you can enable database encryption for your clusters to help protect data at rest. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption keys in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys.

To copy encrypted snapshots across Regions, you need to create a snapshot copy grant in the destination Region and specify a CMK in that Region. You also need to configure cross-Region snapshots in the source Region and provide the destination Region, the snapshot copy grant, and retention periods for the snapshots. This way, you can automate the process of backing up the cluster data in compliance with the corporate policies.
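Sketched with boto3 (cluster name, grant name, and CMK ARN are hypothetical; note the grant is created in the destination Region, while cross-Region copy is enabled in the source Region):

```python
import boto3

# In the destination Region: grant Amazon Redshift permission to use a
# customer managed KMS key there for the copied snapshots.
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",          # hypothetical grant name
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/"
             "1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder CMK ARN
)

# In the source Region: turn on automatic cross-Region snapshot copy.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",          # hypothetical cluster
    DestinationRegion="us-west-2",
    RetentionPeriod=7,                              # days to keep copied snapshots
    SnapshotCopyGrantName="dr-copy-grant",
)
```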


