Amazon Exam DAS-C01 Topic 7 Question 88 Discussion

Actual exam question for Amazon's DAS-C01 exam
Question #: 88
Topic #: 7

A company uses Amazon EC2 instances to receive files from external vendors throughout each day. At the end of each day, the EC2 instances combine the files into a single file, perform gzip compression, and upload the single file to an Amazon S3 bucket. The total size of all the files is approximately 100 GB each day.

When the files are uploaded to Amazon S3, an AWS Batch job runs a COPY command to load the files into an Amazon Redshift cluster.

Which solution will MOST accelerate the COPY process?

Suggested Answer: B
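
For anyone who wants to see the suggested answer in practice, here is a minimal sketch in Python, assuming answer B is the "split the files to match the number of slices" option described in the comments below. Every name in it (the bucket, key prefix, staging table, IAM role, and slice count) is a placeholder, not anything from the question. Instead of uploading one 100 GB gzip file, the end-of-day job splits the combined file into as many gzip parts as the cluster has slices and uploads them under a shared key prefix, so the COPY can spread the load across every slice.

    import gzip
    import os

    import boto3

    # All names below are illustrative placeholders; none come from the question.
    BUCKET = "example-vendor-files"      # hypothetical S3 bucket
    PREFIX = "daily/2024-01-01/part-"    # shared key prefix for the split files
    NUM_SLICES = 16                      # total slices across the Redshift nodes


    def split_and_upload(path):
        """Split the combined daily file into NUM_SLICES gzip parts and
        upload each part to S3 under the shared prefix."""
        s3 = boto3.client("s3")
        target = os.path.getsize(path) // NUM_SLICES  # rough bytes per part
        part, buf, written = 0, [], 0
        with open(path, "rb") as src:
            for line in src:  # split on line boundaries so no row is cut in half
                buf.append(line)
                written += len(line)
                if written >= target and part < NUM_SLICES - 1:
                    upload_part(s3, part, b"".join(buf))
                    part, buf, written = part + 1, [], 0
        if buf:
            upload_part(s3, part, b"".join(buf))


    def upload_part(s3, part, data):
        key = "{}{:04d}.gz".format(PREFIX, part)
        s3.put_object(Bucket=BUCKET, Key=key, Body=gzip.compress(data))


    # The AWS Batch job then points COPY at the shared prefix; Redshift assigns
    # one file to each slice, so all slices decompress and load in parallel.
    COPY_SQL = """
        COPY vendor_staging  -- hypothetical staging table
        FROM 's3://{bucket}/{prefix}'
        IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'  -- placeholder
        GZIP;
    """.format(bucket=BUCKET, prefix=PREFIX)

The part count is the whole trick: COPY assigns entire files to slices, so a single 100 GB gzip file can be read by only one slice, while a set of files matched to the slice count (or a multiple of it) keeps every slice busy.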

Contribute your Thoughts:

Broderick
7 days ago
I'm feeling a bit lost on this one. All these options sound like they could work, but I'm not sure which one would MOST accelerate the COPY. Maybe I should just guess and hope for the best?
upvoted 0 times
...
Yuki
8 days ago
Option D sounds interesting, with the idea of sharding the files based on the DISTKEY columns. That could potentially improve performance, but I'm not sure how easy it would be to implement. I might need to do some research on Redshift's data distribution strategies.
upvoted 0 times
...
Virgina
9 days ago
Hmm, this is a tricky one. I think the key is figuring out how to optimize the COPY process, but I'm not sure which approach is the best. I'm leaning towards option B, as splitting the files to match the number of slices in the Redshift cluster seems like it could help distribute the load.
upvoted 0 times
...
Josephine
10 days ago
I'm not sure about this question. The options seem a bit technical, and I'm not confident in my understanding of Amazon Redshift and COPY commands. I'll have to read through them carefully.
upvoted 0 times
...
