
Amazon Exam DAS-C01 Topic 3 Question 84 Discussion

Actual exam question for Amazon's DAS-C01 exam
Question #: 84
Topic #: 3

A company is building an analytical solution that includes Amazon S3 as data lake storage and Amazon Redshift for data warehousing. The company wants to use Amazon Redshift Spectrum to query the data that is stored in Amazon S3.

Which steps should the company take to improve performance when using Amazon Redshift Spectrum to query the S3 data files? (Select THREE.)

Use gzip compression with individual file sizes of 1-5 GB

Suggested Answer: B, C, D
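
For context on what "using Redshift Spectrum to query the S3 data" looks like in practice, here is a minimal sketch using the boto3 Redshift Data API. The cluster name, database, IAM role, schema, and table names are hypothetical placeholders, not details from the question.

    import boto3

    # All identifiers below (cluster, database, role, schema, table) are
    # hypothetical placeholders for illustration.
    client = boto3.client("redshift-data")

    create_schema_sql = """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    FROM DATA CATALOG
    DATABASE 'datalake_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """

    client.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql=create_schema_sql,
    )

    # Spectrum scans the S3 files directly, so the tips discussed below
    # (columnar format, partitioning, right-sized files) all work by
    # reducing how much data each query has to scan.
    client.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="dev",
        DbUser="awsuser",
        Sql="SELECT event_date, count(*) FROM spectrum.events GROUP BY event_date;",
    )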

Contribute your Thoughts:

Celia
9 days ago
Good point. But I'm not sure about the file size recommendation. Keeping all files about the same size, between 1 and 5 GB, sounds more efficient than splitting them into KB-sized files.
upvoted 0 times
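
A minimal PySpark sketch of the compaction Celia describes, i.e. rewriting many small objects as a smaller number of similarly sized files (paths and the target file count are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("compact-spectrum-files").getOrCreate()

    # Hypothetical prefix that has accumulated many small files.
    df = spark.read.parquet("s3://example-data-lake/curated/events/")

    # Rewrite as a fixed number of similarly sized files so Spectrum can
    # parallelize the scan evenly. The target count of 64 is only
    # illustrative; the right number depends on the data volume.
    df.repartition(64).write.mode("overwrite") \
        .parquet("s3://example-data-lake/curated/events_compacted/")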
Leonida
10 days ago
Absolutely, and partitioning the data based on common query predicates is also a great idea. That way, Redshift Spectrum can quickly identify the relevant partitions and avoid scanning unnecessary data.
upvoted 0 times
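
A minimal PySpark sketch of the partitioning Leonida mentions; the bucket paths and the event_date partition column are hypothetical examples:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partition-for-spectrum").getOrCreate()

    df = spark.read.parquet("s3://example-data-lake/curated/events/")

    # Partition on a column that appears in common WHERE clauses
    # (event_date here). Each value becomes its own S3 prefix, so queries
    # that filter on it skip the other prefixes entirely.
    df.write.mode("overwrite") \
        .partitionBy("event_date") \
        .parquet("s3://example-data-lake/partitioned/events/")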
Juliana
11 days ago
I agree, this is a complex issue. From what I understand, we should definitely use a columnar storage file format like Parquet or ORC. That will help with efficient data access and processing.
upvoted 0 times
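
As a concrete illustration of Juliana's point, here is a minimal PySpark sketch (bucket and prefix names are hypothetical) that rewrites raw delimited data as Parquet:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

    # Hypothetical raw-zone prefix holding delimited source files.
    df = spark.read.csv("s3://example-data-lake/raw/events/",
                        header=True, inferSchema=True)

    # Parquet (or ORC) stores data by column, so Spectrum reads only the
    # columns a query references instead of scanning whole rows.
    df.write.mode("overwrite").parquet("s3://example-data-lake/curated/events/")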
Robt
12 days ago
Hmm, this is a tricky question. We need to consider a few factors to improve Redshift Spectrum performance when querying S3 data files.
upvoted 0 times
