
Amazon Exam DAS-C01 Topic 2 Question 83 Discussion

Actual exam question for Amazon's DAS-C01 exam
Question #: 83
Topic #: 2

A large energy company is using Amazon QuickSight to build dashboards and report the historical usage data of its customers. This data is hosted in Amazon Redshift. The reports need access to all of the fact tables' billions of records to create aggregations in real time, grouping by multiple dimensions.

A data analyst created the dataset in QuickSight by using a SQL query and not SPICE. Business users have noted that the response time is not fast enough to meet their needs.

Which action would speed up the response time for the reports with the LEAST implementation effort?

Suggested Answer: A
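
The suggested answer corresponds to switching the existing dataset from direct query to SPICE (option A, as quoted in the discussion below). As a rough illustration only, here is a minimal sketch using boto3's QuickSight UpdateDataSet call to flip the import mode; the account ID, dataset ID, data source ARN, SQL text, and column list are all placeholders, and in practice the same change is a single toggle in the QuickSight dataset editor.

```python
import boto3

# Hypothetical IDs/ARNs -- replace with values from your own QuickSight account.
ACCOUNT_ID = "111122223333"
DATASET_ID = "usage-history-dataset"
DATASOURCE_ARN = "arn:aws:quicksight:us-east-1:111122223333:datasource/redshift-usage"

quicksight = boto3.client("quicksight", region_name="us-east-1")

# Re-declare the dataset with the same custom SQL, but switch the import mode
# from DIRECT_QUERY to SPICE so aggregations run against the in-memory SPICE
# engine instead of hitting Redshift on every dashboard load.
quicksight.update_data_set(
    AwsAccountId=ACCOUNT_ID,
    DataSetId=DATASET_ID,
    Name="customer-usage-history",
    PhysicalTableMap={
        "usage-sql": {
            "CustomSql": {
                "DataSourceArn": DATASOURCE_ARN,
                "Name": "usage_query",
                "SqlQuery": "SELECT ...",  # placeholder for the analyst's existing SQL query
                "Columns": [
                    {"Name": "customer_id", "Type": "STRING"},
                    {"Name": "usage_kwh", "Type": "DECIMAL"},
                    {"Name": "reading_date", "Type": "DATETIME"},
                ],
            }
        }
    },
    ImportMode="SPICE",
)

# Trigger an initial SPICE load for the updated dataset.
quicksight.create_ingestion(
    AwsAccountId=ACCOUNT_ID,
    DataSetId=DATASET_ID,
    IngestionId="initial-spice-load",
)
```

Because SPICE serves the aggregations from its in-memory store, dashboard response times no longer depend on scanning the Redshift fact table for every view, which is why this option carries the least implementation effort.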

Contribute your Thoughts:

Pearlene
5 months ago
Haha, I hear you. Spark can be a bit intimidating, but it's worth it in the long run. Although, I have to say, option D also seems like a decent choice. Stored procedures in Redshift can be pretty efficient, and we wouldn't have to worry about the complexity of Spark. Decisions, decisions...
upvoted 0 times
Jin
5 months ago
B) Use AWS Glue to create an Apache Spark job that joins the fact table with the dimensions. Load the data into a new table
upvoted 0 times
...
Afton
5 months ago
A) Use QuickSight to modify the current dataset to use SPICE
upvoted 0 times
...
...
Douglass
5 months ago
Ooh, I like that idea! Plus, with Spark, we can leverage its powerful processing capabilities to handle those billions of records. I'm definitely leaning towards option B as well. Though, I have to admit, the thought of dealing with Spark makes my head spin a little. Maybe I should have studied more during the Spark training session.
upvoted 0 times
...
Yuonne
5 months ago
I'm not sure about that. Materialized views can be great, but they require manual maintenance and refreshes. What if the data changes frequently? I think option B, using AWS Glue to create a Spark job, might be a better approach. That way, the data is automatically updated and we don't have to worry about manual maintenance.
upvoted 0 times
...
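
To make Jin's and Yuonne's option B concrete, here is a hedged sketch of what such a Glue Spark job might look like. The Data Catalog database, table names, columns, and S3 target path are all assumptions for illustration; a real job would also need a Glue connection to Redshift and an IAM role with the right permissions.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Load the fact table and a dimension table catalogued from Redshift
# (hypothetical database and table names).
fact = glue_context.create_dynamic_frame.from_catalog(
    database="energy_dw", table_name="fact_usage"
).toDF()
dim_customer = glue_context.create_dynamic_frame.from_catalog(
    database="energy_dw", table_name="dim_customer"
).toDF()

# Pre-join and pre-aggregate so the dashboard query no longer scans
# billions of raw fact rows.
aggregated = (
    fact.join(dim_customer, "customer_id")
        .withColumn("usage_month", F.date_trunc("month", F.col("reading_date")))
        .groupBy("region", "rate_plan", "usage_month")
        .agg(F.sum("usage_kwh").alias("total_usage_kwh"))
)

# Write the result to a new table (Parquet on S3 here; it could also be
# written back to Redshift through a Glue connection).
aggregated.write.mode("overwrite").parquet("s3://example-bucket/usage_aggregates/")

job.commit()
```

Compared with simply enabling SPICE, this adds a job to author, schedule, and monitor, which is why it scores worse on the "least implementation effort" requirement.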
Cristy
5 months ago
Hmm, this is a tricky one. We need to find a way to speed up the response time without too much implementation effort. I'm leaning towards option C - creating a materialized view in Amazon Redshift. That way, the data is already pre-joined and ready for QuickSight to use, which should improve the performance.
upvoted 0 times
...
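
For Cristy's option C, the pre-joined, pre-aggregated data would live in a Redshift materialized view. The sketch below issues hypothetical DDL through the Redshift Data API with boto3; the cluster identifier, database, secret ARN, and table/column names are invented for illustration. Redshift materialized views also support AUTO REFRESH (within the documented limits on eligible queries), which addresses Yuonne's concern about manual refreshes.

```python
import boto3

# Hypothetical cluster, database, and credentials secret.
CLUSTER_ID = "energy-dw-cluster"
DATABASE = "analytics"
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-creds"

# Pre-join the fact table with a dimension table and pre-aggregate by the
# reporting dimensions; QuickSight's dataset SQL can then select from the
# materialized view instead of the raw fact table.
CREATE_MV = """
CREATE MATERIALIZED VIEW mv_usage_by_dimension
AUTO REFRESH YES
AS
SELECT d.region,
       d.rate_plan,
       DATE_TRUNC('month', f.reading_date) AS usage_month,
       SUM(f.usage_kwh) AS total_usage_kwh
FROM fact_usage f
JOIN dim_customer d ON d.customer_id = f.customer_id
GROUP BY 1, 2, 3;
"""

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Run the DDL against the cluster via the Redshift Data API.
redshift_data.execute_statement(
    ClusterIdentifier=CLUSTER_ID,
    Database=DATABASE,
    SecretArn=SECRET_ARN,
    Sql=CREATE_MV,
)
```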
