Amazon-DEA-C01 Exam Questions

Exam Name: AWS Certified Data Engineer - Associate
Exam Code: Amazon-DEA-C01
Related Certification(s): Amazon AWS Certified Data Engineer Associate Certification
Certification Provider: Amazon
Actual Exam Duration: 130 Minutes
Number of Amazon-DEA-C01 practice questions in our database: 231 (updated: Feb. 27, 2026)
Expected Amazon-DEA-C01 Exam Topics, as suggested by Amazon:
  • Topic 1: Data Ingestion and Transformation: This section assesses data engineers on their ability to design scalable data ingestion pipelines. It focuses on collecting and transforming data from various sources for analysis. Candidates should be skilled in using AWS data services to create secure, optimized ingestion processes that support data analysis.
  • Topic 2: Data Store Management: This domain evaluates database administrators and data engineers who manage AWS data storage. It covers creating and optimizing relational databases, NoSQL databases, and data lakes. The focus is on performance, scalability, and data integrity, ensuring efficient and reliable storage solutions.
  • Topic 3: Data Operations and Support: Targeted at database administrators and engineers, this section covers maintaining and monitoring AWS data workflows. It emphasizes automation, monitoring, troubleshooting, and pipeline optimization, ensuring smooth operations and resolving system issues effectively.
  • Topic 4: Data Security and Governance: This section assesses database and cloud security engineers on securing AWS data and ensuring policy compliance. It focuses on access control, encryption, privacy, and auditing, requiring candidates to design governance frameworks that meet regulatory standards.
Discuss Amazon Amazon-DEA-C01 Topics, Questions or Ask Anything Related

Lavelle

2 days ago
Familiarize yourself with the exam format. PASS4SUCCESS practice tests mirrored the actual exam, so I knew exactly what to expect on test day.
upvoted 0 times
...

Sanjuana

10 days ago
Streaming ingestion with ETL and windowing got messy fast; PASS4SUCCESS practice exams gave me a clear method to identify the right windowing and state management.
upvoted 0 times
...

Latricia

17 days ago
I found Kinesis data streams questions and throughput sizing to be brutal, especially under time pressure; PASS4SUCCESS practice made me map the steps quickly and accurately.
upvoted 0 times
...

Corazon

24 days ago
The hardest topic was IAM permissions and resource access in data engineering workflows; the practice tests highlighted subtle pitfalls, and PASS4SUCCESS prep trained me to catch them.
upvoted 0 times
...

Laine

1 month ago
I felt overwhelmed by AWS services overlap, but PASS4SUCCESS clarified it with focused drills and explanations, and now I'm crossing the finish line—believe in your preparation, you'll succeed.
upvoted 0 times
...

Cecily

1 month ago
I struggled with data modeling for analytics and understanding when to use lakehouse vs. data warehouse patterns; PASS4SUCCESS drills on schema design really helped me see the right trade-offs.
upvoted 0 times
...

Marsha

2 months ago
Performance tuning questions were frequent. Know about Redshift WLM and query optimization. Pass4Success really helped me prepare efficiently!
upvoted 0 times
...

Felix

2 months ago
Passing the AWS Certified Data Engineer - Associate exam was a relief, and the Pass4Success practice questions were a great help. One question that left me guessing was about Data Ingestion and Transformation, focusing on the use of Amazon Kinesis Firehose for data delivery. I wasn't completely confident about the buffering options, but I passed.
upvoted 0 times
...

Cammy

2 months ago
I successfully cleared the AWS Certified Data Engineer - Associate exam, thanks to the Pass4Success practice questions. A challenging question involved Data Operations and Support, particularly about using CloudWatch for monitoring AWS Glue jobs. I was unsure about the specific metrics to track, but I passed nonetheless.
upvoted 0 times
...

Floyd

2 months ago
Don't underestimate the breadth of the exam. PASS4SUCCESS practice exams covered a wide range of topics, ensuring I was well-rounded in my preparation.
upvoted 0 times
...

Karma

2 months ago
Data encryption scenarios were common. Understand client-side and server-side encryption options. The exam was tough but I passed!
upvoted 0 times
...

Denise

3 months ago
My hands trembled before the exam, yet PASS4SUCCESS provided structured labs and quick reviews that boosted my confidence; keep studying consistently, future test-taker, you've got this.
upvoted 0 times
...

Kenneth

3 months ago
Having passed the AWS Certified Data Engineer - Associate exam, I can attest to the usefulness of the Pass4Success practice questions. A question that caught me off guard was related to Data Security and Governance, specifically about implementing VPC endpoints for S3 access. I hesitated on the configuration details, but I succeeded.
upvoted 0 times
...

Zona

3 months ago
Revise, revise, revise. PASS4SUCCESS practice tests allowed me to identify and repeatedly practice the most critical concepts.
upvoted 0 times
...

Cyril

4 months ago
Confidence is key! PASS4SUCCESS practice exams boosted my confidence and made me feel prepared to tackle the real thing.
upvoted 0 times
...

Ryann

4 months ago
Manage your time wisely during the exam. PASS4SUCCESS practice tests taught me how to pace myself and allocate the right amount of time for each question.
upvoted 0 times
...

Sue

4 months ago
I was nervous about the breadth of topics, but PASS4SUCCESS walked me through practice exams and concise notes, giving me confidence to tackle the real test—you can do this too.
upvoted 0 times
...

Pamela

4 months ago
The toughest part for me was designing data pipelines for cost-and-performance optimization; the tricky questions on partition pruning and materialized views kept tripping me up until PASS4SUCCESS practice exams clarified the approach.
upvoted 0 times
...

Tammara

5 months ago
Passing the AWS Data Engineer exam was a game-changer for me. PASS4SUCCESS practice exams were a lifesaver - they really helped me identify my weak areas and focus my studies.
upvoted 0 times
...

Chantell

5 months ago
The AWS Certified Data Engineer - Associate exam is now behind me, and the Pass4Success practice questions were quite helpful. One question that puzzled me was about Data Store Management, particularly regarding the use of Amazon Aurora for high availability. I wasn't entirely sure about the failover mechanisms, but I managed to pass.
upvoted 0 times
...

Jame

5 months ago
I recently passed the AWS Certified Data Engineer - Associate exam, with the help of Pass4Success practice questions. A question that left me pondering was from the Data Ingestion and Transformation domain, asking about the use of AWS Lambda for data transformation tasks. I was unsure about the memory and timeout configurations, but I got through it.
upvoted 0 times
...

Stanton

5 months ago
Passed my AWS Certified Data Engineer exam today! Pass4Success's materials were perfect for last-minute prep. Thank you!
upvoted 0 times
...

Margot

5 months ago
Clearing the AWS Certified Data Engineer - Associate exam was a great achievement, aided by the Pass4Success practice questions. A question that challenged me was about Data Operations and Support, specifically concerning the automation of ETL jobs using AWS Data Pipeline. I wasn't sure about the best way to handle job dependencies, but I passed.
upvoted 0 times
...

Stefanie

5 months ago
Questions on data lifecycle management appeared. Study S3 Lifecycle policies and Glacier retrieval options. Thanks Pass4Success for the great preparation!
upvoted 0 times
...

Nakisha

6 months ago
I passed the AWS Certified Data Engineer - Associate exam, and the Pass4Success practice questions were a key resource. One question that I found difficult was related to Data Security and Governance, focusing on IAM policies for cross-account access. I was uncertain about the correct policy structure, but I still managed to pass.
upvoted 0 times
...

Carlene

6 months ago
AWS Data Engineer certification complete! Pass4Success's relevant questions were crucial. Thanks for the quick study resources!
upvoted 0 times
...

Bea

6 months ago
Data monitoring and alerting questions came up. Know CloudWatch metrics and alarms for data services. Passed the exam with confidence!
upvoted 0 times
...

Tomoko

8 months ago
Serverless data processing was a key topic. Understand AWS Lambda integrations with data services. Pass4Success materials were incredibly accurate!
upvoted 0 times
...

Carlota

8 months ago
Just aced the AWS Data Engineer cert! Pass4Success's exam questions were a lifesaver. Thank you for the efficient preparation!
upvoted 0 times
...

Justine

8 months ago
Data partitioning strategies were important. Study partitioning in S3, Athena, and Redshift. The exam was challenging but I managed to pass!
upvoted 0 times
...

Dallas

9 months ago
Real-time analytics questions appeared. Know Kinesis Data Analytics and its use cases. Thanks Pass4Success for the comprehensive prep!
upvoted 0 times
...

Hui

9 months ago
AWS Certified Data Engineer - nailed it! Pass4Success's practice questions were invaluable. Thanks for the time-saving prep!
upvoted 0 times
...

Leonor

10 months ago
Data replication scenarios were tested. Understand Aurora's global database feature and DynamoDB global tables. Passed with flying colors!
upvoted 0 times
...

Rosenda

10 months ago
Passed the AWS Data Engineer exam with flying colors! Pass4Success's materials were spot-on. Thanks for the quick study guide!
upvoted 0 times
...

Diego

10 months ago
Cost optimization for data solutions was emphasized. Study Reserved Instances and Savings Plans. Pass4Success really prepared me well!
upvoted 0 times
...

Elbert

11 months ago
Machine learning integration questions came up. Know SageMaker's capabilities for data processing. The exam was tough but I passed!
upvoted 0 times
...

Johnetta

11 months ago
AWS Data Engineer cert achieved! Pass4Success's relevant questions made all the difference. Grateful for the speedy prep!
upvoted 0 times
...

Fletcher

11 months ago
Data archiving and retrieval was tested. Understand Glacier storage classes and retrieval options. Pass4Success materials were incredibly helpful!
upvoted 0 times
...

Andra

12 months ago
NoSQL database questions appeared frequently. Study DynamoDB's capacity modes and access patterns. Passed the exam with confidence!
upvoted 0 times
...

Kaitlyn

12 months ago
Just became an AWS Certified Data Engineer! Pass4Success's questions were perfect for quick preparation. Thank you!
upvoted 0 times
...

Cecilia

1 year ago
Data migration scenarios were common. Know AWS Database Migration Service (DMS) and Snowball options. Thanks Pass4Success for the great prep!
upvoted 0 times
...

Marquetta

1 year ago
Stream processing was emphasized. Understand Kinesis Analytics and its SQL-based processing. Exam was challenging but I managed to pass!
upvoted 0 times
...

Wade

1 year ago
AWS Certified Data Engineer - done! Pass4Success's practice tests were key to my success. Thanks for the efficient prep!
upvoted 0 times
...

Glory

1 year ago
Lots of questions on data quality and governance. Familiarize yourself with AWS Glue DataBrew and Lake Formation. Pass4Success made a big difference!
upvoted 0 times
...

Tatum

1 year ago
Data visualization questions appeared. Know QuickSight's capabilities and integration with other AWS services. The exam was tough but I passed!
upvoted 0 times
...

Melodie

1 year ago
Having passed the AWS Certified Data Engineer - Associate exam, I must say that the Pass4Success practice questions were beneficial. A question that stumped me was from the Data Store Management domain, asking about the differences in consistency models between Amazon S3 and Amazon DynamoDB. I was a bit unsure about eventual consistency implications, but I succeeded.
upvoted 0 times
...

Vicki

1 year ago
Passed my AWS Data Engineer cert today! Pass4Success's exam questions were incredibly helpful. Thank you!
upvoted 0 times
...

Gaston

1 year ago
Data warehousing with Redshift was a major focus. Study Redshift Spectrum and query optimization techniques. Pass4Success materials were spot on!
upvoted 0 times
...

Pedro

1 year ago
Security questions were frequent. Understand IAM roles, KMS encryption, and VPC configurations for data services. Passed thanks to thorough preparation!
upvoted 0 times
...

Tanesha

1 year ago
The AWS Certified Data Engineer - Associate exam is behind me now, and the Pass4Success practice questions were quite helpful. One question that left me guessing was about Data Ingestion and Transformation, particularly regarding the use of Kinesis Data Streams for real-time data processing. I wasn't completely confident about the shard management strategies, but I passed.
upvoted 0 times
...

Fredric

1 year ago
AWS Data Engineer exam: check! Pass4Success's materials were a time-saver. Couldn't have done it without you!
upvoted 0 times
...

Glenn

1 year ago
Data catalog management came up often. Know the differences between Glue Data Catalog and Lake Formation. Pass4Success really helped me prepare quickly!
upvoted 0 times
...

Eliseo

1 year ago
I successfully passed the AWS Certified Data Engineer - Associate exam, thanks in part to the Pass4Success practice questions. A challenging question involved Data Operations and Support, specifically about monitoring and optimizing AWS Redshift clusters. I was unsure about the best metrics to monitor for performance tuning, but I managed to pass regardless.
upvoted 0 times
...

Shawna

1 year ago
Data transformation was a key topic. Review Glue ETL jobs and AWS Lambda for serverless transformations. The exam was challenging but manageable.
upvoted 0 times
...

Eloisa

1 year ago
Passing the AWS Certified Data Engineer - Associate exam was a relief, and the Pass4Success practice questions played a part in that. One question that puzzled me was from the Data Security and Governance domain, asking about the best practices for implementing encryption at rest in Amazon S3. I hesitated between using SSE-S3 and SSE-KMS, but it worked out in the end.
upvoted 0 times
...

Daron

1 year ago
Wow, aced the AWS Data Engineer cert! Pass4Success made it possible with their relevant practice questions. Grateful!
upvoted 0 times
...

Lashonda

1 year ago
Encountered several questions on data ingestion. Make sure you understand Kinesis Data Streams vs. Firehose. Thanks Pass4Success for the great prep!
upvoted 0 times
...

Edgar

1 year ago
I recently cleared the AWS Certified Data Engineer - Associate exam, and the Pass4Success practice questions were a great help. A tricky question I encountered was related to Data Store Management, specifically about the differences between Amazon RDS and DynamoDB for handling transactional workloads. I was a bit uncertain about the nuances of ACID compliance in both services, but I got through it.
upvoted 0 times
...

Ressie

1 year ago
Just passed the AWS Certified Data Engineer - Associate exam! Data Lake questions were prevalent. Study S3 storage classes and access patterns.
upvoted 0 times
...

Ilene

1 year ago
Just passed the AWS Certified Data Engineer exam! Pass4Success's questions were spot-on. Thanks for the quick prep!
upvoted 0 times
...

Karina

1 year ago
Having just passed the AWS Certified Data Engineer - Associate exam, I can say that the Pass4Success practice questions were instrumental in my preparation. One question that caught me off guard was about the best practices for setting up data pipelines in AWS Glue, which falls under the Data Ingestion and Transformation domain. I wasn't entirely sure about the optimal way to handle schema evolution in Glue, but thankfully, I still managed to pass.
upvoted 0 times
...

Free Amazon Amazon-DEA-C01 Exam Actual Questions

Note: Premium Questions for Amazon-DEA-C01 were last updated on Feb. 27, 2026 (see below)

Question #1

An ecommerce company wants to use AWS to migrate data pipelines from an on-premises environment into the AWS Cloud. The company currently uses a third-party tool in the on-premises environment to orchestrate data ingestion processes.

The company wants a migration solution that does not require the company to manage servers. The solution must be able to orchestrate Python and Bash scripts. The solution must not require the company to refactor any code.

Which solution will meet these requirements with the LEAST operational overhead?

Correct Answer: B

The ecommerce company wants to migrate its data pipelines into the AWS Cloud without managing servers, and the solution must orchestrate Python and Bash scripts without refactoring code. Amazon Managed Workflows for Apache Airflow (Amazon MWAA) is the most suitable solution for this scenario.

Option B: Amazon Managed Workflows for Apache Airflow (Amazon MWAA). MWAA is a managed orchestration service that supports Python and Bash scripts via Directed Acyclic Graphs (DAGs) for workflows. It is a fully managed version of Apache Airflow, which is commonly used for orchestrating complex data workflows, making it an ideal choice for migrating existing pipelines without refactoring. It supports Python, Bash, and other scripting languages, and the company would not need to manage the underlying infrastructure.
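
For illustration only, a minimal Airflow DAG of the kind MWAA runs could wrap the existing scripts with the BashOperator and PythonOperator; the DAG name, script path, and Python function below are hypothetical placeholders, not details from the exam question.

    # Hypothetical sketch: an Airflow DAG (runnable on MWAA) that reuses existing Bash and Python scripts.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator

    def run_python_step():
        # Placeholder for the company's existing Python transformation logic.
        print("running existing Python transformation")

    with DAG(
        dag_id="migrated_ingestion_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="run_bash_ingest",
            bash_command="bash /opt/scripts/ingest.sh",  # hypothetical script path
        )
        transform = PythonOperator(
            task_id="run_python_transform",
            python_callable=run_python_step,
        )
        ingest >> transform

Because the scripts run unchanged inside the operators, no refactoring is needed, and AWS manages the Airflow environment itself.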

Other options:

AWS Lambda (Option A) is more suited for event-driven workflows but would require breaking down the pipeline into individual Lambda functions, which may require refactoring.

AWS Step Functions (Option C) is good for orchestration but lacks native support for Python and Bash without using Lambda functions, and it may require code changes.

AWS Glue (Option D) is an ETL service primarily for data transformation and not suitable for orchestrating general scripts without modification.


Amazon Managed Workflows for Apache Airflow (MWAA) Documentation

Question #2

A data engineer needs to create a new empty table in Amazon Athena that has the same schema as an existing table named old-table.

Which SQL statement should the data engineer use to meet this requirement?

A.

B.

C.

D.

Correct Answer: D

Problem Analysis:

The goal is to create a new empty table in Athena with the same schema as an existing table (old_table).

The solution must avoid copying any data.

Key Considerations:

CREATE TABLE AS (CTAS) is commonly used in Athena for creating new tables based on an existing table.

Adding the WITH NO DATA clause ensures only the schema is copied, without transferring any data.

Solution Analysis:

Option A: Copies both schema and data. Does not meet the requirement for an empty table.

Option B: Inserts data into an existing table, which does not create a new table.

Option C: Creates an empty table but does not copy the schema.

Option D: Creates a new table with the same schema and ensures it is empty by using WITH NO DATA.

Final Recommendation:

Use D. CREATE TABLE new_table AS (SELECT * FROM old_table) WITH NO DATA to create an empty table with the same schema.
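
As an optional illustration, the recommended statement could also be submitted programmatically through the Athena API with boto3; the database name and results bucket below are hypothetical placeholders.

    # Hypothetical sketch: run the CTAS ... WITH NO DATA statement through the Athena API.
    import boto3

    athena = boto3.client("athena")

    response = athena.start_query_execution(
        QueryString="CREATE TABLE new_table AS (SELECT * FROM old_table) WITH NO DATA",
        QueryExecutionContext={"Database": "analytics_db"},  # placeholder database
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},  # placeholder bucket
    )
    print(response["QueryExecutionId"])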


Athena CTAS Queries

CREATE TABLE Statement in Athena

Question #3

A media company wants to improve a system that recommends media content to customers based on user behavior and preferences. To improve the recommendation system, the company needs to incorporate insights from third-party datasets into the company's existing analytics platform.

The company wants to minimize the effort and time required to incorporate third-party datasets.

Which solution will meet these requirements with the LEAST operational overhead?

Correct Answer: A

AWS Data Exchange is a service that makes it easy to find, subscribe to, and use third-party data in the cloud. It provides a secure and reliable way to access and integrate data from various sources, such as data providers, public datasets, or AWS services. Using AWS Data Exchange, you can browse and subscribe to data products that suit your needs, and then use API calls or the AWS Management Console to export the data to Amazon S3, where you can use it with your existing analytics platform. This solution minimizes the effort and time required to incorporate third-party datasets, as you do not need to set up and manage data pipelines, storage, or access controls. You also benefit from the data quality and freshness provided by the data providers, who can update their data products as frequently as needed.
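
As a rough sketch of how little plumbing is involved once a data product has been subscribed to, an export to Amazon S3 could be requested through the AWS Data Exchange API with boto3 along these lines; the data set, revision, and asset IDs and the bucket name are hypothetical placeholders.

    # Hypothetical sketch: export a subscribed AWS Data Exchange asset to an S3 bucket.
    import boto3

    dx = boto3.client("dataexchange")

    job = dx.create_job(
        Type="EXPORT_ASSETS_TO_S3",
        Details={
            "ExportAssetsToS3": {
                "DataSetId": "example-data-set-id",     # placeholder
                "RevisionId": "example-revision-id",    # placeholder
                "AssetDestinations": [
                    {
                        "AssetId": "example-asset-id",  # placeholder
                        "Bucket": "example-analytics-bucket",
                        "Key": "third-party/provider-data.csv",
                    }
                ],
            }
        },
    )
    dx.start_job(JobId=job["Id"])  # the exported data then lands in S3 for the existing analytics platform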

The other options are not optimal for the following reasons:

B. Use API calls to access and integrate third-party datasets from AWS. This option is vague and does not specify which AWS service or feature is used to access and integrate third-party datasets. AWS offers a variety of services and features that can help with data ingestion, processing, and analysis, but not all of them are suitable for the given scenario. For example, AWS Glue is a serverless data integration service that can help you discover, prepare, and combine data from various sources, but it requires you to create and run data extraction, transformation, and loading (ETL) jobs, which can add operational overhead.

C. Use Amazon Kinesis Data Streams to access and integrate third-party datasets from AWS CodeCommit repositories. This option is not feasible, as AWS CodeCommit is a source control service that hosts secure Git-based repositories, not a data source that can be accessed by Amazon Kinesis Data Streams. Amazon Kinesis Data Streams is a service that enables you to capture, process, and analyze data streams in real time, such as clickstream data, application logs, or IoT telemetry. It does not support accessing and integrating data from AWS CodeCommit repositories, which are meant for storing and managing code, not data.

D. Use Amazon Kinesis Data Streams to access and integrate third-party datasets from Amazon Elastic Container Registry (Amazon ECR). This option is also not feasible, as Amazon ECR is a fully managed container registry service that stores, manages, and deploys container images, not a data source that can be accessed by Amazon Kinesis Data Streams. Amazon Kinesis Data Streams does not support accessing and integrating data from Amazon ECR, which is meant for storing and managing container images, not data.


AWS Data Exchange User Guide

AWS Data Exchange FAQs

AWS Glue Developer Guide

AWS CodeCommit User Guide

Amazon Kinesis Data Streams Developer Guide

Amazon Elastic Container Registry User Guide

Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source

Question #4

A company is migrating its database servers from Amazon EC2 instances that run Microsoft SQL Server to Amazon RDS for Microsoft SQL Server DB instances. The company's analytics team must export large data elements every day until the migration is complete. The data elements are the result of SQL joins across multiple tables. The data must be in Apache Parquet format. The analytics team must store the data in Amazon S3.

Which solution will meet these requirements in the MOST operationally efficient way?

Correct Answer: A

Option A is the most operationally efficient way to meet the requirements because it minimizes the number of steps and services involved in the data export process. AWS Glue is a fully managed service that can extract, transform, and load (ETL) data from various sources to various destinations, including Amazon S3. AWS Glue can also convert data to different formats, such as Parquet, which is a columnar storage format that is optimized for analytics. By creating a view in the SQL Server databases that contains the required data elements, the AWS Glue job can select the data directly from the view without having to perform any joins or transformations on the source data. The AWS Glue job can then transfer the data in Parquet format to an S3 bucket and run on a daily schedule.
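
A Glue job of the kind Option A describes could be sketched roughly as follows; the JDBC connection details, view name, and S3 path are hypothetical placeholders, and in practice the credentials would come from a Glue connection or AWS Secrets Manager rather than being hard-coded.

    # Hypothetical sketch of the Glue job in Option A: read the pre-joined SQL Server view
    # over JDBC and write the result to S3 in Parquet format.
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # The view already contains the joined data elements, so the job performs no joins itself.
    source = glue_context.create_dynamic_frame.from_options(
        connection_type="sqlserver",
        connection_options={
            "url": "jdbc:sqlserver://example-host:1433;databaseName=sales",  # placeholder
            "dbtable": "dbo.daily_export_view",                              # placeholder view
            "user": "glue_user",                                             # placeholder
            "password": "example-password",                                  # placeholder
        },
    )

    # Write Parquet to the analytics bucket; a Glue trigger can run the job on a daily schedule.
    glue_context.write_dynamic_frame.from_options(
        frame=source,
        connection_type="s3",
        connection_options={"path": "s3://example-analytics-bucket/daily-exports/"},
        format="parquet",
    )
    job.commit()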

Option B is not operationally efficient because it involves multiple steps and services to export the data. SQL Server Agent is a tool that can run scheduled tasks on SQL Server databases, such as executing SQL queries. However, SQL Server Agent cannot directly export data to S3, so the query output must be saved as .csv objects on the EC2 instance. Then, an S3 event must be configured to trigger an AWS Lambda function that can transform the .csv objects to Parquet format and upload them to S3. This option adds complexity and latency to the data export process and requires additional resources and configuration.

Option C is not operationally efficient because it introduces an unnecessary step of running an AWS Glue crawler to read the view. An AWS Glue crawler is a service that can scan data sources and create metadata tables in the AWS Glue Data Catalog. The Data Catalog is a central repository that stores information about the data sources, such as schema, format, and location. However, in this scenario, the schema and format of the data elements are already known and fixed, so there is no need to run a crawler to discover them. The AWS Glue job can directly select the data from the view without using the Data Catalog. Running a crawler adds extra time and cost to the data export process.

Option D is not operationally efficient because it requires custom code and configuration to query the databases and transform the data. An AWS Lambda function is a service that can run code in response to events or triggers, such as Amazon EventBridge. Amazon EventBridge is a service that can connect applications and services with event sources, such as schedules, and route them to targets, such as Lambda functions. However, in this scenario, using a Lambda function to query the databases and transform the data is not the best option because it requires writing and maintaining code that uses JDBC to connect to the SQL Server databases, retrieve the required data, convert the data to Parquet format, and transfer the data to S3. This option also has limitations on the execution time, memory, and concurrency of the Lambda function, which may affect the performance and reliability of the data export process.


AWS Certified Data Engineer - Associate DEA-C01 Complete Study Guide

AWS Glue Documentation

Working with Views in AWS Glue

Converting to Columnar Formats

Question #5

A retail company uses an Amazon Redshift data warehouse and an Amazon S3 bucket. The company ingests retail order data into the S3 bucket every day.

The company stores all order data at a single path within the S3 bucket. The data has more than 100 columns. The company ingests the order data from a third-party application that generates more than 30 files in CSV format every day. Each CSV file is between 50 and 70 MB in size.

The company uses Amazon Redshift Spectrum to run queries that select sets of columns. Users aggregate metrics based on daily orders. Recently, users have reported that the performance of the queries has degraded. A data engineer must resolve the performance issues for the queries.

Which combination of steps will meet this requirement with LEAST developmental effort? (Select TWO.)

Correct Answer: A, C

The performance issue in Amazon Redshift Spectrum queries arises due to the nature of CSV files, which are row-based storage formats. Spectrum is more optimized for columnar formats, which significantly improve performance by reducing the amount of data scanned. Also, partitioning data based on relevant columns like order date can further reduce the amount of data scanned, as queries can focus only on the necessary partitions.

A. Configure the third-party application to create the files in a columnar format:

Columnar formats (like Parquet or ORC) store data in a way that is optimized for analytical queries because they allow queries to scan only the columns required, rather than scanning all columns in a row-based format like CSV.

Amazon Redshift Spectrum works much more efficiently with columnar formats, reducing the amount of data that needs to be scanned, which improves query performance.


C. Partition the order data in the S3 bucket based on order date:

Partitioning the data on columns like order date allows Redshift Spectrum to skip scanning unnecessary partitions, leading to improved query performance.

By organizing data into partitions, you minimize the number of files Spectrum has to read, further optimizing performance.

Alternatives Considered:

B (Develop an AWS Glue ETL job): While consolidating files can improve performance by reducing the number of small files (which can be inefficient to process), it adds additional ETL complexity. Switching to a columnar format (Option A) and partitioning (Option C) provides more significant performance improvements with less development effort.

D and E (JSON-related options): Using JSON format or the SUPER type in Redshift introduces complexity and isn't as efficient as the proposed solutions, especially since JSON is not a columnar format.
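
To illustrate the combined effect of options A and C, a small PySpark step (for example, inside a Glue job) could rewrite the daily CSV drops as Parquet partitioned by order date; the S3 paths and partition column name are hypothetical placeholders.

    # Hypothetical sketch: convert the daily CSV files to Parquet partitioned by order date,
    # so Redshift Spectrum scans only the columns and partitions a query actually needs.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("orders-to-parquet").getOrCreate()

    orders = spark.read.option("header", "true").csv("s3://example-bucket/raw-orders/")  # placeholder path

    (
        orders.write.mode("append")
        .partitionBy("order_date")  # placeholder partition column
        .parquet("s3://example-bucket/curated-orders/")  # placeholder path
    )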

Amazon Redshift Spectrum Documentation

Columnar Formats and Data Partitioning in S3


Unlock Premium Amazon-DEA-C01 Exam Questions with Advanced Practice Test Features:
  • Select Question Types you want
  • Set your Desired Pass Percentage
  • Allocate Time (Hours : Minutes)
  • Create Multiple Practice tests with Limited Questions
  • Customer Support
Get Full Access Now
