
Amazon Exam Amazon-DEA-C01 Topic 2 Question 12 Discussion

Actual exam question for Amazon's Amazon-DEA-C01 exam
Question #: 12
Topic #: 2

A data engineer is building an automated extract, transform, and load (ETL) ingestion pipeline by using AWS Glue. The pipeline ingests compressed files that are in an Amazon S3 bucket. The ingestion pipeline must support incremental data processing.

Which AWS Glue feature should the data engineer use to meet this requirement?

A. Workflows
B. Triggers
C. Job bookmarks
D. Classifiers

Suggested Answer: C

Problem Analysis:

The pipeline processes compressed files in S3 and must support incremental data processing.

The chosen AWS Glue feature must track processing progress so that the same data is not reprocessed on subsequent runs.

Key Considerations:

Incremental data processing requires tracking which files or partitions have already been processed.

The solution must be automated and efficient for large-scale ETL jobs.

Solution Analysis:

Option A: Workflows

Workflows organize and orchestrate multiple Glue jobs but do not track progress for incremental data processing.

Option B: Triggers

Triggers initiate Glue jobs based on a schedule or events but do not track which data has been processed.

Option C: Job Bookmarks

Job bookmarks track the state of the data that has been processed, enabling incremental processing.

They automatically skip files or partitions that earlier job runs have already processed.

Option D: Classifiers

Classifiers determine the schema of incoming data but do not handle incremental processing.
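The bookmark idea can be illustrated with a small, self-contained Python simulation. This is not the Glue API, only a hypothetical sketch of what a bookmark does: persist the set of already-processed files between runs and process only what is new (all file names and the bookmark path below are made up for the example):

```python
import json
import os
import tempfile

def load_bookmark(path):
    """Return the set of file names recorded as already processed."""
    if os.path.exists(path):
        with open(path) as f:
            return set(json.load(f))
    return set()

def save_bookmark(path, processed):
    """Persist the processed-file set so the next run can skip those files."""
    with open(path, "w") as f:
        json.dump(sorted(processed), f)

def run_incremental_job(all_files, bookmark_path):
    """Process only files not seen in earlier runs, then advance the bookmark."""
    processed = load_bookmark(bookmark_path)
    new_files = [f for f in all_files if f not in processed]
    # ...real ETL work on new_files would happen here...
    processed.update(new_files)
    save_bookmark(bookmark_path, processed)
    return new_files

bookmark = os.path.join(tempfile.gettempdir(), "demo_bookmark.json")
if os.path.exists(bookmark):
    os.remove(bookmark)  # start from a clean state for the demo

# First run: everything is new.
print(run_incremental_job(["a.gz", "b.gz"], bookmark))          # ['a.gz', 'b.gz']
# Second run: only the newly arrived file is processed.
print(run_incremental_job(["a.gz", "b.gz", "c.gz"], bookmark))  # ['c.gz']
```

In AWS Glue itself this state is stored by the service per job, not in a file you manage, but the skip-what-was-seen behavior is the same.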

Final Recommendation:

Job bookmarks are specifically designed to enable incremental data processing in AWS Glue ETL pipelines.
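Enabling bookmarks is a per-job setting rather than transformation code: the job is created (or updated) with the `--job-bookmark-option` default argument set to `job-bookmark-enable`. A hedged sketch using the AWS CLI, where the job name, role ARN, and script location are placeholders rather than values from this question:

```shell
# Create a Glue job with job bookmarks enabled (name, role, and S3 path are placeholders).
aws glue create-job \
  --name my-incremental-etl-job \
  --role arn:aws:iam::123456789012:role/MyGlueServiceRole \
  --command '{"Name":"glueetl","ScriptLocation":"s3://my-bucket/scripts/etl.py"}' \
  --default-arguments '{"--job-bookmark-option":"job-bookmark-enable"}'
```

The other accepted values are `job-bookmark-disable` (the default) and `job-bookmark-pause`. The ETL script must also call `job.init()` at the start and `job.commit()` at the end for the bookmark state to advance between runs.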


AWS Glue Job Bookmarks Documentation

AWS Glue ETL Features

Contribute your Thoughts:

Gracia
2 months ago
I believe triggers could also be used for incremental data processing in the AWS Glue pipeline.
Ernie
2 months ago
Haha, I bet the data engineer is wishing they had a 'Lazy' feature to just do all the work for them. But C. Job bookmarks is probably the way to go here.
Malcom
1 month ago
B. Triggers might be helpful for scheduling the pipeline to run at specific times.
Erinn
2 months ago
A. Workflows could also be useful for organizing the ETL process.
Johnetta
2 months ago
C. Yeah, I agree. Job bookmarks would definitely help with incremental data processing.
Cecilia
2 months ago
I'm going to go with C. Job bookmarks. Seems like the perfect tool for keeping track of where the pipeline left off and picking up from there on the next run.
Martina
2 months ago
I agree with Julio, job bookmarks keep track of processed data and support incremental processing.
Mauricio
2 months ago
Hmm, I'm torn between B. Triggers and C. Job bookmarks. Triggers could be used to kick off the pipeline based on new file arrivals, but bookmarks might be better for actually tracking the incremental progress.
Reyes
1 month ago
You make a good point, maybe we can use both features together for a more robust solution.
Johnetta
2 months ago
But wouldn't B. Triggers help kick off the pipeline when new files arrive?
Lorrine
2 months ago
I think you're right, C. Job bookmarks would be better for tracking incremental progress.
Julio
2 months ago
I think the data engineer should use job bookmarks for incremental data processing.
Tayna
2 months ago
I think the answer is C. Job bookmarks. That seems like the most relevant feature for incremental data processing in an ETL pipeline.
Benedict
2 months ago
Yes, job bookmarks help in maintaining the state of the ETL job and processing only the new data for incremental updates.
Pura
2 months ago
I think job bookmarks are essential for keeping track of the last processed data and ensuring only new data is ingested.
Simona
2 months ago
I agree, using job bookmarks would be the best option for supporting incremental data processing in the ETL pipeline.
