
Amazon Exam Amazon-DEA-C01 Topic 2 Question 12 Discussion

Actual exam question for Amazon's Amazon-DEA-C01 exam
Question #: 12
Topic #: 2

A data engineer is building an automated extract, transform, and load (ETL) ingestion pipeline by using AWS Glue. The pipeline ingests compressed files that are in an Amazon S3 bucket. The ingestion pipeline must support incremental data processing.

Which AWS Glue feature should the data engineer use to meet this requirement?

A. Workflows
B. Triggers
C. Job bookmarks
D. Classifiers

Suggested Answer: C

Problem Analysis:

The pipeline processes compressed files in S3 and must support incremental data processing.

The chosen AWS Glue feature must track processing progress so that the same data is not reprocessed.

Key Considerations:

Incremental data processing requires tracking which files or partitions have already been processed.

The solution must be automated and efficient for large-scale ETL jobs.

Solution Analysis:

Option A: Workflows

Workflows organize and orchestrate multiple Glue jobs, but they do not track which data has already been processed.

Option B: Triggers

Triggers initiate Glue jobs based on a schedule or events but do not track which data has been processed.

Option C: Job Bookmarks

Job bookmarks persist state about the data a job has already processed, enabling incremental processing.

On subsequent runs, Glue automatically skips files or partitions that earlier runs have handled (see the sketch after the option analysis).

Option D: Classifiers

Classifiers determine the schema of incoming data but do not handle incremental processing.
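For illustration, here is a minimal sketch of a Glue ETL script with job bookmarks in play. The database, table, and bucket names are hypothetical placeholders; the essential pieces are the transformation_ctx handles on the source and sink and the job.commit() call that persists the bookmark state.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)  # loads any bookmark state from prior runs

# transformation_ctx gives Glue a handle for persisting per-source bookmark
# state, so only data added since the last committed run is read.
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",            # hypothetical catalog database
    table_name="events",          # hypothetical table over the S3 files
    transformation_ctx="source",
)

glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/processed/"},  # hypothetical
    format="parquet",
    transformation_ctx="sink",
)

job.commit()  # persists the bookmark so the next run skips processed files
```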

Final Recommendation:

Job bookmarks are specifically designed to enable incremental data processing in AWS Glue ETL pipelines.
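Note that bookmarks also have to be enabled on the job, either in its definition or per run. A minimal sketch, assuming a Glue job named ingest-job (hypothetical), of turning bookmarks on at run time via the --job-bookmark-option argument:

```python
import boto3

glue = boto3.client("glue")

# Valid values: job-bookmark-enable, job-bookmark-disable, job-bookmark-pause.
glue.start_job_run(
    JobName="ingest-job",
    Arguments={"--job-bookmark-option": "job-bookmark-enable"},
)
```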


AWS Glue Job Bookmarks Documentation

AWS Glue ETL Features

Contribute your Thoughts:

Gracia
5 days ago
I believe triggers could also be used for incremental data processing in the AWS Glue pipeline.
upvoted 0 times
Ernie
5 days ago
Haha, I bet the data engineer is wishing they had a 'Lazy' feature to just do all the work for them. But C. Job bookmarks is probably the way to go here.
upvoted 0 times
Cecilia
7 days ago
I'm going to go with C. Job bookmarks. Seems like the perfect tool for keeping track of where the pipeline left off and picking up from there on the next run.
upvoted 0 times
Martina
11 days ago
I agree with Julio, job bookmarks keep track of processed data and support incremental processing.
upvoted 0 times
Mauricio
11 days ago
Hmm, I'm torn between B. Triggers and C. Job bookmarks. Triggers could be used to kick off the pipeline based on new file arrivals, but bookmarks might be better for actually tracking the incremental progress.
upvoted 0 times
Lorrine
11 hours ago
I think you're right, C. Job bookmarks would be better for tracking incremental progress.
upvoted 0 times
Julio
12 days ago
I think the data engineer should use job bookmarks for incremental data processing.
upvoted 0 times
Tayna
15 days ago
I think the answer is C. Job bookmarks. That seems like the most relevant feature for incremental data processing in an ETL pipeline.
upvoted 0 times
