
Amazon Exam DVA-C02 Topic 7 Question 36 Discussion

Actual exam question for Amazon's DVA-C02 exam
Question #: 36
Topic #: 7
[All DVA-C02 Questions]

A company built an online event platform. For each event, the company organizes quizzes and generates leaderboards based on the quiz scores. The company stores the leaderboard data in Amazon DynamoDB and retains the data for 30 days after an event is complete. The company then uses a scheduled job to delete the old leaderboard data.

The DynamoDB table is configured with a fixed write capacity. During the months when many events occur, the DynamoDB write API requests are throttled when the scheduled delete job runs.

A developer must create a long-term solution that deletes the old leaderboard data and optimizes write throughput.

Which solution meets these requirements?

Suggested Answer: A

DynamoDB TTL (Time-to-Live): A native feature that automatically deletes items after a specified expiration time.

Efficiency: Eliminates the need for scheduled deletion jobs, optimizing write throughput by avoiding potential throttling conflicts.

Seamless Integration: TTL works directly within DynamoDB, requiring minimal development overhead.


DynamoDB TTL Documentation: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html
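The TTL approach described above can be sketched as follows. TTL expects the expiration attribute to be a DynamoDB Number holding an epoch timestamp in seconds; items whose timestamp has passed are deleted by DynamoDB in the background at no write-capacity cost. The table name and attribute names (`Leaderboard`, `expires_at`, etc.) below are illustrative, not from the question:

```python
import time

SECONDS_PER_DAY = 24 * 60 * 60
RETENTION_DAYS = 30  # delete leaderboard items 30 days after the event ends


def leaderboard_item(event_id, player, score, event_end_epoch):
    """Build a DynamoDB item dict whose `expires_at` attribute drives TTL.

    `expires_at` must be a Number attribute containing an epoch time in
    seconds; DynamoDB's TTL process removes the item after that time,
    replacing the scheduled delete job from the scenario.
    """
    return {
        "event_id": {"S": event_id},
        "player": {"S": player},
        "score": {"N": str(score)},
        # TTL attribute: epoch seconds, stored as a DynamoDB Number (string-encoded)
        "expires_at": {"N": str(event_end_epoch + RETENTION_DAYS * SECONDS_PER_DAY)},
    }


item = leaderboard_item("ev-123", "alice", 9800, int(time.time()))
print(item["expires_at"]["N"])  # epoch seconds 30 days after the event ends
```

Enabling TTL on the table is a one-time configuration, for example via the AWS CLI: `aws dynamodb update-time-to-live --table-name Leaderboard --time-to-live-specification "Enabled=true,AttributeName=expires_at"`.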

Contribute your Thoughts:

Bea
26 days ago
Option A is the clear winner here. TTL is built for this kind of use case. Set it and forget it, baby!
upvoted 0 times
Norah
29 days ago
D? Really? Increasing write capacity just to accommodate a scheduled delete job? That's like using a sledgehammer to crack a nut.
upvoted 0 times
Johnna
1 month ago
Hmm, I'm torn between B and C. Why not just use a serverless function triggered by a CloudWatch event? That's a simple yet effective solution.
upvoted 0 times
Angelica
7 days ago
B: I agree, it would help optimize write throughput and ensure old data is deleted in a timely manner.
upvoted 0 times
Bok
23 days ago
A: I think using DynamoDB Streams to schedule and delete the leaderboard data is the best option.
upvoted 0 times
Penney
1 month ago
I'd say C is the best choice. Step Functions can handle the scheduling and orchestration of the deletion process more robustly.
upvoted 0 times
Nu
26 days ago
But with DynamoDB Streams, you can have more control over the deletion process and ensure it runs smoothly.
upvoted 0 times
Thurman
29 days ago
I think A could work too. Setting a TTL attribute would automatically delete the old data after 30 days.
upvoted 0 times
Rosendo
1 month ago
I'm not sure about option B or C, but setting a higher write capacity with option D could also work.
upvoted 0 times
Talia
1 month ago
Option B is the way to go. DynamoDB Streams make it easy to trigger a function to delete the old data without affecting write throughput.
upvoted 0 times
Valda
21 days ago
That makes sense. It's important to optimize write throughput while deleting old data.
upvoted 0 times
Roselle
24 days ago
I agree, using DynamoDB Streams sounds like the best solution for this scenario.
upvoted 0 times
Jaclyn
2 months ago
I agree with Verona. Using TTL would automatically delete the old data and optimize write throughput.
upvoted 0 times
Verona
2 months ago
I think option A (configure a TTL attribute for the leaderboard data) would be a good solution.
upvoted 0 times
