
Google Professional Data Engineer Exam - Topic 1 Question 101 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 101
Topic #: 1
[All Professional Data Engineer Questions]

Your car factory is pushing machine measurements as messages into a Pub/Sub topic in your Google Cloud project. A Dataflow streaming job that you wrote with the Apache Beam SDK reads these messages, sends acknowledgments to Pub/Sub, applies some custom business logic in a DoFn instance, and writes the result to BigQuery. You want to ensure that if your business logic fails on a message, the message will be sent to a Pub/Sub topic that you want to monitor for alerting purposes. What should you do?

Suggested Answer: C

To ensure that messages that fail processing in your Dataflow job are sent to a Pub/Sub topic for monitoring and alerting, the best approach is to use Pub/Sub's dead-letter topic feature. Here's why option C is the best choice:

Dead-Letter Topic:

Pub/Sub's dead-letter topic feature redirects messages that cannot be processed successfully to a designated topic: once a message exceeds the subscription's maximum number of delivery attempts without being acknowledged, Pub/Sub forwards it to the dead-letter topic instead of retrying it. This ensures that failed messages are not lost and can be reviewed for debugging and alerting purposes.

Monitoring and Alerting:

By specifying a new Pub/Sub topic as the dead-letter topic, you can use Cloud Monitoring to track metrics such as subscription/dead_letter_message_count, providing visibility into the number of failed messages.

This allows you to set up alerts based on these metrics to notify the appropriate teams when failures occur.

Steps to Implement:

Enable Dead-Letter Topic:

Configure your Pub/Sub pull subscription to enable dead lettering and specify the new Pub/Sub topic for dead-letter messages.
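As a sketch, this configuration step might look like the following gcloud commands. The project, topic, and subscription names are placeholders; note that the Pub/Sub service agent also needs IAM permissions for dead lettering to work, which the question does not mention:

```shell
# Create the topic that will receive messages whose processing failed.
gcloud pubsub topics create dead-letter-topic --project=my-project

# Attach a dead-letter policy to the existing pull subscription. After
# max-delivery-attempts unacknowledged deliveries, Pub/Sub forwards the
# message to the dead-letter topic instead of redelivering it.
gcloud pubsub subscriptions update machine-measurements-sub \
  --project=my-project \
  --dead-letter-topic=dead-letter-topic \
  --max-delivery-attempts=5

# The Pub/Sub service agent must be able to publish to the dead-letter
# topic and to acknowledge (subscribe to) the source subscription.
PROJECT_NUMBER=$(gcloud projects describe my-project --format='value(projectNumber)')
PUBSUB_SA="service-${PROJECT_NUMBER}@gcp-sa-pubsub.iam.gserviceaccount.com"

gcloud pubsub topics add-iam-policy-binding dead-letter-topic \
  --member="serviceAccount:${PUBSUB_SA}" --role=roles/pubsub.publisher

gcloud pubsub subscriptions add-iam-policy-binding machine-measurements-sub \
  --member="serviceAccount:${PUBSUB_SA}" --role=roles/pubsub.subscriber
```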

Set Up Monitoring:

Use Cloud Monitoring to monitor the subscription/dead_letter_message_count metric on your pull subscription.

Configure alerts based on this metric to notify the team of any processing failures.
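The monitoring step above can be sketched as an alerting policy on the dead_letter_message_count metric, created from a JSON policy file. The display names, threshold, and file name are illustrative, and `gcloud alpha monitoring policies create` may require the alpha component to be installed:

```shell
# Alert whenever any message is forwarded to the dead-letter topic.
cat > dead-letter-alert.json <<'EOF'
{
  "displayName": "Dead-lettered Pub/Sub messages",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "dead_letter_message_count > 0",
      "conditionThreshold": {
        "filter": "metric.type=\"pubsub.googleapis.com/subscription/dead_letter_message_count\" AND resource.type=\"pubsub_subscription\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 0,
        "duration": "0s",
        "aggregations": [
          { "alignmentPeriod": "60s", "perSeriesAligner": "ALIGN_SUM" }
        ]
      }
    }
  ]
}
EOF

# Create the alerting policy; notification channels can be attached
# with --notification-channels=CHANNEL_ID to page the team.
gcloud alpha monitoring policies create --policy-from-file=dead-letter-alert.json
```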


References:

- Pub/Sub Dead Letter Policy
- Cloud Monitoring with Pub/Sub

Contribute your Thoughts:

Odelia (9 days ago): I think C is the better choice for dead lettering.

Ilona (15 days ago): Option A sounds solid for handling failures.

Milly (20 days ago): Option B seems off to me; retaining acknowledged messages doesn't really address the failure scenario we need to handle.

Stefan (26 days ago): I feel like we practiced something similar with monitoring metrics in Cloud Monitoring, but I can't recall the specifics for each option.

Junita (1 month ago): I'm not entirely sure about the dead-lettering concept, but I think option C might be the right approach for tracking failures.

Mila (1 month ago): I remember we discussed using side outputs in Dataflow for handling failed messages, so option A sounds familiar.

Kristofer (1 month ago): This seems like a straightforward problem, but I want to make sure I understand the details correctly. I think I'll go with option A and use an exception-handling block to push the failed messages to a new Pub/Sub topic. That way, I can monitor the topic and get alerts if there are any issues.

Tamra (1 month ago): Okay, I've got it! The key is to use a side output in the Dataflow job to send the failed messages to a new Pub/Sub topic, and then monitor that topic using the num_unacked_messages_by_region metric. That way, we can easily identify and address any issues with the business logic.

Daniel (1 month ago): Hmm, I'm a bit confused about the different options here. I'm not sure if enabling dead lettering or using a snapshot would be the best approach for this use case. I'll need to think through the pros and cons of each option.

Rickie (1 month ago): This question seems straightforward. I think the best approach is to use an exception-handling block in the Dataflow job to push failed messages to a new Pub/Sub topic, and then monitor that topic using Cloud Monitoring.

Malcom (9 months ago): Haha, I bet the folks who wrote this question thought they were being clever. But I'm going with C; dead lettering is the way to go, in my opinion.
  Christene (8 months ago): Dead lettering seems like the most reliable option to ensure that no messages are lost in case of failures.
  Leatha (8 months ago): I would go with C as well. It's always better to have a designated place for failed messages.
  Doyle (8 months ago): Yeah, I agree. It's important to have a mechanism in place to deal with messages that couldn't be processed.
  Cordelia (8 months ago): I think C is the best option too. Dead lettering is a good way to handle failed messages.

Ligia (9 months ago): I'm torn between A and C. Both seem viable, but I think C might be a bit more straightforward and easier to implement.
  Reena (8 months ago): True, A could be a good solution as well. It's a tough decision.
  Alease (8 months ago): But A also seems like a good choice, with the side output feature.
  Temeka (8 months ago): I agree, C does seem like a straightforward option.
  Janessa (8 months ago): I think C might be the way to go. It sounds simpler.

Danica (10 months ago): C looks good to me. Dead lettering is a common approach for handling failed messages, and monitoring the dead-letter topic is a smart move.
  Cecil (9 months ago): I agree with A. It's important to handle failed messages at the code level to ensure proper processing and monitoring.
  Deeanna (9 months ago): I think A might be a better option. Using exception handling in the Dataflow DoFn code seems like a more direct way to handle failed messages.
  Rodrigo (9 months ago): C looks good to me. Dead lettering is a common approach for handling failed messages, and monitoring the dead-letter topic is a smart move.

Deandrea (10 months ago): I'm not sure, but maybe enabling dead lettering in the Pub/Sub pull subscription could also be a good option for monitoring failed messages.

Ria (10 months ago): I agree with Tequila. It's important to monitor the messages that fail to be transformed and handle them accordingly.

Letha (10 months ago): Option A seems like the way to go. I like the idea of using a side output to handle failed messages and monitor them separately.
  Tu (9 months ago): I agree. Monitoring and handling failed messages in a separate topic can help with troubleshooting and alerting.
  Ezekiel (10 months ago): Yes, that sounds like a good approach. It's important to have a way to track and handle failed messages effectively.
  Shay (10 months ago): Option A seems like the way to go. I like the idea of using a side output to handle failed messages and monitor them separately.

Tequila (10 months ago): I think we should use an exception-handling block in the Dataflow DoFn code to push failed messages to a new Pub/Sub topic for monitoring.
