
Google Exam Professional Data Engineer Topic 3 Question 94 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 94
Topic #: 3

You recently deployed several data processing jobs into your Cloud Composer 2 environment. You notice that some tasks are failing in Apache Airflow. On the monitoring dashboard, you see an increase in total worker memory usage, and worker pods are being evicted. You need to resolve these errors. What should you do?

Choose 2 answers.

A. Increase the DAG parsing interval.
B. Increase the memory available to the Airflow workers.
C. Increase the maximum number of workers and reduce worker concurrency.
D. Increase the memory available to the Airflow triggerer.
E. Increase the Cloud Composer 2 environment size from medium to large.

Suggested Answer: B, C

To resolve issues related to increased memory usage and worker pod evictions in your Cloud Composer 2 environment, the following steps are recommended:

Increase Memory Available to Airflow Workers:

Increasing the memory allocated to the Airflow workers lets them handle memory-intensive tasks without exceeding their limits, reducing the likelihood of pod evictions.

Increase Maximum Number of Workers and Reduce Worker Concurrency:

Increasing the number of workers allows the workload to be distributed across more pods, preventing any single pod from becoming overwhelmed.

Reducing worker concurrency limits the number of tasks that each worker can handle simultaneously, thereby lowering the memory consumption per worker.

Steps to Implement:

Increase Worker Memory:

Modify the workload configuration in the Cloud Composer environment settings to allocate more memory to the Airflow workers.
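A minimal sketch of this step, assuming the gcloud CLI; "example-environment" and "us-central1" are placeholder names, and the resource values are illustrative (Composer 2 requires worker memory to stay within an allowed ratio of worker CPU):

    # Raise per-worker CPU and memory (illustrative values).
    gcloud composer environments update example-environment \
        --location us-central1 \
        --worker-cpu 2 \
        --worker-memory 8GB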

Adjust Worker and Concurrency Settings:

Increase the maximum number of workers in the Cloud Composer environment settings.

Reduce the concurrency setting for Airflow workers to ensure that each worker handles fewer tasks at a time, thus consuming less memory per worker.
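A matching sketch for this step, again with placeholder names; the celery-worker_concurrency key maps to Airflow's [celery] worker_concurrency setting, which Composer 2 otherwise derives from worker CPU and memory:

    # Let autoscaling add more worker pods (illustrative limit).
    gcloud composer environments update example-environment \
        --location us-central1 \
        --max-workers 6

    # Lower worker concurrency so each worker runs fewer tasks at once.
    gcloud composer environments update example-environment \
        --location us-central1 \
        --update-airflow-configs celery-worker_concurrency=12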


References: Cloud Composer Worker Configuration; Scaling Airflow Workers

Contribute your Thoughts:

Rodrigo
2 months ago
B and C are the clear winners here. Gotta give those workers some more breathing room and add some reinforcements. Wait, why would we want to increase the memory for the Airflow triggerer (D)? That's like trying to fit an elephant in a phone booth.
upvoted 0 times
Lai
2 months ago
I'd say B and E are the way to go. Boost that worker memory and scale up the whole environment. Can't have those tasks failing on our watch! Although, increasing the DAG parsing interval (A) might be an interesting experiment, just to see how long it takes for everything to grind to a halt.
upvoted 0 times
Novella
2 months ago
Definitely go for B and C. Increasing the memory and worker capacity should help handle the increased processing load. I wonder if the engineers had a bet going on how many worker pods they could evict before someone noticed.
upvoted 0 times
Luisa
6 days ago
E) Increase the Cloud Composer 2 environment size from medium to large.
upvoted 0 times
Diane
7 days ago
That sounds like a good plan. Hopefully, it will help stabilize the data processing jobs.
upvoted 0 times
Alaine
23 days ago
C) Increase the maximum number of workers and reduce worker concurrency.
upvoted 0 times
Mitsue
1 month ago
B) Increase the memory available to the Airflow workers.
upvoted 0 times
...
...
Catalina
2 months ago
I also think we should increase the Cloud Composer 2 environment size from medium to large to handle the increased workload.
upvoted 0 times
Jacinta
2 months ago
I agree with Patria. That could help resolve the memory usage issues.
upvoted 0 times
Patria
3 months ago
I think we should increase the memory available to the Airflow workers.
upvoted 0 times
Luann
3 months ago
Hmm, I think increasing the memory available to the Airflow workers (B) and the maximum number of workers (C) would be a good place to start. We don't want those poor workers to get evicted like a bad tenant!
upvoted 0 times
Devora
1 month ago
Sounds like a plan. Let's monitor the environment after making the changes to ensure everything runs smoothly.
upvoted 0 times
Catina
2 months ago
Great suggestions! Let's make those adjustments to improve the performance of our data processing jobs.
upvoted 0 times
Francoise
2 months ago
Yes, and increasing the maximum number of workers and reducing worker concurrency (C) can also prevent worker pod evictions.
upvoted 0 times
Brice
2 months ago
I agree, increasing the memory available to the Airflow workers (B) should help with the memory usage.
upvoted 0 times
...
...
