
Google Professional Data Engineer Exam - Topic 3 Question 94 Discussion

Actual exam question for Google's Professional Data Engineer exam
Question #: 94
Topic #: 3
[All Professional Data Engineer Questions]

You recently deployed several data processing jobs into your Cloud Composer 2 environment. You notice that some Apache Airflow tasks are failing. On the monitoring dashboard, you see an increase in total worker memory usage along with worker pod evictions. You need to resolve these errors. What should you do?

Choose 2 answers

Suggested Answer: B, C

To resolve issues related to increased memory usage and worker pod evictions in your Cloud Composer 2 environment, the following steps are recommended:

Increase Memory Available to Airflow Workers:

By increasing the memory allocated to Airflow workers, you can handle more memory-intensive tasks, reducing the likelihood of pod evictions due to memory limits.

Increase Maximum Number of Workers and Reduce Worker Concurrency:

Increasing the number of workers allows the workload to be distributed across more pods, preventing any single pod from becoming overwhelmed.

Reducing worker concurrency limits the number of tasks that each worker can handle simultaneously, thereby lowering the memory consumption per worker.
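The interplay above can be sketched with a quick back-of-the-envelope estimate: a worker pod's peak memory is roughly its base footprint plus concurrency times the average per-task footprint, and the kubelet evicts the pod when that exceeds its memory limit. All numbers below are hypothetical, chosen only to illustrate the effect of halving concurrency:

```python
# Illustrative estimate (all figures hypothetical, not from the exam question):
# peak worker memory ~ base footprint + concurrency * per-task footprint.
# When this exceeds the pod's memory limit, the worker pod risks eviction.

def worker_peak_memory_gb(base_gb, per_task_gb, concurrency):
    """Rough estimate of a worker pod's peak memory usage in GB."""
    return base_gb + per_task_gb * concurrency

LIMIT_GB = 4.0      # assumed worker pod memory limit
BASE_GB = 0.5       # assumed Airflow worker baseline footprint
PER_TASK_GB = 0.4   # assumed average memory per running task

before = worker_peak_memory_gb(BASE_GB, PER_TASK_GB, concurrency=12)
after = worker_peak_memory_gb(BASE_GB, PER_TASK_GB, concurrency=6)

print(f"concurrency=12 -> {before:.1f} GB vs limit {LIMIT_GB} GB")  # over the limit
print(f"concurrency=6  -> {after:.1f} GB vs limit {LIMIT_GB} GB")   # under the limit
```

With these assumed numbers, halving concurrency drops the projected peak from 5.3 GB (above the 4 GB limit, so eviction-prone) to 2.9 GB; adding workers then restores overall throughput.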

Steps to Implement:

Increase Worker Memory:

In the Cloud Composer environment configuration, allocate more memory to the Airflow workers so that memory-intensive tasks fit within each worker pod's limit.

Adjust Worker and Concurrency Settings:

Increase the maximum number of workers in the Cloud Composer environment settings.

Reduce the concurrency setting for Airflow workers to ensure that each worker handles fewer tasks at a time, thus consuming less memory per worker.
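The steps above are typically applied with the `gcloud composer environments update` command. The sketch below only assembles the commands (it does not run them); the environment name, location, and chosen values are assumptions for illustration, and the flags shown reflect the Composer 2 CLI as documented:

```python
# Hypothetical sketch: map the steps above to `gcloud composer environments
# update` invocations. Environment name, region, and values are assumptions.

ENV = "example-environment"   # hypothetical environment name
LOCATION = "us-central1"      # hypothetical region

# Step 1: raise the memory available to each Airflow worker pod.
increase_memory = [
    "gcloud", "composer", "environments", "update", ENV,
    f"--location={LOCATION}",
    "--worker-memory=8GB",
]

# Step 2: allow more worker pods, and lower Airflow's [celery]
# worker_concurrency so each worker runs fewer tasks at once.
scale_out_and_throttle = [
    "gcloud", "composer", "environments", "update", ENV,
    f"--location={LOCATION}",
    "--max-workers=6",
    "--update-airflow-configs=celery-worker_concurrency=6",
]

# To execute, uncomment (requires the gcloud CLI and Composer permissions):
# import subprocess
# subprocess.run(increase_memory, check=True)
# subprocess.run(scale_out_and_throttle, check=True)
```

After applying changes like these, watch the monitoring dashboard for a drop in worker memory usage and the disappearance of pod-eviction events before considering the issue resolved.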


Cloud Composer Worker Configuration

Scaling Airflow Workers

Contribute your Thoughts:

Charlene
3 months ago
Wait, are worker pod evictions really that common?
upvoted 0 times
Brande
3 months ago
Increasing worker concurrency could help with the evictions.
upvoted 0 times
Lorenza
3 months ago
Not sure about the DAG parsing interval being the issue here.
upvoted 0 times
Georgiann
4 months ago
I think increasing the environment size is a good move too!
upvoted 0 times
Stephanie
4 months ago
Definitely increase the memory for the Airflow workers.
upvoted 0 times
Vernice
4 months ago
I’m a bit confused about the worker concurrency part. Should we really reduce it if we’re also increasing the number of workers?
upvoted 0 times
Marvel
4 months ago
I feel like we had a practice question about scaling the environment size. Increasing it from medium to large could be a good move.
upvoted 0 times
Leah
5 months ago
I'm not entirely sure, but I think increasing the DAG file parsing interval might not really help with memory issues.
upvoted 0 times
Thurman
5 months ago
I remember we discussed increasing the memory for Airflow workers in class. That seems like a solid option here.
upvoted 0 times
Malinda
5 months ago
Increasing the environment size from medium to large could be an easy fix, but I want to make sure I understand the underlying issues first before jumping to that solution.
upvoted 0 times
Dominga
5 months ago
Increasing the maximum number of workers and reducing concurrency might also help distribute the load and free up resources. I'll need to weigh the pros and cons of that approach.
upvoted 0 times
Joaquin
5 months ago
Increasing the memory available to the workers sounds like a good place to start. If the tasks are consuming too much memory, that could be the root cause of the failures and pod evictions.
upvoted 0 times
Jesusita
5 months ago
Hmm, increasing the DAG file parsing interval doesn't seem directly related to the memory usage problem. I'm leaning more towards options that address the worker resource constraints.
upvoted 0 times
Trinidad
5 months ago
This seems like a tricky one. I'll need to carefully consider the options and think through the potential causes of the memory issues.
upvoted 0 times
Leigha
5 months ago
This is a good opportunity to showcase my knowledge of GRUB. I'll review the options and try to eliminate the false statements.
upvoted 0 times
Rodrigo
1 year ago
B and C are the clear winners here. Gotta give those workers some more breathing room and add some reinforcements. Wait, why would we want to increase the memory for the Airflow triggerer (D)? That's like trying to fit an elephant in a phone booth.
upvoted 0 times
Lai
1 year ago
I'd say B and E are the way to go. Boost that worker memory and scale up the whole environment. Can't have those tasks failing on our watch! Although, increasing the DAG parsing interval (A) might be an interesting experiment, just to see how long it takes for everything to grind to a halt.
upvoted 0 times
Novella
1 year ago
Definitely go for B and C. Increasing the memory and worker capacity should help handle the increased processing load. I wonder if the engineers had a bet going on how many worker pods they could evict before someone noticed.
upvoted 0 times
Luisa
1 year ago
E) Increase the Cloud Composer 2 environment size from medium to large.
upvoted 0 times
Diane
1 year ago
That sounds like a good plan. Hopefully, it will help stabilize the data processing jobs.
upvoted 0 times
Alaine
1 year ago
C) Increase the maximum number of workers and reduce worker concurrency.
upvoted 0 times
Mitsue
1 year ago
B) Increase the memory available to the Airflow workers.
upvoted 0 times
Catalina
1 year ago
I also think we should increase the Cloud Composer 2 environment size from medium to large to handle the increased workload.
upvoted 0 times
Jacinta
1 year ago
I agree with Patria. That could help resolve the memory usage issues.
upvoted 0 times
Patria
1 year ago
I think we should increase the memory available to the Airflow workers.
upvoted 0 times
Luann
1 year ago
Hmm, I think increasing the memory available to the Airflow workers (B) and the maximum number of workers (C) would be a good place to start. We don't want those poor workers to get evicted like a bad tenant!
upvoted 0 times
Devora
1 year ago
Sounds like a plan. Let's monitor the environment after making the changes to ensure everything runs smoothly.
upvoted 0 times
Catina
1 year ago
Great suggestions! Let's make those adjustments to improve the performance of our data processing jobs.
upvoted 0 times
Francoise
1 year ago
Yes, and increasing the maximum number of workers and reducing worker concurrency (C) can also prevent worker pod evictions.
upvoted 0 times
Brice
1 year ago
I agree, increasing the memory available to the Airflow workers (B) should help with the memory usage.
upvoted 0 times
