You recently deployed several data processing jobs into your Cloud Composer 2 environment. You notice that some tasks are failing in Apache Airflow. On the monitoring dashboard, you see an increase in the total workers' memory usage, and there were worker pod evictions. You need to resolve these errors. What should you do?
Choose 2 answers
To resolve the increased memory usage and worker pod evictions in your Cloud Composer 2 environment, take the following two actions:
Increase Memory Available to Airflow Workers:
Worker pods are evicted when they exceed their memory limit, so allocating more memory to each Airflow worker lets memory-intensive tasks complete without triggering evictions.
Increase Maximum Number of Workers and Reduce Worker Concurrency:
Increasing the maximum number of workers distributes the workload across more pods, so no single pod becomes overwhelmed.
Reducing worker concurrency limits how many tasks each worker runs simultaneously, which lowers the peak memory consumption per worker; the additional workers absorb the tasks that would otherwise queue.
Steps to Implement:
Increase Worker Memory:
Allocate more memory to Airflow workers through the environment's workloads configuration, either in the Google Cloud console or with the gcloud CLI, as in the sketch below.
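A minimal sketch with the gcloud CLI, assuming an environment named example-environment in us-central1 and a target of 8 GB per worker (all placeholder values to replace with your own):

# Raise the memory allocated to each Airflow worker pod.
# example-environment, us-central1, and 8GB are placeholders.
gcloud composer environments update example-environment \
    --location us-central1 \
    --worker-memory 8GB

Note that in Cloud Composer 2 the allowed worker memory range is tied to the worker CPU setting, so a large memory increase may also require raising --worker-cpu.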
Adjust Worker and Concurrency Settings:
Increase the maximum number of workers in the environment's workloads configuration so the scheduler can spread tasks across more pods.
Reduce worker concurrency by overriding the Airflow celery.worker_concurrency option, so that each worker handles fewer tasks at a time and consumes less memory per pod. A sketch of both changes follows.
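A sketch of both adjustments with the gcloud CLI, again using placeholder names and values; celery.worker_concurrency is the Airflow option that caps how many tasks one worker runs simultaneously:

# Let autoscaling add more worker pods (values are placeholders).
gcloud composer environments update example-environment \
    --location us-central1 \
    --max-workers 6

# Override the Airflow [celery] worker_concurrency option so each
# worker picks up fewer tasks at once.
gcloud composer environments update example-environment \
    --location us-central1 \
    --update-airflow-configs celery-worker_concurrency=6

Applying both changes together keeps overall throughput roughly constant while cutting memory pressure on each individual pod.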
References:
Cloud Composer Worker Configuration
Scaling Airflow Workers