Google Exam Professional Cloud Architect Topic 4 Question 87 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 87
Topic #: 4

Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?

Suggested Answer: B

The recommended practice for managing logs generated by Compute Engine instances on Google Cloud is to install the Cloud Logging agent and send the logs to Cloud Logging.

The collected logs are then routed through a Cloud Logging sink and exported to Cloud Storage.

Cloud Storage is the right export destination because the requirements call for lifecycle management based on how long the logs have been stored.

In this case, the logs must be available for active query for 30 days after they are written; after that, they only need to be retained for audit purposes.

For the active-query window, BigQuery can query the exported data directly in Cloud Storage (for example, through an external table), and moving objects older than 30 days to Coldline keeps the solution cost-optimal.
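For illustration only, here is a minimal sketch of that query pattern using the google-cloud-bigquery client; the bucket name, temporary table name, and assumed JSON log format are placeholders, not part of the exam answer:

from google.cloud import bigquery

# Assumed names for illustration: bucket "example-logs-bucket", temp table "recent_logs".
client = bigquery.Client()

# Describe the exported JSON log files as an external data source.
external_config = bigquery.ExternalConfig("NEWLINE_DELIMITED_JSON")
external_config.source_uris = ["gs://example-logs-bucket/*.json"]
external_config.autodetect = True  # let BigQuery infer the log schema

# Query the files in place; nothing is loaded into BigQuery storage.
job_config = bigquery.QueryJobConfig(table_definitions={"recent_logs": external_config})
query = """
SELECT timestamp, severity, textPayload
FROM recent_logs
WHERE severity = 'ERROR'
LIMIT 100
"""
for row in client.query(query, job_config=job_config).result():
    print(row.timestamp, row.severity, row.textPayload)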

Therefore, the correct answer is as follows:

1. Install the Cloud Logging agent on all instances.
2. Create a sink that exports the logs to a regional Cloud Storage bucket.
3. Create an Object Lifecycle rule to move the files to a Coldline Cloud Storage bucket after one month.
4. Set up a bucket-level retention policy using bucket lock.
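A rough sketch of steps 2-4 with the google-cloud client libraries is shown below; the project ID, sink name, log filter, and bucket name are placeholders, and the same configuration could equally be applied with gcloud/gsutil or Terraform:

from google.cloud import logging as cloud_logging
from google.cloud import storage

PROJECT = "example-project"          # placeholder project ID
BUCKET_NAME = "example-logs-bucket"  # placeholder bucket name

# Step 2: sink that routes Compute Engine logs to the Cloud Storage bucket.
# Note: the sink's writer identity also needs objectCreator access on the bucket.
logging_client = cloud_logging.Client(project=PROJECT)
sink = logging_client.sink(
    "compute-logs-to-gcs",
    filter_='resource.type="gce_instance"',
    destination=f"storage.googleapis.com/{BUCKET_NAME}",
)
if not sink.exists():
    sink.create()

# Step 3: lifecycle rule that moves objects to Coldline after 30 days.
storage_client = storage.Client(project=PROJECT)
bucket = storage_client.get_bucket(BUCKET_NAME)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=30)
bucket.patch()

# Step 4: two-year retention policy, then lock it so it cannot be reduced or removed.
bucket.retention_period = 2 * 365 * 24 * 60 * 60  # seconds
bucket.patch()
bucket.reload()                   # refresh metageneration before locking
bucket.lock_retention_policy()    # locking is permanent; required for compliance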


Contribute your Thoughts:

Maurine
3 months ago
If I had to choose, I'd go with Option B. It's the only one that mentions Coldline Storage, which is perfect for long-term log retention.
upvoted 0 times
Rozella
3 months ago
Option D has too many moving parts. Running a daily cron job on all instances seems like a maintenance nightmare waiting to happen.
upvoted 0 times
Nan
2 months ago
I see your point, but I still think Option B is the most cost-effective and compliant solution.
upvoted 0 times
Kimberely
2 months ago
Option A also seems like a good option. Exporting logs to a partitioned BigQuery table is efficient.
upvoted 0 times
Bettyann
3 months ago
I agree, Option B simplifies the process and reduces the risk of maintenance issues.
upvoted 0 times
Salome
3 months ago
I think Option B is the best choice. It automates the process and moves logs to a cheaper storage solution after 30 days.
upvoted 0 times
Kristin
4 months ago
Option A is the 'Google-recommended' solution, so it's gotta be the right answer, right? I mean, they know what they're doing... right?
upvoted 0 times
Eleonora
2 months ago
A) 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
upvoted 0 times
Cecily
3 months ago
B) 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level to create a lock.
upvoted 0 times
Shannan
3 months ago
A) 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
upvoted 0 times
Cecily
3 months ago
A) 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
upvoted 0 times
Arlyne
3 months ago
B) 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level to create a lock.
upvoted 0 times
Alyce
3 months ago
A) 1. Install the Cloud Ops agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days.
upvoted 0 times
Trinidad
4 months ago
I'm not sure, I think option A could also work well with exporting logs into a partitioned BigQuery table for active query.
upvoted 0 times
Leonor
4 months ago
I agree with Matthew. Option B seems to be the most cost-effective solution while still following Google-recommended practices.
upvoted 0 times
Brett
4 months ago
Option B seems like the most comprehensive and compliant solution. The use of a regional Cloud Storage bucket, lifecycle rules, and a retention policy covers all the requirements.
upvoted 0 times
Ruthann
3 months ago
Option B definitely seems like the most comprehensive solution for this scenario.
upvoted 0 times
Kris
3 months ago
It's important to have a retention policy in place for compliance purposes.
upvoted 0 times
Daniela
3 months ago
I agree. Using a regional Cloud Storage bucket and setting up lifecycle rules is a good approach.
upvoted 0 times
Helga
4 months ago
Option B does seem like the best choice. It covers all the requirements.
upvoted 0 times
Matthew
4 months ago
I think option B is the best choice because it involves moving logs to a Coldline Cloud Storage bucket after one month to minimize costs.
upvoted 0 times
