
Google Exam Professional Cloud Architect Topic 6 Question 94 Discussion

Actual exam question for Google's Professional Cloud Architect exam
Question #: 94
Topic #: 6

Your company has a stateless web API that performs scientific calculations. The web API runs on a single Google Kubernetes Engine (GKE) cluster. The cluster is currently deployed in us-central1. Your company has expanded to offer your API to customers in Asia. You want to reduce the latency for the users in Asia. What should you do?


Contribute your Thoughts:

Mike
26 days ago
I'm going to have to go with B. Gotta love that good old-fashioned multi-region architecture, even if it's not as fancy as the kubemci solution.
upvoted 0 times
Eugene
6 days ago
Agreed, having multiple clusters in different regions definitely helps with performance optimization.
upvoted 0 times
...
Zita
11 days ago
Yeah, B sounds like a solid choice. Setting up a second GKE cluster in asia-southeast1 makes sense for reducing latency in Asia (see the provisioning sketch after this thread).
upvoted 0 times
...
...
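For anyone who wants to see what the multi-region setup discussed above would involve, here is a minimal, hypothetical sketch of provisioning the second regional cluster in asia-southeast1. The project ID, cluster name, and node count are placeholders, and it assumes an authenticated gcloud CLI; none of it is taken from the exam material.

```python
# Hypothetical sketch: provision a second regional GKE cluster in asia-southeast1
# so the same stateless API can be served closer to users in Asia.
# Assumes the gcloud CLI is installed and authenticated; all names are placeholders.
import subprocess

PROJECT = "my-project"        # placeholder project ID
CLUSTER = "calc-api-asia"     # placeholder cluster name
REGION = "asia-southeast1"

subprocess.run(
    [
        "gcloud", "container", "clusters", "create", CLUSTER,
        "--project", PROJECT,
        "--region", REGION,      # regional cluster: nodes spread across the region's zones
        "--num-nodes", "1",      # one node per zone; size to taste
    ],
    check=True,
)

# The existing Deployment manifest for the stateless API would then be applied
# to this cluster as well, e.g.:
# subprocess.run(["kubectl", "apply", "-f", "api-deployment.yaml"], check=True)
```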
Myrtie
28 days ago
A is the obvious choice here. Who needs multiple clusters and load balancers when you can just enable Cloud CDN and let Google handle the global distribution for you? (Rough sketch below.)
upvoted 0 times
...
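As a point of reference for the comment above, enabling Cloud CDN on an existing global external HTTP(S) load balancer is a one-flag change on its backend service. The backend-service name below is a placeholder and this assumes the load balancer already exists; keep in mind that CDN only helps where the API's responses are cacheable, which may not be the case for a compute-heavy calculation API.

```python
# Rough sketch: enable Cloud CDN on the backend service behind a global external
# HTTP(S) load balancer. "calc-api-backend" is a placeholder name; assumes the
# gcloud CLI is authenticated and the backend service already exists.
import subprocess

subprocess.run(
    [
        "gcloud", "compute", "backend-services", "update", "calc-api-backend",
        "--global",       # backend services of global HTTP(S) load balancers are global resources
        "--enable-cdn",   # serve cacheable responses from Google's edge caches
    ],
    check=True,
)
```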
Junita
1 month ago
Haha, C is a classic 'throw more hardware at it' answer. As if increasing the resources on a single cluster is going to magically reduce latency for users halfway across the world.
upvoted 0 times
...
Renea
1 month ago
D looks good to me. Using kubemci to create a global HTTP(S) load balancer sounds like a more elegant solution than managing multiple load balancers (see the sketch after this thread).
upvoted 0 times
Edelmira
8 days ago
Agreed, it would definitely help reduce latency for users in Asia.
upvoted 0 times
...
Ricarda
11 days ago
Yeah, using kubemci to create a global HTTP(S) load balancer seems like a smart choice.
upvoted 0 times
...
Cristy
21 days ago
I think D is the best option.
upvoted 0 times
...
...
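For context on the kubemci discussion in this thread, the flow is roughly: deploy the same Ingress spec to every cluster and let the tool wire them behind one global HTTP(S) load balancer. The sketch below is illustrative only; the names and file paths are placeholders, the flags follow the kubemci README as best I recall, and the tool has since been superseded by GKE Multi Cluster Ingress.

```python
# Illustrative sketch of the kubemci flow: one global HTTP(S) load balancer
# fronting the same Ingress deployed to clusters in us-central1 and asia-southeast1.
# Names and paths are placeholders; assumes kubemci is installed and the kubeconfig
# contains contexts for both clusters.
import subprocess

subprocess.run(
    [
        "kubemci", "create", "calc-api-mci",
        "--ingress=ingress.yaml",            # same Ingress spec applied to every cluster
        "--gcp-project=my-project",          # placeholder project ID
        "--kubeconfig=both-clusters.yaml",   # kubeconfig listing both cluster contexts
    ],
    check=True,
)
```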
Edelmira
2 months ago
I think increasing the memory and CPU allocated to the application in the cluster could also help reduce latency for users in Asia.
upvoted 0 times
...
Lyndia
2 months ago
I disagree, I believe creating a second GKE cluster in asia-southeast1 and exposing both APIs using a Service of type LoadBalancer would be more effective (sketch below).
upvoted 0 times
...
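To make the option described above concrete, here is a hedged sketch: the same Service of type LoadBalancer applied in both clusters, with the resulting external IPs published in a Cloud DNS zone. The contexts, domain, zone name, and IPs are all placeholders, not part of the exam question.

```python
# Hedged sketch of option B as discussed above: expose the API from each cluster
# with a Service of type LoadBalancer, then add both public IPs to a Cloud DNS zone.
# All names, contexts, and IPs below are placeholders.
import subprocess

SERVICE_MANIFEST = """\
apiVersion: v1
kind: Service
metadata:
  name: calc-api
spec:
  type: LoadBalancer        # provisions an external network load balancer per cluster
  selector:
    app: calc-api
  ports:
    - port: 80
      targetPort: 8080
"""

# Apply the same Service in both clusters (context names are placeholders).
for context in ["us-central1-context", "asia-southeast1-context"]:
    subprocess.run(
        ["kubectl", "--context", context, "apply", "-f", "-"],
        input=SERVICE_MANIFEST.encode(),
        check=True,
    )

# Once both Services have external IPs, publish them as A records in the zone
# (placeholder domain, zone, and documentation-range IPs).
subprocess.run(
    [
        "gcloud", "dns", "record-sets", "create", "api.example.com.",
        "--zone", "my-dns-zone",
        "--type", "A",
        "--ttl", "300",
        "--rrdatas", "203.0.113.10,203.0.113.20",  # replace with the Services' external IPs
    ],
    check=True,
)
```

Note that plain A records simply round-robin across both IPs; steering Asian clients specifically to the Asian cluster would take something like a Cloud DNS geolocation routing policy.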
Antione
2 months ago
I think the answer is B. Creating a second GKE cluster in Asia and using a Load Balancer Service to expose both APIs seems like the best way to reduce latency for the Asian users.
upvoted 0 times
Tasia
11 days ago
D
upvoted 0 times
...
Wilburn
19 days ago
B is the correct answer. By creating a second GKE cluster in Asia and using a LoadBalancer Service, you can reduce latency for Asian users.
upvoted 0 times
...
Annmarie
20 days ago
B
upvoted 0 times
...
Malinda
1 month ago
A
upvoted 0 times
...
Alysa
1 month ago
B) Create a second GKE cluster in asia-southeast1, and expose both APIs using a Service of type LoadBalancer. Add the public IPs to the Cloud DNS zone.
upvoted 0 times
...
Margot
2 months ago
A) Use a global HTTP(S) load balancer with Cloud CDN enabled
upvoted 0 times
...
...
Casie
2 months ago
I think we should use a global HTTP(S) load balancer with Cloud CDN enabled to reduce latency for users in Asia.
upvoted 0 times
...
