
Microsoft AZ-204 Exam - Topic 16 Question 80 Discussion

Actual exam question for Microsoft's AZ-204 exam
Question #: 80
Topic #: 16

You are developing a solution that will use a multi-partitioned Azure Cosmos DB database. You plan to use the latest Azure Cosmos DB SDK for development.

The solution must meet the following requirements:

Send insert and update operations to an Azure Blob storage account.

Process changes to all partitions immediately.

Allow parallelization of change processing.

You need to process the Azure Cosmos DB operations.

What are two possible ways to achieve this goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A) Create an Azure App Service API and implement the change feed estimator of the SDK. Scale the API by using multiple Azure App Service instances.

B) Create a background job in an Azure Kubernetes Service and implement the change feed feature of the SDK.

C) Create an Azure Function to use a trigger for Azure Cosmos DB. Configure the trigger to connect to the container.

D) Create an Azure Function that uses a FeedIterator object to process the change feed. Use FeedRange objects to parallelize the processing of changes across multiple functions.

Suggested Answer: C, D
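The Azure Functions route (option C) builds on the change feed processor, which surfaces inserts and updates from all partitions in near real time and parallelizes work through a lease container. A minimal sketch of the trigger binding in function.json, assuming the Cosmos DB extension v4; the connection setting and database/container names (CosmosDBConnection, mydb, items, leases) are placeholders:

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "direction": "in",
      "name": "documents",
      "connection": "CosmosDBConnection",
      "databaseName": "mydb",
      "containerName": "items",
      "leaseContainerName": "leases",
      "createLeaseContainerIfNotExists": true
    }
  ]
}
```

The function body would then copy each changed document to the Blob storage account (for example, via a Blob output binding); scaling out adds more lease owners, so partition key ranges are processed in parallel.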

Contribute your Thoughts:

Mertie
4 months ago
I agree with C, Functions are super easy to set up!
Lawanda
4 months ago
D is interesting, but can it really handle all partitions efficiently?
Dick
4 months ago
A sounds good, but isn't it a bit complex for this use case?
Ernie
4 months ago
I think B is better for scalability with Kubernetes.
Shenika
5 months ago
Option C is a solid choice for real-time processing!
Brigette
5 months ago
I believe the FeedIterator approach could work well for parallel processing, but I need to double-check how the FeedRange object functions in this context.
Torie
5 months ago
I’m a bit confused about the difference between using an Azure App Service API and Azure Functions for this. Both seem viable, but I can't recall the specifics.
France
5 months ago
I think using Azure Functions is a solid choice, especially for processing changes in real-time. It feels like a similar question we practiced last week.
Charolette
5 months ago
I remember we discussed the change feed feature in class, but I'm not sure if the Azure Function with a trigger is the best option here.
Jacklyn
5 months ago
This is a tricky one. I'm leaning towards option B with the Azure Kubernetes Service, but I'm not 100% sure that's the best approach. The change feed feature of the SDK seems like it could work, but I'm a little concerned about the complexity of setting up the Kubernetes environment. I might need to do some more research on the options.
Dong
5 months ago
Hmm, I'm a bit unsure about this one. I'm trying to decide between options A and D. The change feed estimator in option A sounds promising, but the parallelization with the FeedRange object in option D also seems like a good approach. I'll need to think this through a bit more.
Shantell
5 months ago
This seems like a pretty straightforward question. I think I'll go with option A - creating an Azure App Service API and using the change feed estimator to scale it. That should let me process the changes across all partitions immediately.
Jodi
5 months ago
Okay, I've got a plan! I'm going to go with option D - creating an Azure Function that uses the FeedIterator and FeedRange objects to parallelize the change feed processing. That way I can handle the immediate processing requirement and the need for parallelization.
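The FeedIterator/FeedRange approach Jodi outlines (option D) is the SDK's change feed pull model: enumerate the container's feed ranges, then drive one change feed iterator per range in parallel. A rough Python sketch of that fan-out pattern, with plain lists standing in for the SDK's FeedRange/FeedIterator objects (the real calls need a live Cosmos DB account) and a hypothetical process_change in place of the Blob upload:

```python
from concurrent.futures import ThreadPoolExecutor

def process_change(change):
    # Stand-in for uploading the inserted/updated document to Blob storage.
    return f"uploaded:{change['id']}"

def process_feed_range(feed_range):
    # Stand-in for draining a FeedIterator scoped to a single FeedRange.
    return [process_change(change) for change in feed_range]

def process_all_ranges(feed_ranges):
    # One worker per feed range: changes within a range keep their order,
    # while distinct ranges are processed in parallel.
    with ThreadPoolExecutor(max_workers=max(1, len(feed_ranges))) as pool:
        batches = list(pool.map(process_feed_range, feed_ranges))
    return [result for batch in batches for result in batch]

# Hypothetical changes grouped by feed range (the SDK would supply these).
ranges = [
    [{"id": "1"}, {"id": "2"}],
    [{"id": "3"}],
]
print(process_all_ranges(ranges))  # → ['uploaded:1', 'uploaded:2', 'uploaded:3']
```

In the real SDK the ranges come from the container itself, and each worker pages through its own iterator until it is caught up, so every partition is covered immediately and independently.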
Lizbeth
10 months ago
Ah, the joys of Cosmos DB. I bet the developers at Microsoft spent months debating whether to call it 'change feed' or 'feed change'. Either way, it sounds like a delicious breakfast option.
Ling
8 months ago
C) Create an Azure Function to use a trigger for Azure Cosmos DB. Configure the trigger to connect to the container.
Lezlie
9 months ago
A) Create an Azure App Service API and implement the change feed estimator of the SDK. Scale the API by using multiple Azure App Service instances.
Annelle
10 months ago
Wait, we have to choose two options? I thought this was a single-select question. *scratches head* Well, I guess I'll go with options C and D then. Double the points, double the fun!
Golda
10 months ago
Option B with Azure Kubernetes Service sounds interesting, but I'm not sure if it's overkill for this use case. Seems like a lot of overhead just to process Cosmos DB changes.
Shoshana
9 months ago
Option B might be overkill for this scenario. Maybe consider a simpler solution like option A.
Shawn
9 months ago
B) Create a background job in an Azure Kubernetes Service and implement the change feed feature of the SDK.
Annabelle
9 months ago
A) Create an Azure App Service API and implement the change feed estimator of the SDK. Scale the API by using multiple Azure App Service instances.
Mabel
10 months ago
I like the idea of using Azure Functions in option D. The ability to parallelize the change feed processing across multiple functions is really appealing.
Merlyn
9 months ago
I agree, Azure Functions in option D seem to be the best way to achieve the goal. It allows for parallel processing of the change feed.
Lavelle
9 months ago
Option D sounds like a good choice. Using Azure Functions to parallelize the change feed processing is efficient.
Erasmo
10 months ago
I agree, Option D seems like the most efficient way to handle the Azure Cosmos DB operations with parallelization.
Kristeen
10 months ago
Option D sounds like a great choice. It allows for parallel processing of the change feed using multiple functions.
Denny
11 months ago
Option C seems like the simplest and most straightforward way to achieve the requirements. Azure Functions with a Cosmos DB trigger will handle the change processing automatically.
Leanna
9 months ago
Azure Functions with a Cosmos DB trigger definitely seems like the way to go for processing the operations efficiently.
Dong
9 months ago
Creating an Azure Function with a trigger for Cosmos DB sounds efficient and easy to implement.
Florinda
10 months ago
I agree, using Azure Functions with a Cosmos DB trigger simplifies the process and automates the change processing.
Gracia
11 months ago
I'm not sure, I think option D could also work well with parallelizing the processing.
Jaime
11 months ago
I agree with Yasuko. Option A seems like the best way to achieve the goal.
Yasuko
11 months ago
I think option A is a good choice because it allows scaling with multiple instances.
