You are developing a data engineering solution for a company. The solution will store a large set of key-value pair data by using Microsoft Azure Cosmos DB.
The solution has the following requirements:
* Data must be partitioned into multiple containers.
* Data containers must be configured separately.
* Data must be accessible from applications hosted around the world.
* The solution must minimize latency.
You need to provision Azure Cosmos DB.
Scale read and write throughput globally: you can enable every region to be writable and elastically scale reads and writes around the world. The throughput that your application configures on an Azure Cosmos database or container is guaranteed to be delivered across all regions associated with your Azure Cosmos account, and the provisioned throughput is guaranteed by financially backed SLAs.
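As a rough illustration (not part of the official answer), the sketch below uses the azure-cosmos Python SDK to create two containers in one database, each with its own partition key and its own dedicated throughput. This matches the requirements that data be partitioned into multiple containers and that each container be configured separately. The endpoint, key, and all resource names are placeholder assumptions.

```python
# Sketch only: provisioning separately configured, partitioned containers
# with the azure-cosmos Python SDK (v4). The endpoint, key, and all
# resource names below are placeholders, not values from the question.
from azure.cosmos import CosmosClient, PartitionKey

ENDPOINT = "https://<your-account>.documents.azure.com:443/"  # placeholder
KEY = "<your-primary-key>"                                    # placeholder

client = CosmosClient(ENDPOINT, credential=KEY)

# One database hosting multiple containers.
database = client.create_database_if_not_exists(id="kvstore")

# Each container gets its own partition key and its own dedicated
# throughput, so the containers are configured independently.
hot_container = database.create_container_if_not_exists(
    id="hot-data",
    partition_key=PartitionKey(path="/key"),
    offer_throughput=4000,  # RU/s dedicated to this container
)

cold_container = database.create_container_if_not_exists(
    id="cold-data",
    partition_key=PartitionKey(path="/key"),
    offer_throughput=400,   # smaller, separately tuned throughput
)
```

Global distribution itself (adding writable regions) is configured at the account level, for example in the Azure portal; the per-container throughput provisioned above is then delivered in every region associated with the account.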
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/distribute-data-globally