
Amazon Exam DAS-C01 Topic 2 Question 90 Discussion

Actual exam question for Amazon's DAS-C01 exam
Question #: 90
Topic #: 2

A company collects data from parking garages. Analysts have requested the ability to run reports in near real time about the number of vehicles in each garage.

The company wants to build an ingestion pipeline that loads the data into an Amazon Redshift cluster. The solution must alert operations personnel when the number of vehicles in a particular garage exceeds a specific threshold. The alerting query will use garage threshold values as a static reference. The threshold values are stored in Amazon S3.

What is the MOST operationally efficient solution that meets these requirements?

Suggested Answer: B
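The core of the scenario is comparing near-real-time vehicle counts against static per-garage thresholds. A minimal sketch of that alerting logic in plain Python (the garage IDs, threshold values, and `load_thresholds` helper are hypothetical; in the actual pipeline the threshold file would be read from Amazon S3 and the comparison would run as a query against Redshift):

```python
import io
import json


def load_thresholds(fileobj):
    """Parse a JSON document mapping garage ID -> max vehicle count.

    Stands in for fetching the static reference file from Amazon S3.
    """
    return json.load(fileobj)


def garages_over_threshold(counts, thresholds):
    """Return garage IDs whose current count exceeds their threshold."""
    return sorted(
        garage
        for garage, count in counts.items()
        # Garages with no configured threshold never trigger an alert.
        if count > thresholds.get(garage, float("inf"))
    )


# Hypothetical threshold file contents (would live in S3).
threshold_doc = io.StringIO('{"garage-a": 100, "garage-b": 250}')
thresholds = load_thresholds(threshold_doc)

# Near-real-time counts as they might arrive from the ingestion pipeline.
current_counts = {"garage-a": 120, "garage-b": 200, "garage-c": 40}

print(garages_over_threshold(current_counts, thresholds))  # → ['garage-a']
```

This only illustrates the comparison; the exam answer hinges on where that comparison runs and how the S3 reference data is exposed to the query, which the (elided) options A–D spell out.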

Contribute your Thoughts:

Galen
11 months ago
I agree with User1; option D seems like a strategic approach to improving the COPY process by applying sharding.
upvoted 0 times
...
Gracia
11 months ago
I disagree; I believe option B would be more effective, because splitting the files to match the number of slices in the Redshift cluster would optimize the COPY process.
upvoted 0 times
...
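Gracia's point is that Redshift's COPY command loads files in parallel across slices, so splitting the input into as many near-equal pieces as there are slices keeps every slice busy. A rough sketch of that splitting step (the 4-slice count and row contents are made up for illustration; a real pipeline would write each chunk to S3 under a common prefix for COPY to pick up):

```python
def split_for_slices(rows, slice_count):
    """Split rows round-robin into slice_count near-equal chunks,
    mimicking how input files might be prepared so each Redshift
    slice loads roughly the same amount of data."""
    chunks = [[] for _ in range(slice_count)]
    for i, row in enumerate(rows):
        chunks[i % slice_count].append(row)
    return chunks


# Hypothetical vehicle events, split for a cluster with 4 slices.
rows = [f"vehicle-event-{i}" for i in range(10)]
chunks = split_for_slices(rows, 4)
print([len(c) for c in chunks])  # → [3, 3, 2, 2]
```

With equal-sized chunks, no single slice becomes a straggler during the load, which is the parallelism benefit the comment describes.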
Tammara
11 months ago
I think option D would be the best solution for accelerating the COPY process.
upvoted 0 times
...
Peggy
12 months ago
That's true. Sharding based on DISTKEY columns could be worth considering.
upvoted 0 times
...
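Sharding on DISTKEY columns, as Peggy suggests, would mean bucketing rows by a hash of the distribution-key column so that rows sharing a key end up in the same file. A toy illustration (the `garage_id` key and bucket count are hypothetical, and Redshift's internal distribution hash differs from the CRC32 used here):

```python
import zlib


def shard_by_distkey(rows, key, buckets):
    """Assign each row (a dict) to a bucket by hashing its
    distribution-key value, so rows with the same key co-locate."""
    shards = [[] for _ in range(buckets)]
    for row in rows:
        h = zlib.crc32(str(row[key]).encode("utf-8"))
        shards[h % buckets].append(row)
    return shards


# Hypothetical parking events for three garages, sharded into 4 buckets.
rows = [{"garage_id": f"g{i % 3}", "count": i} for i in range(9)]
shards = shard_by_distkey(rows, "garage_id", 4)
# All rows for a given garage_id land in the same shard.
```

Note that unlike the even round-robin split, key-based sharding can produce uneven files when keys are skewed, which is one trade-off between options B and D as the commenters describe them.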
Melissa
12 months ago
But what about option D? Applying sharding could also improve the COPY process.
upvoted 0 times
...
Anastacia
12 months ago
I agree. Splitting the files to match the number of slices in the Redshift cluster makes sense.
upvoted 0 times
...
Peggy
12 months ago
I think option B would be the best solution.
upvoted 0 times
Lashanda
11 months ago
So, yeah, option B seems like the most practical solution for accelerating the COPY process.
upvoted 0 times
...
Lenora
12 months ago
Ultimately, that would lead to faster data loading into the Redshift cluster.
upvoted 0 times
...
Margery
12 months ago
And having the right number of files could improve parallelism during the COPY operation.
upvoted 0 times
...
Melita
12 months ago
It would ensure that the workload is evenly distributed across the cluster.
upvoted 0 times
...
Gerri
12 months ago
That could definitely help optimize the COPY process and make it more efficient.
upvoted 0 times
...
Candra
12 months ago
I agree, splitting the files based on the number of slices in the Redshift cluster makes sense.
upvoted 0 times
...
...
