Snowflake Exam DEA-C01 Topic 4 Question 41 Discussion

Actual exam question for Snowflake's DEA-C01 exam
Question #: 41
Topic #: 4

A Data Engineer is working on a continuous data pipeline that receives data from Amazon Kinesis Firehose and loads it into a staging table, which will later be used in the data transformation process. The average file size is 300-500 MB.

The Engineer needs to ensure that Snowpipe is performant while minimizing costs.

How can this be achieved?

Suggested Answer: B

This option is the best way to keep Snowpipe performant while minimizing costs. Splitting the files before loading them reduces the size of each file and increases load parallelism, and a SIZE_LIMIT of 250 MB keeps each load within Snowflake's recommended range of roughly 100-250 MB of compressed data, avoiding the performance degradation and errors that very large files can cause (a minimal sketch of the corresponding Snowpipe setup follows the option breakdown below). The other options are not optimal because:

Increasing the size of the virtual warehouse would raise costs, since larger warehouses consume more credits per hour; moreover, Snowpipe runs on Snowflake-managed serverless compute rather than a user-sized virtual warehouse, so warehouse sizing is not the right lever here.

Changing the file compression and increasing the frequency of the Snowpipe loads would have little impact on performance or cost, as Snowpipe already supports the common compression formats and automatically loads files as soon as they are detected in the stage.

Decreasing the buffer size in Kinesis Firehose to trigger delivery of files between 100 and 250 MB would not affect Snowpipe performance or costs, as Snowpipe does not depend on the Kinesis Firehose buffer size but on its own SIZE_LIMIT option.
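
For readers who want to see how the pieces fit together, here is a minimal sketch of the stage-plus-pipe setup this explanation assumes. The database, schema, stage, table, storage integration, and bucket names are all hypothetical, and the sizing comment reflects Snowflake's general file-size guidance rather than anything specific to this question.

```sql
-- Hypothetical external stage over the S3 prefix that Kinesis Firehose delivers to.
CREATE STAGE IF NOT EXISTS raw.firehose_stage
  URL = 's3://example-firehose-bucket/landing/'
  STORAGE_INTEGRATION = example_s3_integration
  FILE_FORMAT = (TYPE = 'JSON', COMPRESSION = 'GZIP');

-- Snowpipe that auto-ingests new files from the stage into the staging table.
-- Load performance and cost are driven mainly by file size: Snowflake's general
-- guidance is roughly 100-250 MB compressed per file, so the 300-500 MB Firehose
-- files should be split upstream before they land in the stage.
-- raw.staging_table is assumed to have a single VARIANT column for the JSON records.
CREATE PIPE IF NOT EXISTS raw.firehose_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO raw.staging_table
  FROM @raw.firehose_stage;
```

With AUTO_INGEST = TRUE, the pipe is triggered by cloud event notifications as the pre-split files arrive in the bucket, so no user-managed virtual warehouse is involved; Snowpipe bills for its own serverless compute per load.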


Contribute your Thoughts:

Jean
8 days ago
Decreasing the buffer size in Kinesis Firehose? That's a creative solution, but I'm not sure it's going to play nice with the data transformation process later on.
upvoted 0 times
...
Crista
9 days ago
Changing the file compression and load frequency? Interesting idea, but I'm not sure if that's going to be enough to handle those massive 300-500 MB files.
upvoted 0 times
...
Zona
10 days ago
I prefer option C; changing the file compression can also improve performance.
upvoted 0 times
...
Kattie
10 days ago
Splitting the files before loading them could work, but then you'd have to manage all those separate files. Sounds like a lot of extra work to me.
upvoted 0 times
...
Nickole
11 days ago
Hmm, increasing the virtual warehouse size sounds like the obvious choice, but I wonder if that's really the best way to optimize performance and costs. Seems like a bit of a brute force approach.
upvoted 0 times
...
Allene
11 days ago
I agree with Rex, splitting the files will help optimize Snowpipe performance.
upvoted 0 times
...
Rex
16 days ago
I think option B is the best choice.
upvoted 0 times
...
