
Snowflake Exam ARA-C01 Topic 3 Question 48 Discussion

Actual exam question for Snowflake's ARA-C01 exam
Question #: 48
Topic #: 3

A company is trying to ingest 10 TB of CSV data into a Snowflake table using Snowpipe as part of its migration from a legacy database platform. The records need to be ingested in the MOST performant and cost-effective way.

How can these requirements be met?

Suggested Answer: D

For ingesting a large volume of CSV data into Snowflake using Snowpipe, especially something on the order of 10 TB, the ON_ERROR = SKIP_FILE copy option in the pipe's COPY INTO statement is highly effective (SKIP_FILE is also Snowpipe's default ON_ERROR setting). It tells Snowpipe to skip any file that raises an error during ingestion rather than halting or significantly slowing the overall load, so problematic files are not reprocessed and the rest of the data keeps flowing. This keeps the migration both performant and cost-effective.
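As a rough illustration, a pipe applying this option might look like the sketch below. All object names (legacy_csv_format, legacy_csv_stage, legacy_csv_pipe, migration_target_table) and the bucket URL are invented for this example, and a real external stage would also need a storage integration or credentials, plus cloud event notifications for AUTO_INGEST.

-- All names below are hypothetical; adapt them to your environment.
-- File format describing the legacy CSV export.
CREATE OR REPLACE FILE FORMAT legacy_csv_format
  TYPE = 'CSV'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1;

-- External stage holding the exported CSV files.
CREATE OR REPLACE STAGE legacy_csv_stage
  URL = 's3://example-bucket/legacy-export/'
  FILE_FORMAT = (FORMAT_NAME = 'legacy_csv_format');

-- The pipe itself: ON_ERROR = SKIP_FILE makes Snowpipe skip a file that
-- raises an error instead of blocking the rest of the load.
CREATE OR REPLACE PIPE legacy_csv_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO migration_target_table
  FROM @legacy_csv_stage
  ON_ERROR = SKIP_FILE;

Skipped files can be identified afterwards through the COPY_HISTORY view or the pipe's load history, then repaired and reloaded separately.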


Contribute your Thoughts:

Larae
3 days ago
Ah, the age-old debate: to continue or to skip? I say, why not both? Use 'ON_ERROR = SKIP_FILE' and then go out for a nice, relaxing purge. Ah, the life of a data engineer.
upvoted 0 times
...
Vernell
11 days ago
I think using ON_ERROR = SKIP_FILE would be the best option to skip files with errors and continue the ingestion process smoothly.
upvoted 0 times
...
Erick
12 days ago
Hmm, I'm not sure about these options. 'FURGE = FALSE'? Is that even a real Snowflake command? I think I'll go with option D, just to be safe.
upvoted 0 times
Leonor
2 days ago
Option C is definitely not a real Snowflake command. I would go with option D as well.
upvoted 0 times
...
...
Tonja
14 days ago
But wouldn't using ON_ERROR = CONTINUE help in case of any errors during ingestion? (See the sketch after this thread.)
upvoted 0 times
...
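On the ON_ERROR = CONTINUE idea floated in this thread: CONTINUE skips individual rows that fail to parse and keeps loading the rest of the file, whereas SKIP_FILE abandons the entire file on the first error. A minimal, hypothetical sketch of the bulk-load form, reusing the invented names from the pipe example above:

-- Hypothetical bulk load: ON_ERROR = CONTINUE skips rows that fail to
-- parse and loads the remaining rows of each file.
COPY INTO migration_target_table
  FROM @legacy_csv_stage
  FILE_FORMAT = (FORMAT_NAME = 'legacy_csv_format')
  ON_ERROR = CONTINUE;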
Julieta
15 days ago
I disagree, I believe using PURGE = TRUE in the COPY INTO command would be more cost-effective.
upvoted 0 times
...
Valentin
20 days ago
Option B looks good to me. 'PURGE = TRUE' will remove the CSV files from the stage after they've been successfully ingested, so you don't have to worry about storage costs or management. (See the sketch after this thread.)
upvoted 0 times
...
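On the PURGE = TRUE suggestion in this thread: PURGE deletes each file from the stage after it loads successfully, which trims stage storage costs but does nothing to make the load itself faster or more error-tolerant. A minimal, hypothetical sketch of the bulk-load form (same invented names as above; check the Snowflake documentation before assuming PURGE is accepted inside a pipe definition):

-- Hypothetical bulk load: PURGE = TRUE removes each successfully
-- loaded file from the stage once the load completes.
COPY INTO migration_target_table
  FROM @legacy_csv_stage
  FILE_FORMAT = (FORMAT_NAME = 'legacy_csv_format')
  PURGE = TRUE;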
Tonja
21 days ago
I think we should use ON_ERROR = CONTINUE in the COPY INTO command for better performance.
upvoted 0 times
...
Thaddeus
23 days ago
I think option D is the correct answer. 'ON_ERROR = SKIP_FILE' allows you to skip any files with errors during the data ingestion process, which is more performant and cost-effective than having to manually intervene or restart the entire process.
upvoted 0 times
Malika
2 days ago
Yes, it's important to minimize any interruptions during the data ingestion process.
upvoted 0 times
...
Caprice
3 days ago
I think so too, skipping files with errors will definitely help with performance and cost.
upvoted 0 times
...
Margo
11 days ago
I agree, option D seems like the best choice for this scenario.
upvoted 0 times
...
...
