
Snowflake Exam ARA-C01 Topic 3 Question 48 Discussion

Actual exam question for Snowflake's ARA-C01 exam
Question #: 48
Topic #: 3

A company is trying to ingest 10 TB of CSV data into a Snowflake table using Snowpipe as part of its migration from a legacy database platform. The records need to be ingested in the MOST performant and cost-effective way.

How can these requirements be met?

Suggested Answer: D

For ingesting a large volume of CSV data into Snowflake using Snowpipe, especially a 10 TB load like this one, the ON_ERROR = SKIP_FILE copy option is highly effective. It tells Snowpipe to skip any file that raises an error during ingestion rather than halting or slowing the overall load. This keeps the migration performant and cost-effective: problematic files are set aside instead of being reprocessed, and the rest of the data continues to load uninterrupted.
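
For context, here is a minimal sketch of a pipe definition that applies this option. The pipe, table, and stage names (csv_pipe, target_table, legacy_stage) are illustrative only, and AUTO_INGEST = TRUE assumes event notifications have already been configured on the external stage:

    -- Hypothetical object names, for illustration only
    CREATE OR REPLACE PIPE csv_pipe
      AUTO_INGEST = TRUE            -- assumes cloud event notifications are set up
    AS
      COPY INTO target_table
      FROM @legacy_stage
      FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
      ON_ERROR = SKIP_FILE;         -- skip any file that errors; keep loading the rest

Worth noting: SKIP_FILE is already the default ON_ERROR value for Snowpipe (bulk COPY defaults to ABORT_STATEMENT), and a pipe's COPY statement does not support PURGE = TRUE, so the purge approach debated in the comments below would require cleaning up staged files separately.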


Contribute your Thoughts:

Larae
2 months ago
Ah, the age-old debate: to continue or to skip? I say, why not both? Use 'on error = SKIP_FILE' and then go out for a nice, relaxing purge. Ah, the life of a data engineer.
upvoted 0 times
Lenna
24 hours ago
Agreed, we can always purge later if needed.
upvoted 0 times
...
Sunshine
5 days ago
Let's go with 'on error = SKIP_FILE' for now.
upvoted 0 times
...
Jessenia
8 days ago
So, combining 'on error = SKIP_FILE' and 'purge = TRUE' could be the best approach for this data ingestion process.
upvoted 0 times
...
Ling
13 days ago
That's true, 'purge = TRUE' can help with performance by removing files after they are successfully loaded.
upvoted 0 times
...
Leslee
28 days ago
But what about using 'purge = TRUE' in the copy into command? Wouldn't that help with performance?
upvoted 0 times
...
Shayne
29 days ago
I agree, using 'on error = SKIP_FILE' is a good way to handle errors during ingestion.
upvoted 0 times
...
...
Vernell
2 months ago
I think using on error = SKIP_FILE would be the best option to skip files with errors and continue the ingestion process smoothly.
upvoted 0 times
...
Erick
2 months ago
Hmm, I'm not sure about these options. 'FURGE = FALSE'? Is that even a real Snowflake command? I think I'll go with option D, just to be safe.
upvoted 0 times
Geraldine
1 month ago
Yeah, I think option D is the way to go. Let's go with that.
upvoted 0 times
...
Jacquelyne
1 month ago
I agree, option C sounds suspicious. Option D seems like the safest choice.
upvoted 0 times
...
Leonor
2 months ago
Option C is definitely not a real Snowflake command. I would go with option D as well.
upvoted 0 times
...
...
Tonja
2 months ago
But wouldn't using ON_ERROR = continue help in case of any errors during ingestion?
upvoted 0 times
...
Julieta
2 months ago
I disagree, I believe using purge = TRUE in the copy into command would be more cost-effective.
upvoted 0 times
...
Valentin
2 months ago
Option B looks good to me. 'purge = TRUE' will remove the CSV files from the stage after they've been successfully ingested, so you don't have to worry about storage costs or management.
upvoted 0 times
Elise
1 month ago
Yes, 'purge = TRUE' is definitely the way to go for a performant and cost-effective data ingestion process.
upvoted 0 times
...
Cordie
2 months ago
I agree, using 'purge = TRUE' is the most cost-effective way to handle the ingestion of the 10 TB of CSV data into Snowflake.
upvoted 0 times
...
...
Tonja
2 months ago
I think we should use ON_ERROR = continue in the copy into command for better performance.
upvoted 0 times
...
Thaddeus
2 months ago
I think option D is the correct answer. 'on error = SKIP_FILE' allows you to skip any files with errors during the data ingestion process, which is more performant and cost-effective than having to manually intervene or restart the entire process.
upvoted 0 times
Malika
2 months ago
Yes, it's important to minimize any interruptions during the data ingestion process.
upvoted 0 times
...
Caprice
2 months ago
I think so too, skipping files with errors will definitely help with performance and cost.
upvoted 0 times
...
Margo
2 months ago
I agree, option D seems like the best choice for this scenario.
upvoted 0 times
...
...
