Snowflake Exam DEA-C01 Topic 4 Question 46 Discussion

Actual exam question for Snowflake's DEA-C01 exam
Question #: 46
Topic #: 4

A CSV file around 1 TB in size is generated daily on an on-premises server. A corresponding table, internal stage, and file format have already been created in Snowflake to facilitate the data loading process.

How can the process of bringing the CSV file into Snowflake be automated using the LEAST amount of operational overhead?

Suggested Answer: C

This option automates the load with the least operational overhead. SnowSQL is a command-line client that can execute SQL statements and scripts against Snowflake. By scheduling a SQL file that executes a PUT command, the CSV file can be pushed from the on-premises server to the internal stage in Snowflake. A pipe whose COPY INTO statement references the internal stage then lets Snowpipe load the file into the table automatically when it detects a new file in the stage, so there is no need to manually start or monitor a virtual warehouse or task.
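
To make that concrete, here is a minimal sketch of the two pieces option C describes. The connection, file path, stage, pipe, table, and file format names are placeholders, not from the question:

-- On the on-premises server: a SQL file (e.g. load_daily.sql) run
-- through SnowSQL on a schedule, for example from cron:
--   0 2 * * * snowsql -c my_connection -f load_daily.sql
-- PUT uploads (and by default compresses) the local file into the
-- existing internal stage.
PUT file:///data/exports/daily.csv @my_internal_stage;

-- One-time setup in Snowflake: a pipe whose COPY INTO statement
-- references the internal stage, so Snowpipe loads each new file as
-- it arrives, with no user-managed warehouse or task to run.
CREATE PIPE my_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO my_table
  FROM @my_internal_stage
  FILE_FORMAT = (FORMAT_NAME = 'my_csv_format');

One practical note: splitting the 1 TB file into smaller chunks before the PUT lets both the upload and the COPY parallelize, which Snowflake generally recommends for large loads.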


Contribute your Thoughts:

Erick
20 days ago
Haha, did someone say 1 TB CSV file? That's a whole lot of data! I hope they have a good internet connection on that on-premise server.
Lindsey
23 days ago
Hmm, I'm not so sure. What if the file is too big for Snowpipe to handle? Maybe option D using Snowpark Python would be better.
Glennis
1 month ago
I agree, C is the best option. Automating the process with Snowpipe is the way to go. No need to manually run tasks or scripts.
Weldon
16 days ago
C) On the on-premises server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a pipe that runs a COPY INTO statement that references the internal stage. Snowpipe auto-ingest will automatically load the file from the internal stage when a new file lands in the internal stage.
Vashti
1 month ago
I see your points, but I personally prefer option D. Using a Python script to directly load the data into the table without the need for an internal stage sounds like a more flexible approach.
Terrilyn
2 months ago
I disagree, I believe option C is more efficient. Using Snowpipe auto-ingest to automatically load the file seems like a time-saving solution.
Tequila
2 months ago
The correct answer is C. Snowpipe will automatically load the file from the internal stage when a new file lands, which involves the least operational overhead.
Sophia
4 days ago
I think the answer is A. It involves creating a task in Snowflake that runs a copy into statement once a day.
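
For comparison, a rough sketch of the task-based approach Sophia describes (again with placeholder names). Unlike the Snowpipe route, the task needs a user-managed warehouse and fires on its schedule whether or not a new file has landed:

-- Option A style: a Snowflake task that runs COPY INTO once a day.
CREATE TASK daily_csv_load
  WAREHOUSE = my_wh
  SCHEDULE = 'USING CRON 0 3 * * * UTC'
  AS
  COPY INTO my_table
  FROM @my_internal_stage
  FILE_FORMAT = (FORMAT_NAME = 'my_csv_format');

-- Tasks are created suspended and must be resumed to start running.
ALTER TASK daily_csv_load RESUME;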
Polly
5 days ago
Let's go with Snowpipe then; it seems like the most straightforward solution.
Chaya
12 days ago
I agree, Snowpipe will save us a lot of operational overhead.
Latrice
15 days ago
Snowpipe sounds like the most efficient option for loading the CSV file into Snowflake.
Roselle
26 days ago
I think the best way to automate the process is by using Snowpipe.
Leatha
2 months ago
I think option A is the best choice. It seems like the most straightforward way to automate the process with minimal overhead.
