
Microsoft AZ-305 Exam - Topic 2 Question 83 Discussion

Actual exam question for Microsoft's AZ-305 exam
Question #: 83
Topic #: 2

You have an Azure virtual machine named VM1 that runs Windows Server 2019 and contains 500 GB of data files.

You are designing a solution that will use Azure Data Factory to transform the data files and then load the files to Azure Data Lake Storage.

What should you deploy on VM1 to support the design?

A) the self-hosted integration runtime
B) the Azure Pipelines agent
C) the On-premises data gateway
D) the Azure File Sync agent

Suggested Answer: A
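
For anyone wondering how the self-hosted integration runtime actually fits in: after installing and registering the IR on VM1, you point a Data Factory linked service at the VM's file system and route it through that IR using `connectVia`. A rough sketch of what such a linked service definition looks like (the names, path, and user here are made-up placeholders, not from the question):

```json
{
  "name": "VM1FileSystemLinkedService",
  "properties": {
    "type": "FileServer",
    "typeProperties": {
      "host": "D:\\DataFiles",
      "userId": "vm1-user",
      "password": {
        "type": "SecureString",
        "value": "<stored-in-key-vault-or-secure-string>"
      }
    },
    "connectVia": {
      "referenceName": "VM1SelfHostedIR",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```

The `connectVia` reference is the piece that makes Data Factory reach the files on VM1 through the self-hosted IR instead of the default Azure-hosted runtime.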

Contribute your Thoughts:

Christene
2 months ago
Surprised that people think anything other than A is correct!
upvoted 0 times
...
Valentine
3 months ago
I thought the On-premises data gateway would work too?
upvoted 0 times
...
Ona
3 months ago
Azure Pipelines agent isn't relevant here, just saying.
upvoted 0 times
...
Lajuana
3 months ago
Totally agree, A is the right choice!
upvoted 0 times
...
Werner
3 months ago
You need the self-hosted integration runtime for this.
upvoted 0 times
...
Lino
4 months ago
I practiced a similar question, and I believe the self-hosted integration runtime is definitely the right choice here.
upvoted 0 times
...
Vallie
4 months ago
The On-premises data gateway sounds familiar, but I can't recall if it's specifically for Data Factory or something else.
upvoted 0 times
...
Maryrose
4 months ago
I'm not entirely sure, but I remember something about the Azure Pipelines agent being used for CI/CD, not for data transfer.
upvoted 0 times
...
Skye
4 months ago
I think we need the self-hosted integration runtime for Azure Data Factory to connect to the on-premises data.
upvoted 0 times
...
Vallie
4 months ago
I'm a little confused by the options here. The Azure Pipelines agent and On-premises data gateway both sound like they could be relevant, but I'm not sure how they differ from the self-hosted integration runtime. I'll need to research the specific use cases for each of these components to make the right choice.
upvoted 0 times
...
Jacquline
5 months ago
Okay, I've got this. The self-hosted integration runtime is what we need to deploy on the on-premises VM to enable the data transformation and loading process. It acts as a bridge between the local environment and the Azure Data Factory service. Simple enough!
upvoted 0 times
...
Adolph
5 months ago
Hmm, I'm a bit unsure about this one. The options seem to cover different Azure services and agents, but I'm not entirely clear on how they relate to the specific scenario described in the question. I'll need to review the differences between these components to determine the best fit.
upvoted 0 times
...
Curtis
5 months ago
This looks like a straightforward question about setting up the right components to enable data transformation and loading from an on-premises VM to Azure Data Lake Storage. I think the self-hosted integration runtime is the key piece we need here.
upvoted 0 times
...
Alfred
6 months ago
Haha, this question is a real brain-teaser. I bet the developers who wrote it were chuckling to themselves the whole time.
upvoted 0 times
...
Tonette
6 months ago
What, no love for the Azure File Sync agent? I bet that could help us get the data over to Azure Data Lake Storage without too much hassle.
upvoted 0 times
Ena
5 months ago
Yeah, that would be the best option to support the design.
upvoted 0 times
...
Angelo
5 months ago
I think we should deploy the self-hosted integration runtime on VM1.
upvoted 0 times
...
...
Fannie
7 months ago
I don't know, the Azure Pipelines agent seems like overkill. We're just trying to transform and load some data, not build a full CI/CD pipeline.
upvoted 0 times
Carey
5 months ago
B) the Azure Pipelines agent
upvoted 0 times
...
Delisa
5 months ago
A) the self-hosted integration runtime
upvoted 0 times
...
...
Jacqueline
7 months ago
I believe the self-hosted integration runtime is the best choice because it allows for data transformation on VM1.
upvoted 0 times
...
Quentin
7 months ago
I'm not sure, but I think the On-premises data gateway could also be a good option for this design.
upvoted 0 times
...
Linn
7 months ago
C'mon, the On-premises data gateway is the obvious choice here. It's designed specifically for this kind of hybrid scenario.
upvoted 0 times
Corinne
5 months ago
C) the On-premises data gateway
upvoted 0 times
...
Valentin
5 months ago
B) the Azure Pipelines agent
upvoted 0 times
...
Rima
7 months ago
A) the self-hosted integration runtime
upvoted 0 times
...
...
Sunny
7 months ago
I agree with Katy, the self-hosted integration runtime is needed for data transformation.
upvoted 0 times
...
Katy
8 months ago
I think we should deploy the self-hosted integration runtime on VM1.
upvoted 0 times
...
Julie
8 months ago
Hmm, I think the self-hosted integration runtime is the way to go. It can connect the on-premises data to Azure Data Factory, which is perfect for this use case.
upvoted 0 times
Nelida
7 months ago
I agree, the self-hosted integration runtime is the best option for this scenario.
upvoted 0 times
...
Samira
7 months ago
A) the self-hosted integration runtime
upvoted 0 times
...
...
