
Hitachi Vantara Exam HCE-5920 Topic 2 Question 62 Discussion

Actual exam question for Hitachi Vantara's HCE-5920 exam
Question #: 62
Topic #: 2

A customer needs to extract data from many different text file layouts, with new file layouts expected to be identified in the future, and they want to insert the data into corresponding database tables. They are concerned about maintaining multiple PDI jobs and transformations given the large number of unique files.

What should you do to meet the requirements when creating transformations?

Suggested Answer: A, B
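For anyone new to the metadata injection idea the comments below debate, here is a minimal, hypothetical Python sketch of the pattern it implements. This is not PDI code, and all names here (LAYOUTS, load_file, the tables and columns) are made up for illustration; the point is that one generic loader is driven by layout metadata, so supporting a new file layout means adding a metadata entry rather than building a new transformation:

```python
# Conceptual sketch only -- NOT Pentaho/PDI code. It illustrates the
# metadata-driven pattern behind PDI's "ETL metadata injection" step:
# one generic pipeline whose field layout comes from metadata at runtime.
import csv
import sqlite3

# Hypothetical layout registry: file kind -> (target table, ordered columns).
# In PDI, rows like these would be streamed into the metadata injection step,
# which injects them into a single template transformation (.ktr).
LAYOUTS = {
    "customers": ("customers", ["id", "name", "email"]),
    "orders":    ("orders",    ["order_id", "customer_id", "amount"]),
}

def load_file(conn: sqlite3.Connection, kind: str, path: str) -> None:
    """Parse one delimited file using its registered layout and insert rows."""
    table, columns = LAYOUTS[kind]                  # look up layout metadata
    placeholders = ", ".join("?" * len(columns))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    with open(path, newline="") as fh:
        rows = (row[: len(columns)] for row in csv.reader(fh))
        conn.executemany(sql, rows)                 # same code path for every layout
    conn.commit()
```

In PDI terms, the LAYOUTS rows play the role of the metadata stream fed into the ETL Metadata Injection step, which injects field definitions into one template transformation at runtime instead of requiring a transformation per file layout.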

Contribute your Thoughts:

Rosendo
1 month ago
The ETL Metadata injection step (Option C) sounds like the way to go. Gotta love those fancy-sounding steps!
upvoted 0 times
Arlene
1 month ago
Metadata structure of stream step (Option A) could work, but I'm not sure it can handle the large number of unique files efficiently.
upvoted 0 times
Precious
8 days ago
D: Using the Job Executor step for each file might also be worth considering for this scenario.
upvoted 0 times
Johnson
11 days ago
C: What about using the ETL Metadata injection step instead? That could be a better solution.
upvoted 0 times
Ciara
15 days ago
B: I agree, but it might not be the most efficient option for a large number of files.
upvoted 0 times
Deonna
21 days ago
A: I think using the Metadata structure of stream step for each file could help with handling the unique files.
upvoted 0 times
Octavio
1 month ago
Haha, using the Job Executor step for each file (Option D)? That's like trying to swat a fly with a sledgehammer!
upvoted 0 times
Launa
12 days ago
C) Use the ETL Metadata injection step.
upvoted 0 times
Major
18 days ago
B) Use the Transformation Executor step for each file.
upvoted 0 times
Maryrose
19 days ago
A) Use the Metadata structure of stream step for each file.
upvoted 0 times
Madonna
2 months ago
Using the Transformation Executor step for each file (Option B) sounds like a lot of manual work. I'd prefer a more automated solution.
upvoted 0 times
Lezlie
2 months ago
C) Use the ETL Metadata injection step.
upvoted 0 times
Stephanie
2 months ago
A) Use the Metadata structure of stream step for each file.
upvoted 0 times
Elenor
2 months ago
Option C looks promising; the ETL Metadata injection step seems like it could handle the dynamic file layouts.
upvoted 0 times
Bette
1 month ago
It's definitely a good solution for maintaining transformations with changing file layouts.
upvoted 0 times
Marshall
1 month ago
I agree, it would make it easier to manage the large number of unique files without having to create multiple PDI jobs.
upvoted 0 times
Berry
1 month ago
Yes, using the ETL Metadata injection step would allow for flexibility with new file layouts in the future.
upvoted 0 times
Nicholle
2 months ago
But wouldn't using the Transformation Executor step for each file be more efficient?
upvoted 0 times
Torie
3 months ago
I disagree, I believe we should use the ETL Metadata injection step.
upvoted 0 times
Nicholle
3 months ago
I think we should use the Metadata structure of stream step for each file.
upvoted 0 times
