
Databricks Certified Generative AI Engineer Associate Exam - Topic 1 Question 10 Discussion

Actual exam question for Databricks's Databricks Certified Generative AI Engineer Associate exam
Question #: 10
Topic #: 1
[All Databricks Certified Generative AI Engineer Associate Questions]

A Generative AI Engineer is tasked with developing an application based on an open-source large language model (LLM). They need a foundation LLM with a large context window.

Which model fits this need?

Suggested Answer: D, E

In the context of developing a chatbot for a company's internal HelpDesk Call Center, the key is to select data sources that provide the most contextual and detailed information about the issues being addressed. This includes identifying the root cause and suggesting resolutions. The two most appropriate sources from the list are:

Call Detail (Option D):

Contents: This Delta table includes a snapshot of all call details updated hourly, featuring essential fields like root_cause and resolution.

Relevance: The inclusion of root_cause and resolution fields makes this source particularly valuable, as it directly contains the information necessary to understand and resolve the issues discussed in the calls. Even if some records are incomplete, the data provided is crucial for a chatbot aimed at speeding up resolution identification.
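To illustrate the point about incomplete records, a minimal sketch of an ingestion step might keep only call-detail rows whose root_cause and resolution fields are populated. The field names come from the explanation above; the sample data and function name are invented for illustration:

```python
# Hedged sketch: filter call-detail records so only rows with a usable
# root_cause and resolution reach the chatbot's knowledge base.
# Field names (root_cause, resolution) follow the description above;
# everything else here is a hypothetical example.

def usable_records(records):
    """Keep records where both root_cause and resolution are non-empty."""
    return [
        r for r in records
        if r.get("root_cause") and r.get("resolution")
    ]

calls = [
    {"call_id": 1, "root_cause": "expired password", "resolution": "reset via portal"},
    {"call_id": 2, "root_cause": "", "resolution": ""},  # incomplete record
]

complete = usable_records(calls)  # only call 1 survives the filter
```

In a real pipeline this filter would run over the hourly Delta table snapshot rather than an in-memory list, but the logic is the same.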

Transcript Volume (Option E):

Contents: This Unity Catalog Volume contains recordings in .wav format and text transcripts in .txt files.

Relevance: The text transcripts of call recordings can provide in-depth context that the chatbot can analyze to understand the nuances of each issue. The chatbot can use natural language processing techniques to extract themes, identify problems, and suggest resolutions based on previous similar interactions documented in the transcripts.
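Since the Volume mixes .wav recordings with .txt transcripts, a small sketch (paths and function name hypothetical) can show how a pipeline would pick up only the text files for NLP processing; on Databricks, a Unity Catalog Volume is typically mounted under a /Volumes/<catalog>/<schema>/<volume> path:

```python
# Hedged sketch: collect only the .txt transcripts from a volume path,
# ignoring the .wav recordings, so the text can feed an NLP pipeline.
from pathlib import Path

def load_transcripts(volume_path):
    """Return {filename: text} for every .txt file under volume_path."""
    return {
        p.name: p.read_text(encoding="utf-8")
        for p in Path(volume_path).glob("*.txt")
    }
```

The glob on *.txt is what skips the audio files; the recordings would only matter if a speech-to-text step were added upstream.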

Why Other Options Are Less Suitable:

A (Call Cust History): While it provides insights into customer interactions with the HelpDesk, it focuses on usage metrics rather than the content of the calls or the issues discussed.

B (Maintenance Schedule): This data is useful for understanding when services may not be available but does not contribute directly to resolving user issues or identifying root causes.

C (Call Rep History): Though it offers data on call durations and start times, which could help in assessing performance, it lacks direct information on the issues being resolved.

Therefore, Call Detail and Transcript Volume are the most relevant data sources for a chatbot designed to assist with identifying and resolving issues in a HelpDesk Call Center setting, as they provide direct and contextual information related to customer issues.
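Putting the two sources together, a minimal sketch (all field names, file names, and the helper itself are hypothetical) might merge the structured resolutions and the transcript text into one document list the chatbot can retrieve from:

```python
# Hedged sketch: combine call-detail rows and transcript texts into a
# single list of retrievable documents for the HelpDesk chatbot.
# Field names mirror the explanation above; the format strings are invented.

def build_corpus(call_details, transcripts):
    """Turn call-detail rows and transcript texts into plain-text documents."""
    docs = []
    for row in call_details:
        # Skip incomplete records, as noted for the Call Detail source.
        if row.get("root_cause") and row.get("resolution"):
            docs.append(
                f"Root cause: {row['root_cause']}. Resolution: {row['resolution']}."
            )
    for name, text in transcripts.items():
        docs.append(f"Transcript {name}: {text}")
    return docs
```

Each document could then be embedded and indexed for retrieval; the exact vector store and embedding model are separate design choices not covered here.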


Contribute your Thoughts:

Solange
3 months ago
DistilBERT seems too small for this task, right?
upvoted 0 times
...
Mollie
3 months ago
Wait, DBRX? I haven't heard of that one before.
upvoted 0 times
...
Annmarie
3 months ago
MPT-30B is also a solid option, though.
upvoted 0 times
...
Colette
4 months ago
Totally agree, it's the best choice here.
upvoted 0 times
...
Annice
4 months ago
Llama2-70B has a huge context window!
upvoted 0 times
...
Felicia
4 months ago
DBRX is new to me, but I doubt it’s the right choice since I haven't seen it referenced in our materials.
upvoted 0 times
...
Lorean
4 months ago
Llama2-70B sounds familiar; I think it was mentioned in a practice question about large context windows.
upvoted 0 times
...
Vashti
4 months ago
I feel like MPT-30B might be a good option, but I can't recall its context window size compared to others.
upvoted 0 times
...
Kristel
5 months ago
I remember that DistilBERT is more about efficiency and smaller models, so I don't think it has a large context window.
upvoted 0 times
...
Alise
5 months ago
I'm feeling pretty confident about this one. The key is the requirement for a large context window, which points to either MPT-30B or Llama2-70B as the most suitable options.
upvoted 0 times
...
Cecilia
5 months ago
Based on the details provided, I'd say Llama2-70B is the best fit. It's a large language model with a sizable context window, which seems perfect for this generative AI application.
upvoted 0 times
...
Florinda
5 months ago
I'm a bit confused here. What exactly is a "large context window" and how does that relate to the model requirements? I'll need to review my notes on LLMs to figure this out.
upvoted 0 times
...
Kati
5 months ago
Okay, let's see. A large context window means the model needs to be able to handle long input sequences. I'm thinking MPT-30B or Llama2-70B might be good options.
upvoted 0 times
...
Chauncey
5 months ago
Hmm, this is a tricky one. I'll need to think carefully about the key requirements - a large context window is essential for this task.
upvoted 0 times
...
Dulce
10 months ago
Llama2-70B, really? Sounds like a model that's straight out of a sci-fi movie. I hope the Generative AI Engineer doesn't get lost in a 70 billion parameter maze!
upvoted 0 times
Daryl
9 months ago
Hopefully the Generative AI Engineer can navigate through that 70 billion parameter maze!
upvoted 0 times
...
Iesha
9 months ago
Yeah, but it does sound like something out of a sci-fi movie!
upvoted 0 times
...
Vernell
9 months ago
I think Llama2-70B is a great choice for a large context window.
upvoted 0 times
...
...
Aleta
10 months ago
Hmm, I'm not convinced. DBRX sounds mysterious, maybe it's a new cutting-edge model that could be a better fit. I'd have to do some research on that one.
upvoted 0 times
Annett
9 months ago
DBRX sounds interesting, I wonder if it's worth looking into.
upvoted 0 times
...
Melvin
9 months ago
Llama2-70B has a large context window, that could be a good fit.
upvoted 0 times
...
Detra
9 months ago
I'm not sure, I feel like MPT-30B might be a better option.
upvoted 0 times
...
Luisa
9 months ago
DBRX does sound intriguing, it might be worth looking into.
upvoted 0 times
...
Penney
9 months ago
I think DistilBERT could work well for that.
upvoted 0 times
...
...
Hyman
10 months ago
I agree with Chanel, DistilBERT is known for its performance with a large context window.
upvoted 0 times
...
Chanel
10 months ago
I disagree, I believe A) DistilBERT would be a better fit because of its efficiency.
upvoted 0 times
...
Geoffrey
10 months ago
I'm not sure about that. Isn't DistilBERT also a large language model? Maybe that would be a better fit since it's a distilled version of BERT, which is a well-known and powerful model.
upvoted 0 times
...
Lilli
11 months ago
I think option C, Llama2-70B, is the correct answer here. It's a large language model with a large context window, which is what the Generative AI Engineer needs for their application.
upvoted 0 times
...
Merilyn
11 months ago
I think the model that fits this need is C) Llama2-70B.
upvoted 0 times
...
