
Databricks Exam Databricks-Generative-AI-Engineer-Associate Topic 4 Question 9 Discussion

Actual exam question for Databricks's Databricks-Generative-AI-Engineer-Associate exam
Question #: 9
Topic #: 4

A Generative AI Engineer is creating an LLM-based application. The documents for its retriever have been chunked to a maximum of 512 tokens each. The Generative AI Engineer knows that cost and latency are more important than quality for this application. They have several context length levels to choose from.

Which will fulfill their need?
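As an illustrative aside (not part of the exam question), a minimal Python sketch of chunking retriever documents to a 512-token maximum; the tiktoken library and the cl100k_base encoding are assumptions here, not details taken from the question:

import tiktoken

MAX_TOKENS = 512  # the chunk limit described in the question

def chunk_document(text: str, max_tokens: int = MAX_TOKENS) -> list[str]:
    # Tokenize the document, slice the token sequence into consecutive
    # windows of at most max_tokens, and decode each window back to text
    # so it can be indexed by the retriever.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i:i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

The trade-off the question is probing: a smaller context length lowers cost and latency, but it still has to hold the prompt plus the retrieved 512-token chunks.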

Suggested Answer: A

When deploying an LLM application for customer service inquiries, the primary focus is on measuring the operational efficiency and quality of the responses. Here's why A is the correct metric:

Number of customer inquiries processed per unit of time: This metric tracks the throughput of the customer service system, reflecting how many customer inquiries the LLM application can handle in a given time period (e.g., per minute or hour). High throughput is crucial in customer service applications where quick response times are essential to user satisfaction and business efficiency.

Real-time performance monitoring: Monitoring the number of queries processed is an important part of ensuring that the model is performing well under load, especially during peak traffic times. It also helps ensure the system scales properly to meet demand.

Why the other options are not ideal:

B. Energy usage per query: While energy efficiency is a consideration, it is not the primary concern for a customer-facing application where user experience (i.e., fast and accurate responses) is critical.

C. Final perplexity scores for the training of the model: Perplexity is a metric for model training, but it doesn't reflect the real-time operational performance of an LLM in production.

D. HuggingFace Leaderboard values for the base LLM: The HuggingFace Leaderboard is more relevant during model selection and benchmarking. However, it is not a direct measure of the model's performance in a specific customer service application in production.

Focusing on throughput (inquiries processed per unit time) ensures that the LLM application is meeting business needs for fast and efficient customer service responses.
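As a hedged illustration of the throughput metric described above, here is one way to count inquiries processed per unit of time; the monitor class and the one-minute sliding window are assumptions for this sketch, not part of any Databricks tooling:

import time
from collections import deque

class ThroughputMonitor:
    """Counts processed inquiries inside a sliding time window."""

    def __init__(self, window_seconds: float = 60.0):
        self.window_seconds = window_seconds
        self._timestamps: deque[float] = deque()

    def record_inquiry(self) -> None:
        # Call this once each time the application finishes an inquiry.
        now = time.monotonic()
        self._timestamps.append(now)
        # Drop events that have fallen outside the sliding window.
        while self._timestamps and now - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()

    def inquiries_per_minute(self) -> float:
        # Normalize the in-window count to a per-minute rate.
        return len(self._timestamps) * (60.0 / self.window_seconds)

Plotting this rate over time, especially at peak traffic, is what the explanation means by monitoring whether the system scales to meet demand.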


Contribute your Thoughts:

Hortencia
27 days ago
Haha, I love how the options just keep getting more and more absurd. 14GB for a model? They must be running this on a supercomputer!
upvoted 0 times
Junita
1 month ago
32,768 tokens? Are they trying to build Skynet or something? I think they need to dial it back a bit and focus on the practical needs.
upvoted 0 times
Amie
1 day ago
C: I think option D with context length 512 would be more practical.
upvoted 0 times
Jesus
26 days ago
B: Yeah, they should focus on cost and latency.
upvoted 0 times
Dacia
29 days ago
A: I agree, 32,768 tokens seems excessive.
upvoted 0 times
Robt
1 month ago
Hmm, 514 tokens might work, but that extra cost and size is probably not worth it. I'd go with the 384 embedding dimension option.
upvoted 0 times
Shawna
1 day ago
I think the 384 embedding dimension with 512 tokens is the way to go for this application.
upvoted 0 times
Cassie
4 days ago
Yeah, the extra cost and size for the 514 tokens might not be worth it.
upvoted 0 times
Jeff
27 days ago
I agree, the 384 embedding dimension option seems like the best choice.
upvoted 0 times
Stephaine
2 months ago
Wow, 512 tokens per chunk? That's really compact. I guess they're going for speed and efficiency, not the highest quality.
upvoted 0 times
Mattie
29 days ago
B: Yeah, I agree. The smaller model size and lower embedding dimension would definitely help with cost and latency.
upvoted 0 times
Ashanti
1 month ago
A: I think option D with context length 512 would be the best choice for speed and efficiency.
upvoted 0 times
Cherry
2 months ago
But the Generative AI Engineer mentioned that cost and latency are more important than quality, so a smaller model like option D might be more efficient.
upvoted 0 times
Maile
2 months ago
I disagree, I believe option A with context length 514 is more suitable for this application.
upvoted 0 times
Cherry
2 months ago
I think option D with context length 512 would be the best choice.
upvoted 0 times
