
Databricks Exam Databricks Certified Generative AI Engineer Associate Topic 5 Question 13 Discussion

Actual exam question from the Databricks Certified Generative AI Engineer Associate exam
Question #: 13
Topic #: 5
[All Databricks Certified Generative AI Engineer Associate Questions]

A team wants to serve a code generation model as an assistant for their software developers. It should support multiple programming languages. Quality is the primary objective.

Which of the Databricks Foundation Model APIs, or models available in the Marketplace, would be the best fit?

Suggested Answer: D (CodeLlama-34B)

The question asks for a code generation model that supports multiple programming languages, with quality as the primary objective. Among the options discussed in the thread below, CodeLlama-34B is the best fit:

CodeLlama-34B: Code Llama is a family of models derived from Llama 2 and specialized for code generation and completion. It is trained on code across many programming languages (Python, C++, Java, PHP, TypeScript, C#, Bash, and more), directly matching the multi-language requirement, and the 34B variant offers higher generation quality than the smaller Code Llama sizes.

Why the other options are not ideal:

A. Llama2-70b: A strong general-purpose chat model, but it is not specialized for code. Since quality of code generation is the priority, a dedicated code model is the better choice.

B. BGE-large: An embedding model designed for retrieval and semantic search. It does not generate text at all, so it cannot serve as a code assistant.

C. MPT-7b: A small general-purpose model. At 7B parameters it is unlikely to match the code quality of a 34B model that was purpose-built for code generation.

Because the workload is multi-language code generation and quality is the stated priority, CodeLlama-34B (available through the Databricks Marketplace and servable via Databricks Model Serving) is the best choice.
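In practice, a model served this way is queried through a Databricks model serving endpoint that accepts an OpenAI-style chat payload. The sketch below only assembles such a payload; the workspace URL and endpoint name are placeholders (not specified in the question), so check your own workspace for the exact endpoint name.

```python
import json

# Placeholder URL -- substitute your workspace host and the name of the
# CodeLlama (or other) serving endpoint. This is an assumption for
# illustration, not a value from the question.
ENDPOINT = "https://<workspace-host>/serving-endpoints/<codellama-endpoint>/invocations"

def build_chat_payload(prompt: str, max_tokens: int = 256,
                       temperature: float = 0.1) -> dict:
    """Assemble the JSON body for a single-turn code-generation request.

    A low temperature keeps completions close to deterministic, which is
    usually preferable for code assistance.
    """
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_chat_payload("Write a Go function that parses an ISO 8601 date.")
print(json.dumps(body, indent=2))

# To actually send the request, POST `body` with a workspace bearer token:
#   requests.post(ENDPOINT,
#                 headers={"Authorization": f"Bearer {token}"},
#                 json=body)
```

The same payload shape works regardless of which chat model backs the endpoint, so swapping CodeLlama-34B for another model is a serving-side change only.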


Contribute your Thoughts:

Nan
3 months ago
I'm just happy I don't have to come up with the model name. Sounds like something out of a sci-fi novel!
upvoted 0 times
Alfred
1 month ago
D) CodeLlama-34B
upvoted 0 times
Kayleigh
1 month ago
C) MPT-7b
upvoted 0 times
Micaela
2 months ago
B) BGE-large
upvoted 0 times
Cyndy
3 months ago
A) Llama2-70b
upvoted 0 times
Jutta
3 months ago
BGE-large? Seriously? That's for language modeling, not code generation. This is a no-brainer, folks. CodeLlama-34B is the way to go.
upvoted 0 times
Paul
3 months ago
I'm not sure about CodeLlama-34B. It's a bit of an unknown, and I'd prefer a more well-established model like Llama2-70b or MPT-7b. Quality is the priority, so I'd go with one of those.
upvoted 0 times
Marcelle
3 months ago
I think CodeLlama-34B sounds like the best fit. It's designed specifically for code generation and supports multiple languages, which is exactly what the team needs.
upvoted 0 times
Jeff
1 month ago
Yes, CodeLlama-34B appears to be the ideal fit for the team. It meets the criteria of supporting multiple programming languages and focusing on code generation.
upvoted 0 times
Gerald
2 months ago
CodeLlama-34B definitely stands out as the best choice for the team. It aligns perfectly with their needs for a code generation model.
upvoted 0 times
Henriette
2 months ago
I think CodeLlama-34B is the way to go as well. It's tailored for code generation and supports multiple programming languages.
upvoted 0 times
Natalie
3 months ago
I agree, CodeLlama-34B seems like the most suitable option for the team's requirements.
upvoted 0 times
Anglea
4 months ago
I'm not sure, I think BGE-large could also be a good fit for supporting multiple programming languages.
upvoted 0 times
An
4 months ago
I agree with Dyan, quality is the primary objective so CodeLlama-34B seems like the right choice.
upvoted 0 times
Dyan
4 months ago
I think CodeLlama-34B would be the best fit because it focuses on quality.
upvoted 0 times
