
Databricks Certified Generative AI Engineer Associate Exam - Topic 2 Question 12 Discussion

Actual exam question from the Databricks Certified Generative AI Engineer Associate exam
Question #: 12
Topic #: 2
[All Databricks Certified Generative AI Engineer Associate Questions]

A Generative AI Engineer is developing an LLM application that lets users generate personalized birthday poems based on their names.

Which technique would be most effective in safeguarding the application, given the potential for malicious user inputs?
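To make the safeguard concrete, below is a minimal sketch of an input safety filter that refuses to forward flagged input to the model. All names (`is_malicious`, `generate_poem`, the pattern list) are illustrative, and a production system would typically use a trained moderation model or a hosted moderation API rather than a keyword list:

```python
import re

# Tiny illustrative blocklist; real filters use moderation models,
# not hand-written patterns.
BLOCKED_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"system prompt",
    r"\bdrop table\b",
]

def is_malicious(user_input: str) -> bool:
    """Return True if the input matches any blocked pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in BLOCKED_PATTERNS)

def generate_poem(name: str, llm_call) -> str:
    """Run the safety filter before handing the input to the LLM."""
    if is_malicious(name):
        # Refuse instead of passing the input to the model.
        return "Sorry, I can't help with that request."
    return llm_call(f"Write a short birthday poem for {name}.")
```

The key design point is that the filter sits in front of the model: flagged input never reaches the LLM at all, rather than the LLM being asked to police itself mid-conversation.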

Suggested Answer: D, E

In the context of developing a chatbot for a company's internal HelpDesk Call Center, the key is to select data sources that provide the most contextual and detailed information about the issues being addressed. This includes identifying the root cause and suggesting resolutions. The two most appropriate sources from the list are:

Call Detail (Option D):

Contents: This Delta table includes a snapshot of all call details updated hourly, featuring essential fields like root_cause and resolution.

Relevance: The inclusion of root_cause and resolution fields makes this source particularly valuable, as it directly contains the information necessary to understand and resolve the issues discussed in the calls. Even if some records are incomplete, the data provided is crucial for a chatbot aimed at speeding up resolution identification.

Transcript Volume (Option E):

Contents: This Unity Catalog Volume contains recordings in .wav format and text transcripts in .txt files.

Relevance: The text transcripts of call recordings can provide in-depth context that the chatbot can analyze to understand the nuances of each issue. The chatbot can use natural language processing techniques to extract themes, identify problems, and suggest resolutions based on previous similar interactions documented in the transcripts.

Why Other Options Are Less Suitable:

A (Call Cust History): While it provides insights into customer interactions with the HelpDesk, it focuses on usage metrics rather than the content of the calls or the issues discussed.

B (Maintenance Schedule): This data is useful for understanding when services may not be available but does not contribute directly to resolving user issues or identifying root causes.

C (Call Rep History): Though it offers data on call durations and start times, which could help in assessing performance, it lacks direct information on the issues being resolved.

Therefore, Call Detail and Transcript Volume are the most relevant data sources for a chatbot designed to assist with identifying and resolving issues in a HelpDesk Call Center setting, as they provide direct and contextual information related to customer issues.
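To illustrate why these two sources pair well, here is a minimal pure-Python sketch of joining call-detail records with their transcripts into documents a retrieval-based chatbot could index. All field, file, and function names are hypothetical; a real Databricks pipeline would read the Delta table via Spark and the transcripts from the Unity Catalog Volume:

```python
def build_context_docs(call_details: list[dict],
                       transcripts: dict[str, str]) -> list[str]:
    """Join call-detail records with their transcripts into text
    documents for a retrieval-based chatbot to index."""
    docs = []
    for record in call_details:
        root_cause = record.get("root_cause")
        resolution = record.get("resolution")
        if not (root_cause and resolution):
            continue  # skip incomplete records
        # Transcript text keyed by call id; empty if no transcript exists.
        transcript = transcripts.get(record["call_id"], "")
        docs.append(
            f"Issue: {root_cause}\nResolution: {resolution}\n"
            f"Transcript: {transcript}"
        )
    return docs
```

The structured fields supply the labeled root cause and resolution, while the transcript adds the conversational context needed to match a new issue against past ones.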


Contribute your Thoughts:

Cecily
3 months ago
Reducing interaction time? That seems unnecessary.
Joanna
3 months ago
Totally agree with A, we need to prioritize safety!
Alease
3 months ago
Wait, can a filter really catch everything? Sounds tricky.
Sabina
4 months ago
I think option C is better, it keeps the convo going.
Noah
4 months ago
A safety filter is definitely the way to go!
Katie
4 months ago
I vaguely recall something about compute power in relation to processing speed, but I don’t see how option D helps with safeguarding against malicious inputs.
Dorian
4 months ago
I think option C is a bit risky. If the LLM acknowledges malicious input but continues, it could confuse users about what’s acceptable.
Wynell
4 months ago
I'm not entirely sure, but I feel like reducing interaction time, like in option B, might not really address the core issue of malicious inputs.
Miriam
5 months ago
I remember we discussed the importance of safety filters in our last class. I think option A makes the most sense for preventing harmful inputs.
Josue
5 months ago
I'm a bit confused on this one. Asking the LLM to remind the user that the input is malicious, but continuing the conversation, doesn't seem like the best approach to me. I'll have to re-read the question and options carefully.
Lakeesha
5 months ago
The key here is to prevent the LLM from generating any harmful content, even if the user input is malicious. I think option A is the way to go - a safety filter is the most robust solution.
Cora
5 months ago
Hmm, I'm not sure. Reducing the user interaction time could also work, but that might not be the best user experience. I'll have to think this through carefully.
Elvis
5 months ago
This seems like a straightforward question about safeguarding an LLM application. I think implementing a safety filter that detects harmful inputs and prevents the LLM from responding to them is the most effective approach.
Alecia
5 months ago
Asking the LLM to remind the user that the input is malicious but continuing the conversation doesn't seem like the best idea. That could still lead to some risky outputs. I think the safety filter is the most robust solution.
Marva
5 months ago
The safety filter sounds like the way to go. We can't risk the LLM generating harmful content, even if the user is being malicious. Protecting our users should be the top priority here.
Elinore
5 months ago
Hmm, I'm a bit unsure about this one. Reducing the user interaction time or increasing the compute power might also help, but I'm not convinced those would be as effective as a safety filter. I'll have to think this through carefully.
Elmira
5 months ago
I think implementing a safety filter that detects harmful inputs and asks the LLM to respond that it's unable to assist is the most effective approach. We need to prioritize user safety and prevent the LLM from generating any potentially malicious content.
Corinne
9 months ago
I bet the person who came up with option C was trying to multitask while watching cat videos. Option A is the only way to keep this birthday poem generator from becoming a horror show.
Georgiana
8 months ago
We need to prioritize security and ensure that the LLM is protected from any potential harm.
Lashaunda
8 months ago
Implementing a safety filter is crucial for maintaining the integrity of the personalized birthday poem generator.
Sherell
8 months ago
I agree, a safety filter is necessary to protect the application and its users.
Julianna
8 months ago
Option A is definitely the way to go. We can't risk letting malicious inputs slip through.
Gerardo
10 months ago
Increasing compute power? Psh, that's like trying to outrun a speeding bullet. Option A is the only sensible choice here.
Tuyet
8 months ago
It's better to be safe than sorry when it comes to user-generated content.
Charisse
8 months ago
Definitely, we can't rely on just increasing compute power to handle malicious inputs.
Theron
9 months ago
I agree, implementing a safety filter is the best way to protect the application.
Izetta
10 months ago
Asking the LLM to just 'remind' the user? Yeah, right. That's like asking a lion to gently ask the gazelle to leave. Option A is the way to go.
Elina
8 months ago
Implementing a safety filter is definitely the best way to protect the application.
Meaghan
9 months ago
I agree, we can't rely on the LLM to handle malicious inputs on its own.
Ricki
9 months ago
Option A seems like the safest choice.
Rene
10 months ago
Reducing the interaction time? That's like trying to put a bandaid on a bullet wound. Option A is the clear winner here.
Izetta
8 months ago
It's important to prioritize security when developing AI applications like this. Option A is the way to go.
Ty
9 months ago
I agree, implementing a safety filter is crucial to safeguard the application from potential harm.
Avery
10 months ago
Option A is definitely the best choice. We need to have a safety filter in place to protect against malicious inputs.
Mattie
10 months ago
Definitely go with option A. Implementing a safety filter is the most effective way to protect against malicious inputs. Safety should be the top priority when dealing with LLMs.
Elsa
9 months ago
It's important to prioritize security when dealing with user inputs.
Dorthy
9 months ago
Implementing a safety filter is a proactive approach to prevent any potential harm.
Lisbeth
9 months ago
Yeah, safety should always come first when developing AI applications.
Blythe
9 months ago
I agree, option A seems like the best choice to safeguard the application.
Edelmira
9 months ago
Yes, safety should always come first when dealing with potential malicious inputs.
Chaya
10 months ago
I agree, option A is the best choice to safeguard the application.
Corrinne
10 months ago
I see both points, but I think option A is more practical. We should prioritize safety over continuing the conversation.
Adelina
11 months ago
I disagree, I think option C is better. We can still continue the conversation with the user while reminding them their input is malicious.
Cherilyn
11 months ago
I think option A would be the most effective. We need to protect the application from harmful inputs.
