Google Generative AI Leader Exam - Topic 4 Question 13 Discussion

Actual exam question for Google's Generative AI Leader exam
Question #: 13
Topic #: 4

A global news company is using a large language model to automatically generate summaries of news articles for their website. The model's summary of an international summit was accurate until it hallucinated by stating a detail that did not occur. How should the company overcome this hallucination?

Suggested Answer: D

The core problem is the model's hallucination: it invented a factual detail in a context (news reporting) where factual accuracy is non-negotiable. To correct such an error in a generative summary, the model must be constrained to state only verifiable facts drawn from a reliable source.

The most effective technique for combating hallucinations and ensuring factual adherence is Grounding (D). Grounding anchors the Large Language Model's (LLM's) output to a specific, trusted, and verifiable source of information, and it is often implemented using Retrieval-Augmented Generation (RAG). In this scenario, grounding the summarization model on the original source articles ensures that every generated statement is directly entailed by the article content rather than by the model's parametric memory.
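
To make the idea concrete, here is a minimal sketch of grounding via RAG. All names in it (`ARTICLE_STORE`, `call_llm`, `grounded_summary`) are hypothetical placeholders, not a specific Google Cloud API; a production system would typically use a managed grounding feature (for example, Vertex AI's grounding tools) instead of hand-rolled prompt assembly.

```python
# Minimal sketch of grounding via Retrieval-Augmented Generation (RAG).
# ARTICLE_STORE and call_llm are illustrative placeholders only.

ARTICLE_STORE = {
    "summit-2024": "Full text of the original news article goes here...",
}

def call_llm(prompt: str) -> str:
    # Placeholder for your model client's generate call (e.g., an LLM SDK).
    raise NotImplementedError("wire up your LLM client here")

def grounded_summary(article_id: str) -> str:
    # 1. Retrieve: fetch the trusted source document.
    article = ARTICLE_STORE[article_id]
    # 2. Augment: place the source text in the prompt and constrain the
    #    model to it, so each claim is entailed by the article rather than
    #    recalled (or invented) from the model's training data.
    prompt = (
        "Summarize the article below using ONLY facts stated in it. "
        "If a detail does not appear in the article, omit it.\n\n"
        f"ARTICLE:\n{article}\n\nSUMMARY:"
    )
    # 3. Generate: the model writes the summary from the grounded context.
    return call_llm(prompt)
```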

Option B, fine-tuning, is expensive and adjusts the model's general knowledge and style; it does not anchor generation to a specific source, so the model can still fabricate details. Option C, increasing temperature, makes the output more random and diverse, which would likely increase hallucinations, the opposite of the desired effect; for factual tasks the temperature is typically kept low, as the sketch below illustrates. Option A is unrelated to factual accuracy. Grounding is therefore the necessary step to anchor the model's responses to the true content of the source articles.
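
As a side note on Option C: most LLM APIs expose temperature as a sampling parameter, and for factual summarization it is usually kept low. The settings below are an illustrative sketch; the exact parameter names vary by SDK, so treat these as assumptions to check against your client library.

```python
# Illustrative sampling settings for a factual summarization task.
# Low temperature makes decoding near-deterministic, while raising it
# (as Option C suggests) adds randomness and invites hallucination.
generation_config = {
    "temperature": 0.1,        # low randomness for factual adherence
    "top_p": 0.9,              # nucleus sampling cutoff
    "max_output_tokens": 512,  # cap summary length
}
```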

(Reference: Google Cloud documentation on RAG/Grounding emphasizes that its primary purpose is to address the "knowledge cutoff" and hallucination issues of LLMs by retrieving relevant, up-to-date information from external knowledge sources and using this retrieved information to ground the LLM's generation, ensuring factual accuracy.)
