You ask Microsoft 365 Copilot to create a report based on information from the web. You verify the response and discover that some information is fictional.
What is this an example of?
This scenario is an example of fabrication, which is commonly referred to in generative AI contexts as a hallucination. Fabrication occurs when an AI system generates information that appears credible but is factually incorrect, invented, or unsupported by verifiable sources.
According to Microsoft AI Business Professional guidance, large language models predict text based on patterns learned during training. They do not "know" facts in the human sense. As a result, when asked to generate reports using web-based information, the model may produce plausible-sounding but fictional details if sufficient grounding or reliable sources are not provided.
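The verification step described in the scenario can be sketched as a toy check: compare each generated claim against the source material it was supposedly drawn from, and flag anything with no support. This is only an illustration of the idea, not how Copilot or any real grounding pipeline works; the function name, the keyword-matching heuristic, and the sample data are all invented for this sketch.

```python
# Toy illustration (not a real Copilot API): flag generated claims whose
# key terms never appear in the supplied sources -- a crude stand-in for
# the human verification step that caught the fabricated details.

def find_unsupported_claims(claims, sources):
    """Return claims containing a key term absent from every source."""
    unsupported = []
    for claim in claims:
        # Crude heuristic: treat words longer than 4 characters as key terms.
        terms = {w.lower().strip(".,") for w in claim.split() if len(w) > 4}
        # A claim is unsupported if any key term appears in no source.
        if any(all(t not in s.lower() for s in sources) for t in terms):
            unsupported.append(claim)
    return unsupported

sources = ["Contoso reported quarterly revenue growth of 12 percent."]
claims = [
    "Contoso revenue grew 12 percent last quarter.",
    "Contoso acquired Fabrikam in 2021.",  # fabricated: not in sources
]
print(find_unsupported_claims(claims, sources))
# → ['Contoso acquired Fabrikam in 2021.']
```

A keyword heuristic like this would miss paraphrased fabrications, which is why the scenario still requires a human to verify the report against the cited sources.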
The other answer choices describe different risks. Deepfake refers specifically to synthetic media such as manipulated images, audio, or video. Overreliance describes a human behavior risk in which users trust AI outputs without verification. Prompt injection is a malicious technique designed to manipulate model behavior. Bias refers to systematic unfairness in outputs.
In this case, the presence of fictional information in the generated report directly aligns with fabrication, making option B the correct answer.