Cloud Kicks has implemented an Employee Agent to answer benefits questions for its employees. How should a Platform Administrator prevent the agent from responding to staff members' questions about the CEO's private health plan and benefits?
In the context of Agentforce AI, grounding and data security are paramount. Salesforce AI agents, including Employee Agents, respect the org's existing security model. The most effective way to prevent an agent from accessing or disclosing sensitive information, such as the CEO's private health plan, is therefore to leverage Field-Level Security (FLS) and user permissions. When an agent grounds its response, it only considers data that the running user (or the agent's service user) has permission to view. If the CEO's health records are stored in fields or records that are restricted via FLS or sharing settings from the profiles and permission sets in the agent's context, the agent simply will not "see" that data during its retrieval phase. Modifying instructions and guardrails (Option C) adds a layer of safety, but it is not as foolproof as the underlying security architecture. Training the agent (Option D) is not a standard configuration step for preventing access to specific records in a production environment. Maintaining a robust security model is therefore the critical prerequisite for ensuring that AI agents provide accurate, safe responses without leaking confidential business information.
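As a rough illustration of how FLS governs what a running user can read, Apex provides `Security.stripInaccessible`, which removes fields the current user cannot access before any downstream processing. This is a minimal sketch, not agent-specific code; the `HealthPlan__c` object and `PlanDetails__c` field are hypothetical names for this example.

```apex
// Hypothetical custom object and field, for illustration only.
List<HealthPlan__c> plans = [SELECT Id, Name, PlanDetails__c FROM HealthPlan__c];

// Strip any fields the running user lacks READ access to (enforces FLS).
SObjectAccessDecision decision = Security.stripInaccessible(
    AccessType.READABLE,
    plans
);

// Only FLS-visible data survives; an agent grounding on this user's
// context is similarly limited to what the security model exposes.
List<HealthPlan__c> visiblePlans = decision.getRecords();
```

The same principle applies declaratively: if the agent's user context lacks read access to a field via its profile or permission sets, no retrieval step can surface that field's contents.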