Which option best BEST represents a combination of quantitative and qualitative metrics that can be used to comprehensively evaluate AI transparency?
The AAISM governance framework emphasizes that AI transparency cannot be evaluated using only technical statistics; it requires a combination of quantitative and qualitative metrics. The best pairing is ethical impact assessments (qualitative) with user feedback metrics (quantitative and perception-based). Availability and accuracy metrics measure performance, not transparency. Explainability reports and bias metrics are useful but still technical and limited. Comprehensive evaluation of transparency requires consideration of ethical dimensions and stakeholder perspectives, which is achieved through ethical impact analysis and user feedback.
AAISM Study Guide -- AI Governance and Program Management (Transparency and Accountability)
ISACA AI Security Management -- Measuring Ethical AI Practices
Which of the following MOST effectively secures ongoing stakeholder support for AI initiatives?
AAISM governance guidance emphasizes that stakeholder buy-in is sustained when the measurable value of AI initiatives is clearly communicated. Value demonstrations include:
* improved efficiency
* reduced cost
* reduced risk
* business growth
Training (B) and risk optimization (C) are important but do not guarantee stakeholder support. A roadmap (D) guides planning but does not secure buy-in.
============================================
Which of the following is the GREATEST risk inherent to implementing generative AI?
The AAISM framework identifies intellectual property (IP) violations as the most significant inherent risk in deploying generative AI. These systems often rely on large-scale internet data for training, which may inadvertently contain copyrighted or proprietary material. This creates legal and reputational exposure when outputs reproduce or reference protected content. While employee training gaps, asset vulnerabilities, and ROI concerns are relevant risks, they are not inherent to generative models themselves. The greatest inherent risk tied directly to generative AI adoption is the possibility of violating intellectual property rights.
AAISM Study Guide -- AI Risk Management (Generative AI Risks and Legal Exposure)
ISACA AI Security Management -- Copyright and IP Concerns in Generative AI
How can an organization BEST protect itself from payment diversions caused by deepfake attacks impersonating management?
AAISM's risk management framework stresses that the most effective defense against deepfake-enabled fraud, such as payment diversion, is resilient payment approval processes. This includes multi-step verification, segregation of duties, and independent confirmations for high-value transactions. Employee training, policies, or limiting payment frequency may reduce exposure, but they cannot guarantee prevention. Only process-based controls enforce structural safeguards that prevent fraudulent instructions from being executed, even if a deepfake impersonation attempt is successful.
AAISM Exam Content Outline -- AI Risk Management (Fraud and Deepfake Risk)
AI Security Management Study Guide -- Transactional Resilience and Controls
An attacker crafts inputs to a large language model (LLM) to exploit output integrity controls. Which of the following types of attacks is this an example of?
According to the AAISM framework, prompt injection is the act of deliberately crafting malicious or manipulative inputs to override, bypass, or exploit the model's intended controls. In this case, the attacker is targeting the integrity of the model's outputs by exploiting weaknesses in how it interprets and processes prompts. Jailbreaking is a subtype of prompt injection specifically designed to override safety restrictions, while evasion attacks target classification boundaries in other ML contexts, and remote code execution refers to system-level exploitation outside of the AI inference context. The most accurate classification of this attack is prompt injection.
AAISM Exam Content Outline -- AI Technologies and Controls (Prompt Security and Input Manipulation)
AI Security Management Study Guide -- Threats to Output Integrity
Julie
4 days agoCarrol
11 days agoGiovanna
19 days agoKing
26 days agoRebeca
1 month agoDolores
1 month agoNovella
2 months agoRebecka
2 months agoYolando
2 months agoOlene
2 months agoLayla
3 months agoMarylyn
3 months agoElin
3 months agoMaddie
3 months agoDaniel
4 months agoTien
4 months agoLavonna
4 months agoMaynard
4 months agoTerry
5 months agoVirgina
5 months agoCarry
5 months agoStephaine
5 months agoJesus
5 months agoAbraham
5 months agoValene
6 months agoGracie
6 months agoJanine
6 months ago