Clinical deployment of AI in medicine raises practical and ethical questions about who is accountable when decisions affect patient care. Evidence from policy bodies and researchers shows that accountability is diffuse, involving vendors, clinicians, health systems, and regulators. The World Health Organization recommends governance frameworks that assign responsibility across these actors to protect patient safety and equity. Research led by Ziad Obermeyer (University of California, Berkeley) exposed how a widely used commercial health algorithm produced racially biased outcomes by using healthcare cost as a proxy for medical need, illustrating how design choices can shift harms onto marginalized patients. The finding demonstrates that accountability cannot rest solely on frontline clinicians when system design and data selection drive errors.
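The proxy mechanism is easy to illustrate. The following sketch uses entirely synthetic data (every variable, group label, and number here is hypothetical, not drawn from the study or the commercial algorithm) to show how ranking patients by observed cost rather than by underlying need can under-select a group that incurs lower costs for the same level of illness:

```python
# Synthetic illustration of cost-as-proxy bias. All data are simulated;
# this is a sketch of the mechanism, not the audited commercial system.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True health need is identically distributed in both groups.
group = rng.integers(0, 2, size=n)        # 0 = majority, 1 = marginalized
need = rng.normal(loc=5.0, scale=1.0, size=n)

# Assumed access gap: group 1 incurs lower cost for the same need
# (e.g., fewer visits and less billing due to barriers to care).
access = np.where(group == 1, 0.6, 1.0)
cost = need * access + rng.normal(scale=0.5, size=n)

# A risk score trained on cost ranks patients by cost; using cost
# directly as the score shows the effect of the proxy choice.
threshold = np.quantile(cost, 0.9)        # top 10% flagged for extra care
flagged = cost >= threshold

for g in (0, 1):
    mask = group == g
    print(f"group {g}: mean need {need[mask].mean():.2f}, "
          f"flagged rate {flagged[mask].mean():.1%}")
# Despite equal need, group 1 is flagged far less often.
```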
Responsibility and oversight
Regulatory agencies such as the U.S. Food and Drug Administration set standards for the safety and validation of clinical algorithms but operate within national legal limits and resources. Professional bodies, including the American Medical Association, emphasize that clinicians retain a duty of care when using decision-support tools, even when those tools are labeled as advisory. Ethics scholars such as Luciano Floridi (University of Oxford) argue that moral responsibility in AI should be shared and traceable, combining technical transparency with institutional policies. Achieving meaningful transparency requires documentation of training data, reporting of performance across population subgroups, and mechanisms for redress when harm occurs.
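As one concrete form such subgroup reporting can take, the sketch below computes a discrimination metric (AUC) and a calibration metric (Brier score) per subgroup. It assumes an already fitted scikit-learn-style binary classifier `model` and NumPy arrays `X`, `y`, and `subgroup`; these names are illustrative, not part of any standard API or regulatory template:

```python
# Minimal subgroup audit sketch, assuming a fitted scikit-learn-style
# classifier and NumPy arrays. Illustrative only, not a compliance tool.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

def audit_by_subgroup(model, X, y, subgroup):
    """Report discrimination (AUC) and calibration (Brier) per subgroup."""
    scores = model.predict_proba(X)[:, 1]   # predicted probability of outcome
    report = {}
    for g in np.unique(subgroup):
        mask = subgroup == g
        report[g] = {
            "n": int(mask.sum()),
            "auc": roc_auc_score(y[mask], scores[mask]),
            "brier": brier_score_loss(y[mask], scores[mask]),
        }
    return report
```

Large gaps in AUC or Brier score between subgroups are the kind of evidence that documentation and redress mechanisms would need to surface before and after deployment.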
Causes and consequences
Causes of accountability gaps include algorithmic opacity, proprietary commercial incentives, uneven data quality, and regulatory lag. When AI systems are opaque or validated on non-representative data, clinical decisions can perpetuate existing disparities and erode trust in health systems. Consequences include misdiagnosis, unequal access to care, and legal disputes that strain clinician–patient relationships. Cultural and jurisdictional differences matter because social determinants of health, data governance norms, and legal liability regimes vary between countries, making a one-size-fits-all accountability model unsuitable. In low-resource settings, recommendations from algorithms trained on high-income populations can be particularly harmful.
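One lightweight heuristic a local team might use to detect such population mismatch is a classifier two-sample test: train a probe model to distinguish the development population from the local deployment population, and treat a probe AUC well above 0.5 as a warning sign. The function name and inputs below are assumptions for illustration, and the check is a screening heuristic, not a validation standard:

```python
# Covariate-shift screen, assuming feature matrices `X_dev` (development
# data) and `X_local` (deployment-site data) with identical columns.
# A sketch of the classifier two-sample test heuristic, not a standard.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def shift_auc(X_dev, X_local):
    """Cross-validated AUC of a probe separating the two populations.

    ~0.5 suggests similar feature distributions; values near 1.0 suggest
    the local population differs sharply from the development data.
    """
    X = np.vstack([X_dev, X_local])
    y = np.concatenate([np.zeros(len(X_dev)), np.ones(len(X_local))])
    probe = LogisticRegression(max_iter=1000)
    return cross_val_score(probe, X, y, cv=5, scoring="roc_auc").mean()
```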
Closing these gaps requires multi-level action: enforceable regulation, vendor transparency, clinician training, and patient-centered governance that recognizes community values. Combining technical audits, institutional oversight, and clear legal responsibility can make AI-driven clinical decisions ethically accountable to both patients and providers while acknowledging the complex social contexts in which medicine is practiced.