An IJCAI 2026 workshop.
Artificial intelligence models are increasingly developed for use in clinical contexts, including diagnosis, risk prediction, treatment planning, and patient interaction. Alongside their potential benefits, such systems raise substantial ethical challenges, particularly with respect to fairness, bias, accountability, transparency, privacy, and the impact of AI-supported decisions on patients and clinical workflows.
In clinical settings, ethical concerns are tightly intertwined with the robustness and resilience of AI systems to data shifts, noise, adversarial conditions, and deployment-time uncertainty. Failures in algorithmic resilience can disproportionately affect vulnerable patient groups and undermine trust, accountability, and patient safety. At the same time, the increasing use of synthetic data to address data scarcity, privacy constraints, and representation gaps introduces new ethical questions regarding bias propagation, fidelity, and downstream clinical validity.
ETHICAIA aims to provide a focused forum for AI researchers to engage deeply with ethical issues in clinical AI applications, across AI paradigms and methodological approaches. Rather than treating ethics as an add-on or a post-hoc discussion, the workshop emphasizes ethically informed AI research, where questions of fairness, responsibility, and societal impact are integral to model design, evaluation, and intended use.
The workshop is explicitly not limited to a specific AI subfield (such as NLP) and welcomes contributions spanning machine learning, multimodal models, computer vision, decision-making systems, and foundation models in healthcare. By creating a dedicated space for this discussion, ETHICAIA complements existing IJCAI tracks and encourages interaction among researchers who may otherwise encounter ethical questions only at the margins of their work.
Topics of interest include (but are not limited to):
– Fairness and bias in clinical AI systems
– Ethical challenges in AI-based clinical decision support
– Robustness and resilience of clinical AI systems, e.g., to data distribution shifts, missing or corrupted data, adversarial inputs, and real-world deployment conditions
– Representation gaps, data imbalance, and underserved patient populations
– Bias and uncertainty in multimodal and foundation models for healthcare
– Accountability, explainability, and contestability in clinical AI
– Ethical evaluation beyond accuracy: metrics, benchmarks, and validation practices
– Human–AI interaction and shared decision-making in medicine
– Privacy, consent, and secondary use of clinical data
– Ethical implications of synthetic data generation in healthcare, e.g., fairness, representational fidelity, privacy guarantees, bias amplification, and evaluation of models trained on synthetic data
– Stress testing, auditing, and failure analysis of clinical AI models
– Regulatory, legal, and governance challenges for clinical AI
– Case studies of real-world clinical AI deployments and failures
– Methodological frameworks for ethical-by-design clinical AI