DHC WEB SESSIONS

Strategy & Concept


Do AI use cases already exist in your organization?


Has the AI use case been assessed with regard to patient safety, product quality, and data integrity?


Is there a structured process for the evaluation and testing of AI ideas?



Governance & Organization


Are clear responsibilities defined for development, operation, and monitoring of the AI system?


Is the use of AI organizationally and procedurally embedded in existing IT, quality, and validation structures?


Is the AI system's level of autonomy defined, including where human review is mandatory?



Data & Model Quality


Are the origin, purpose, and quality of the data used for AI traceable?


Are data‑ or model‑related risks (e.g. bias, drift) consciously addressed?


Are AI results and decisions critically challenged and validated by subject matter experts (critical thinking)?
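To make the drift question above concrete: a minimal, hypothetical sketch of how data drift between training and production data can be quantified, here using the Population Stability Index (PSI), one commonly used indicator. Function names, bin count, and thresholds are illustrative assumptions, not prescribed by this checklist.

```python
# Illustrative sketch only: Population Stability Index (PSI) as one way to
# quantify drift between a reference (training) sample and a production sample.
import math

def psi(reference, production, bins=10):
    """PSI over quantile bins derived from the reference data (hypothetical helper)."""
    ref = sorted(reference)
    # Bin edges at reference quantiles; -inf/+inf catch out-of-range production values.
    edges = [-math.inf] + [ref[int(len(ref) * i / bins)] for i in range(1, bins)] + [math.inf]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for b in range(bins):
                if edges[b] <= x < edges[b + 1]:
                    counts[b] += 1
                    break
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = fractions(reference), fractions(production)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

# Rule-of-thumb thresholds often cited in practice (assumption, not guidance):
#   PSI < 0.1 -> stable; 0.1-0.25 -> moderate drift; > 0.25 -> significant drift
```

Such a metric only flags a shift; whether the shift matters for patient safety or product quality still requires review by subject matter experts, as the questions above demand.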



Suppliers & Transparency


Is the division of responsibility between suppliers and operators clearly defined and reflected in day-to-day operations?


Is there sufficient transparency regarding changes to models, data, or AI functions made by suppliers?



Lifecycle & Operation


Is the AI system considered across its entire lifecycle (not only until go‑live)?


Are performance indicators and monitoring criteria defined for ongoing operation?


Are there clear rules defining when adjustments, re‑training, or re‑validation are required?


Is the IT infrastructure suitable for AI operation (e.g. data storage, model versioning, deployment)?
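The monitoring questions above can be sketched as a simple rule: compare a rolling performance indicator against predefined acceptance limits and flag when investigation or re-training/re-validation is due. The class name, limits, and window size are hypothetical examples chosen for illustration.

```python
# Illustrative sketch only: a minimal monitoring rule tying a rolling
# performance indicator to predefined warn/action limits. All thresholds
# here are assumptions for demonstration, not validated acceptance criteria.
from collections import deque

class PerformanceMonitor:
    def __init__(self, warn_limit=0.90, action_limit=0.85, window=50):
        self.warn_limit = warn_limit      # below this rolling mean: investigate
        self.action_limit = action_limit  # below this rolling mean: re-train / re-validate
        self.scores = deque(maxlen=window)

    def record(self, score):
        """Record one outcome score (e.g. per-batch accuracy) and classify the trend."""
        self.scores.append(score)
        mean = sum(self.scores) / len(self.scores)
        if mean < self.action_limit:
            return "ACTION: trigger re-training and re-validation"
        if mean < self.warn_limit:
            return "WARN: investigate model performance"
        return "OK"
```

In a regulated setting, the limits themselves would be defined and approved as part of the validation documentation, not hard-coded ad hoc as in this sketch.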



Regulatory & Competence


Are transparency, traceability, and fairness of AI systematically considered?


Has the relevance of regulatory requirements (e.g. EU AI Act, GDPR, GxP) for the use of AI been assessed?