Financial supervisors and regulators are increasingly deploying AI to enhance oversight, automate stress tests, and improve model validation. Over five days, this intensive program examines AI for generating synthetic stress scenarios, monitoring systemic anomalies, strengthening compliance workflows, and establishing ethical control frameworks. Participants work through real-world case studies drawn from the BIS, the FSB, and the Basel frameworks, gaining hands-on experience in building scalable, explainable, and future-ready supervisory AI systems.
- Synthetic data generation for forward-looking stress-testing
- AI-based risk scoring and systemic oversight (SupTech use cases)
- Unsupervised ML for stress test model validation and drift detection
- NLP for regulatory reporting, risk event detection, and compliance
- Agent-based AI simulations for contagion & resilience testing
- AI governance frameworks, model auditability & ethical transparency
- AI-empowered liquidity and capital stress test tooling aligned with Basel III & CCAR
- Integration of MLOps in regulatory AI implementation
- Sustainable AI considerations: energy efficiency in stress testing
- Generate synthetic stress scenarios with AI-driven modeling
- Use unsupervised ML to validate and monitor supervisory models
- Implement NLP pipelines for regulatory risk reporting
- Design agent-based simulations for bank contagion analysis
- Integrate MLOps for efficient model deployment and control
- Apply explainable AI and governance standards in regulatory context
- Align AI stress-testing with Basel III / CCAR frameworks
- Factor in sustainability and energy usage in AI strategies
- Central bank supervisors and prudential regulatory teams
- Financial stability units at BIS, FSB, IMF, ESRB
- RegTech/SupTech professionals building AI oversight tools
- Bank model validators, risk control, and compliance officers
- Consultants and auditors in financial supervision and stress testing
- Live coding labs: Python, TensorFlow, agent-based simulation
- Case studies: BIS SupTech, FSB oversight tools, Basel II/III stress regimes
- Hands-on synthetic data and scenario generation
- MLOps governance clinic: pipeline building, audit trails
- Peer reviews and expert governance panels
- Capstone: deploy an AI-backed supervisory tool with dashboard
- Fundamentals of AI-generated synthetic data
- Designing diverse and realistic stress scenarios
- Lab: Build synthetic stress datasets with GAN/VAE (see the sketch after this block)
- Discussion: limitations, bias, and regulatory validation
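For orientation before the Day 1 lab, the following is a minimal sketch of the VAE route to synthetic scenario generation, assuming TensorFlow/Keras; the feature count, network sizes, and random training data are illustrative placeholders rather than course material.

```python
# Minimal VAE sketch for tabular stress-scenario data (illustrative only).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

N_FEATURES = 6   # e.g. GDP growth, unemployment, rates, spreads (hypothetical)
LATENT_DIM = 2

class VAE(tf.keras.Model):
    """Learns a latent distribution over historical macro-financial
    observations and samples new scenarios from it."""

    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.Sequential([
            layers.Dense(32, activation="relu"),
            layers.Dense(2 * LATENT_DIM),  # mean and log-variance
        ])
        self.decoder = tf.keras.Sequential([
            layers.Dense(32, activation="relu"),
            layers.Dense(N_FEATURES),
        ])

    def call(self, x):
        mean, logvar = tf.split(self.encoder(x), 2, axis=-1)
        eps = tf.random.normal(tf.shape(mean))
        z = mean + tf.exp(0.5 * logvar) * eps          # reparameterisation trick
        recon = self.decoder(z)
        # KL divergence added as a regularisation term on top of the
        # reconstruction loss supplied at compile time.
        kl = -0.5 * tf.reduce_mean(1 + logvar - tf.square(mean) - tf.exp(logvar))
        self.add_loss(kl)
        return recon

# Train on standardised historical data (random placeholder here).
x_train = np.random.randn(1000, N_FEATURES).astype("float32")
vae = VAE()
vae.compile(optimizer="adam", loss="mse")
vae.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)

# Sample synthetic stress scenarios from the latent prior.
z = tf.random.normal((10, LATENT_DIM))
synthetic_scenarios = vae.decoder(z).numpy()
print(synthetic_scenarios.shape)  # (10, N_FEATURES)
```

A GAN-based generator follows the same overall pattern of fitting a generative model to historical observations and sampling from it; the discussion item above covers why sampled scenarios still require validation before regulatory use.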
- Overview of unsupervised anomaly detection
- Identifying model drift in supervisory frameworks
- Lab: Deploy unsupervised ML on stress-test outputs (see the sketch after this block)
- Case study: detecting drift in capital projection modeling
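As a preview of the Day 2 lab, here is a minimal sketch of unsupervised monitoring of stress-test outputs, assuming scikit-learn and SciPy; the "capital ratio" samples are random placeholders, and the contamination and p-value thresholds are illustrative choices rather than supervisory standards.

```python
# Minimal sketch: anomaly and drift checks on stress-test model outputs.
import numpy as np
from sklearn.ensemble import IsolationForest
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference run of the supervisory model vs. the latest run (placeholders).
reference_outputs = rng.normal(loc=10.5, scale=1.0, size=(500, 1))   # e.g. CET1 %
latest_outputs = rng.normal(loc=9.8, scale=1.3, size=(500, 1))

# 1) Anomaly detection: flag individual projections that look inconsistent
#    with the reference distribution.
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(reference_outputs)
anomaly_flags = detector.predict(latest_outputs)          # -1 = anomalous
print("anomalous projections:", int((anomaly_flags == -1).sum()))

# 2) Drift detection: two-sample Kolmogorov-Smirnov test between runs.
stat, p_value = ks_2samp(reference_outputs.ravel(), latest_outputs.ravel())
if p_value < 0.01:
    print(f"distribution drift detected (KS={stat:.3f}, p={p_value:.4f})")
```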
- NLP for extracting risk events from regulatory filings
- Automated compliance checking with text analytics
- Lab: Build reporting extraction pipelines with spaCy (see the sketch after this block)
- Governance: audit trails and report automation
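The Day 3 lab can start from a pattern like the one below: a minimal rule-assisted extraction sketch, assuming spaCy with the en_core_web_sm model installed; the risk-term list and the filing excerpt are invented for illustration.

```python
# Minimal sketch: extract risk-event phrases and key entities from filing text.
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")

RISK_TERMS = ["liquidity shortfall", "capital breach", "covenant violation",
              "operational incident", "credit downgrade"]
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
matcher.add("RISK_EVENT", [nlp.make_doc(t) for t in RISK_TERMS])

text = ("The institution reported a liquidity shortfall in Q3 and disclosed "
        "a credit downgrade by one agency affecting EUR 1.2 billion of notes.")
doc = nlp(text)

# Matched risk-event phrases plus organisations/amounts from spaCy's NER.
events = [doc[start:end].text for _, start, end in matcher(doc)]
entities = [(ent.text, ent.label_) for ent in doc.ents
            if ent.label_ in ("ORG", "MONEY")]
print("risk events:", events)
print("entities:", entities)
```

In practice the phrase patterns would be curated with supervisory experts and combined with the model's entity output to populate structured reporting templates.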
- AI-driven contagion simulations: theory and practice
- Crafting agent behaviors for shock modeling
- Lab: Prototype contagion model with Mesa (see the sketch after this block)
- Basel III/CCAR alignment in scenario calibration
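The Day 4 lab builds contagion logic along the lines of the sketch below, written here in plain Python rather than Mesa so it does not pin a particular Mesa version; the balance sheets, exposure network, and recovery rate are illustrative assumptions.

```python
# Minimal agent-based interbank contagion sketch (framework-free).
import random

class BankAgent:
    """A bank with loss-absorbing capital and interbank claims on others."""
    def __init__(self, name, capital, exposures):
        self.name = name
        self.capital = capital          # loss-absorbing capital
        self.exposures = exposures      # {counterparty_name: exposure_amount}
        self.failed = False

    def absorb_loss(self, loss):
        self.capital -= loss
        if self.capital <= 0:
            self.failed = True

def run_contagion(banks, initial_failure, recovery_rate=0.4):
    """Propagate losses from an initial failure until no new bank fails."""
    banks[initial_failure].failed = True
    newly_failed = {initial_failure}
    while newly_failed:
        next_round = set()
        for bank in banks.values():
            if bank.failed:
                continue
            # Loss = unrecovered share of exposures to banks that just failed.
            loss = sum(bank.exposures.get(f, 0.0) * (1 - recovery_rate)
                       for f in newly_failed)
            bank.absorb_loss(loss)
            if bank.failed:
                next_round.add(bank.name)
        newly_failed = next_round
    return [b.name for b in banks.values() if b.failed]

random.seed(1)
names = [f"bank_{i}" for i in range(10)]
banks = {
    n: BankAgent(n, capital=random.uniform(5, 15),
                 exposures={m: random.uniform(0, 8) for m in names if m != n})
    for n in names
}
print("failed banks:", run_contagion(banks, "bank_0"))
```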
- MLOps infrastructure for regulatory models
- Explainable AI frameworks and ethical transparency
- Sustainable AI: minimizing carbon footprint in stress testing
- Lab: Build a supervisory model governance pipeline (see the sketch after this block)
- Capstone: Present AI stress-testing tool + executive dashboard
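To make the governance clinic concrete, the following is a minimal standard-library sketch of one audit-trail step for a supervisory model governance pipeline; the file names, model metadata, and approver field are hypothetical placeholders.

```python
# Minimal sketch: append an audit record for a model release (illustrative).
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path):
    """Content hash so any later change to the artifact is detectable."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def log_model_release(artifact_path, registry_path="model_audit_log.jsonl",
                      model_name="stress_test_model", version="0.1.0",
                      approver="model-risk-committee"):
    """Append an audit record for a model release to a write-once style log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "version": version,
        "artifact_sha256": sha256_of(artifact_path),
        "approver": approver,
    }
    with open(registry_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: register a serialized model file before deployment.
Path("model.bin").write_bytes(b"placeholder model artifact")
print(log_model_release("model.bin"))
```

Appending content hashes and approvals to a log like this is one simple way to give auditors a verifiable trail of what was deployed and when.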
- Group & Corporate Discounts: Available for companies enrolling multiple participants to help maximize ROI.
- Individual Discounts: Offered to self-sponsored participants who pay in full and upfront.
- Registration Process: Corporate nominations must go through the client’s HR or Training department; self-nominations must be prepaid via the “payment by self” option.
- Confirmation: All registrations are subject to DIXONTECH’s approval and seat availability.
- Refunds: Provided in case of course cancellation or no seat availability.
- Tax Responsibility: Clients are responsible for any local taxes in their country.