The adoption of deep learning and other black-box models in credit and risk raises concerns about transparency and compliance. This course equips practitioners to implement explainable AI (XAI) solutions, leveraging SHAP, LIME, counterfactuals, and rule extraction, to justify model decisions in underwriting and risk scoring. Delegates will explore best practices, regulatory trends (e.g., the US Equal Credit Opportunity Act (ECOA) and the EU AI Act), and explanation strategies tailored for credit teams and regulators.
- Introduction to XAI in credit and risk modeling
- SHAP, LIME, and counterfactual explanations
- Rule-extraction and surrogate models for transparency
- Interpretable scorecards vs black-box models
- Fairness metrics and bias mitigation in explainable workflows
- AI compliance and audit trails under ECOA, Basel, and AI Act
- Visual and narrative explanation strategies for stakeholders
- Integrating XAI with loan origination systems
- Alerting and monitoring for changing risk patterns
- Governance frameworks for model approval and retraining
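To give a flavor of the SHAP topic above, here is a minimal from-scratch Shapley value computation for a toy credit scoring function. In practice you would use the `shap` library against a trained model; the scoring function, weights, and baseline values below are illustrative assumptions only.

```python
from itertools import combinations
from math import factorial

# Toy additive credit scoring function over three features (illustrative):
# income (in $1000s), utilization (0-1), delinquencies (count).
def score(income, utilization, delinquencies):
    return 0.004 * income - 0.3 * utilization - 0.15 * delinquencies

# Baseline values stand in for the "average applicant" used to marginalize
# out absent features (a simplification of SHAP's background expectation).
BASELINE = {"income": 50, "utilization": 0.5, "delinquencies": 1}

def shapley_values(applicant):
    """Exact Shapley values by enumerating every feature coalition."""
    features = list(applicant)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_f = {**BASELINE, **{g: applicant[g] for g in coalition}, f: applicant[f]}
                without_f = {**BASELINE, **{g: applicant[g] for g in coalition}}
                total += weight * (score(**with_f) - score(**without_f))
        phi[f] = total
    return phi

applicant = {"income": 80, "utilization": 0.9, "delinquencies": 3}
phi = shapley_values(applicant)
# Efficiency property: attributions sum to score(applicant) - score(baseline).
print(phi)
```

Brute-force enumeration is exponential in the number of features; libraries like `shap` use model-specific shortcuts (e.g., for tree ensembles) to make this tractable on real scorecards.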
- Explain model outputs using SHAP, LIME, and counterfactuals
- Compare scorecard-based models with black-box alternatives
- Detect and mitigate bias within explainable workflows
- Generate narrative explanations for credit decision summaries
- Align XAI practices with regulatory requirements and audits
- Embed explainability into loan origination UIs and systems
- Implement model monitoring for drift and explanation deviations
- Build a governance playbook for XAI in credit systems
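The bias-detection outcome above can be sketched with hand-rolled fairness metrics. The predictions, labels, and group assignments below are synthetic, and the helper names are illustrative, not a specific library's API.

```python
# Minimal demographic parity and equal opportunity checks for a binary
# approve/decline model; all data below is synthetic and illustrative.

def selection_rate(preds):
    """Fraction of applicants approved (demographic parity compares this)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Approval rate among truly creditworthy applicants (equal opportunity)."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

# predictions (1 = approve), true repayment labels, protected group flag
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

by_group = {g: [i for i, gg in enumerate(group) if gg == g] for g in set(group)}

for g, idx in sorted(by_group.items()):
    p = [preds[i] for i in idx]
    y = [labels[i] for i in idx]
    print(g, "selection rate:", selection_rate(p), "TPR:", true_positive_rate(p, y))

# Demographic parity difference: gap in selection rates between groups.
# Equal opportunity difference: gap in TPRs between groups.
```

In production workflows a dedicated library such as Fairlearn typically computes these per-group metrics; the point here is only that both definitions reduce to simple per-group rates.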
- Credit risk analysts and model validation teams
- Data scientists in financial services
- Lending platforms and fintech decision teams
- Compliance, audit, and regulatory reporting specialists
- Model governance and AI ethics officers
- Credit operations, underwriting, and risk managers
- Instructor-led walkthroughs of XAI techniques
- Hands-on SHAP/LIME and counterfactual labs
- Scorecard vs black-box model comparison exercises
- Bias testing and fairness evaluation workshops
- Integration labs with loan origination systems
- Case studies from global banks under ECOA & EU AI Act
- Peer groups to draft XAI compliance and governance plans
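The counterfactual labs mentioned above can be previewed with a minimal greedy "what-if" search: starting from a declined applicant, repeatedly apply the actionable feature change that most raises the approval probability until the decision flips. The scorecard weights, step sizes, and threshold are all illustrative assumptions.

```python
import math

# Illustrative logistic scorecard; weights, bias, and threshold are made up.
WEIGHTS = {"income": 0.04, "utilization": -3.0, "delinquencies": -0.8}
BIAS = -1.0
THRESHOLD = 0.5

def approve_prob(x):
    z = BIAS + sum(WEIGHTS[f] * x[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Allowed single-step changes per feature (actionability constraints);
# a zero step means the feature cannot be changed by the applicant.
STEPS = {"income": 5, "utilization": -0.05, "delinquencies": 0}

def counterfactual(x, max_steps=50):
    """Greedy search: apply the step that raises approval probability most
    until the decision flips, or give up after max_steps."""
    x = dict(x)
    for _ in range(max_steps):
        if approve_prob(x) >= THRESHOLD:
            return x
        x = max(
            ({**x, f: x[f] + s} for f, s in STEPS.items() if s),
            key=approve_prob,
        )
    return None

applicant = {"income": 30, "utilization": 0.9, "delinquencies": 2}
cf = counterfactual(applicant)
print("original p:", round(approve_prob(applicant), 3))
if cf:
    print("counterfactual:", cf, "p:", round(approve_prob(cf), 3))
```

Greedy search is the simplest possible strategy; dedicated counterfactual methods additionally optimize for sparsity and plausibility of the suggested changes.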
- Overview: need for explainability under modern AI adoption
- Key XAI methods: SHAP, LIME, counterfactual, rule extraction
- Lab: SHAP explanations on a simple credit scoring model
- Case study: regulatory dilemmas in black-box lending
- Building interpretable scorecards (logistic, decision trees)
- Creating surrogate models to explain complex classifiers
- Lab: rule-extraction on an XGBoost model
- Discussion: transparency vs performance tradeoffs
- Defining fairness metrics: demographic parity, equal opportunity
- Using XAI to uncover biased feature importance
- Lab: detect and mitigate bias using threshold adjustments
- Group exercise: fairness audit report for a credit model
- Translating model outputs into consumer-friendly language
- Generating counterfactual 'what-if' explanations
- Lab: build explanation UI sample for loan officer
- Workflow: embedding XAI insights into origination pipelines
- Regulatory lens: ECOA, Basel Principles, EU AI Act
- Explanation logging, model validation, and audit metrics
- Lab: build a drift-monitoring and explanation-check pipeline
- Final group task: compose XAI governance playbook
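The drift-monitoring lab above could start from something as simple as a Population Stability Index (PSI) check comparing a baseline score distribution against the current one. The bin edges, sample scores, and thresholds below are illustrative assumptions, not prescribed values.

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between a baseline (expected) and a
    current (actual) score distribution over shared bin edges."""
    def frac(scores, lo, hi):
        n = sum(1 for s in scores if lo <= s < hi)
        # Floor at a tiny count to avoid log(0) on empty bins.
        return max(n, 0.5) / len(scores)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

# Synthetic score samples: baseline population vs a shifted current one.
baseline = [0.2, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8]
current  = [0.4, 0.5, 0.55, 0.6, 0.6, 0.65, 0.7, 0.75, 0.8, 0.9]
edges = [0.0, 0.25, 0.5, 0.75, 1.01]

value = psi(baseline, current, edges)
# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
print("PSI:", round(value, 3))
```

A production pipeline would run this per scoring feature as well as on the final score, and pair it with explanation-level checks (e.g., tracking SHAP importance rankings over time) before triggering retraining.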
- Group & Corporate Discounts: Available for companies enrolling multiple participants to help maximize ROI.
- Individual Discounts: Offered to self-sponsored participants who pay in full and upfront.
- Registration Process: Corporate nominations must go through the client’s HR or Training department; self-nominations must be prepaid via the “payment by self” option.
- Confirmation: All registrations are subject to DIXONTECH’s approval and seat availability.
- Refunds: Provided in case of course cancellation or no seat availability.
- Tax Responsibility: Clients are responsible for any local taxes in their country.