
The role of data and analytics in model risk management

By: Subhashis Manna

Navigating technical and regulatory challenges

Model risk management (MRM) is now a vital pillar in the risk strategies of financial institutions. As reliance on complex models grows, so does the need to manage the risks they pose. This article explores how data and analytics strengthen MRM practices while helping firms navigate evolving regulations like Basel III, IFRS 9, and SS1/23.

The regulatory framework for MRM 

  1. Basel III: Developed by the Basel Committee on Banking Supervision (BCBS), the Basel III framework sets global standards for capital adequacy and risk management. While it doesn’t explicitly require model risk management, it places strong emphasis on robust risk controls, including those governing model use. Banks must show that their models are accurate, with sound validation processes in place. Under the Internal Capital Adequacy Assessment Process (ICAAP), institutions are expected to assess all material risks, including model risk. Those using internal models must support them with back-testing and stress-testing to prove their reliability (a back-testing sketch follows this list). Successful Basel III implementation depends heavily on the quality, consistency, and traceability of data. Weak data governance has led to regulatory action in multiple jurisdictions, particularly when capital models relied on undocumented or untraceable data. Between 2020 and 2023, several fines across the EU and Asia highlighted that meeting Basel III requirements is as much about strong data practices as it is about capital strength.
  2. IFRS 9: The International Financial Reporting Standard 9 (IFRS 9) addresses the accounting for financial instruments, including expected credit losses (ECL). As ECL calculation is model-driven, model accuracy and reliability become critical. Institutions must validate models using out-of-sample testing, scenario analysis, and sensitivity checks (an ECL sensitivity sketch follows this list). Academic studies (e.g., Giesecke et al., Stanford University) have shown that small misestimations in ECL inputs can have disproportionate balance sheet impacts. Hence, IFRS 9 enforcement implicitly demands model governance, continuous validation, and data transparency.
  3. UK’s S166 and SS1/23: In the UK, the Prudential Regulation Authority (PRA) issued supervisory statement SS1/23, outlining clear expectations for model governance, validation across the model lifecycle, performance monitoring, and thorough documentation. When significant concerns arise, the PRA can invoke Section 166 to commission independent reviews. SS1/23 places strong emphasis on model traceability, clear documentation of model lineage, and interpretability. Institutions are expected to track changes in parameter assumptions, justify overrides, and maintain audit trails within their data platforms to ensure transparency and accountability (an audit-trail sketch follows this list).
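
To make the back-testing expectation concrete, the following is a minimal sketch, assuming Python with SciPy, of the Kupiec proportion-of-failures (POF) test, one common way to check whether a VaR model’s exception count is consistent with its stated coverage. Basel III does not prescribe this particular test, and the figures are illustrative.

```python
import math
from scipy.stats import chi2

def kupiec_pof_test(exceptions: int, observations: int, coverage: float = 0.99):
    """Kupiec POF likelihood-ratio test: are the observed VaR exceptions
    consistent with the model's stated coverage (e.g. 99% VaR -> 1% exceptions)?"""
    p = 1.0 - coverage                      # expected exception rate
    n, x = observations, exceptions
    if x == 0:
        lr = -2.0 * n * math.log(1.0 - p)
    else:
        lr = -2.0 * ((n - x) * math.log(1.0 - p) + x * math.log(p)
                     - (n - x) * math.log(1.0 - x / n) - x * math.log(x / n))
    p_value = 1.0 - chi2.cdf(lr, df=1)      # LR is asymptotically chi-squared(1)
    return lr, p_value

# Example: 8 exceptions in 250 trading days against a 99% VaR model
lr, p_value = kupiec_pof_test(exceptions=8, observations=250)
print(f"LR = {lr:.2f}, p-value = {p_value:.4f}")  # low p-value -> coverage rejected
```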
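The IFRS 9 sensitivity point can be shown with simple arithmetic on the one-period simplification ECL = PD x LGD x EAD; the portfolio figures below are invented for illustration.

```python
# One-period simplification: ECL = PD x LGD x EAD. Figures are illustrative.
pd_base, lgd, ead = 0.020, 0.45, 500_000_000    # 2% PD, 45% LGD, 500m exposure

ecl_base = pd_base * lgd * ead                  # 4,500,000
ecl_misstated = (pd_base + 0.005) * lgd * ead   # PD off by just 0.5 percentage points

print(f"Base ECL:         {ecl_base:,.0f}")
print(f"Misstated ECL:    {ecl_misstated:,.0f}")
print(f"Provision impact: {ecl_misstated / ecl_base - 1:.0%}")  # a 25% swing
```

A half-point error in PD moves provisions by a quarter, which is exactly the kind of disproportionate balance sheet effect the studies describe.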
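For the SS1/23 expectations on overrides and audit trails, here is a minimal sketch of an append-only parameter change log; the model identifiers and field names are hypothetical, and real platforms would add access controls and tamper-evidence.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class ParameterChange:
    """One auditable change to a model parameter or override."""
    model_id: str
    parameter: str
    old_value: float
    new_value: float
    justification: str
    approved_by: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def record_change(log_path: str, change: ParameterChange) -> None:
    """Append the change to an append-only JSON-lines audit trail."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(change)) + "\n")

record_change("model_audit.jsonl", ParameterChange(
    model_id="IFRS9-ECL-RETAIL-V3",           # hypothetical identifiers
    parameter="pd_macro_overlay",
    old_value=1.00,
    new_value=1.15,
    justification="Management overlay for deteriorating macro outlook",
    approved_by="model_risk_committee",
))
```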

The role of data and analytics in MRM

The genesis of MRM lies in the availability and reliability of high-quality data. Without trustworthy input data, even the most sophisticated models yield erroneous outputs. In practice, model efficacy—defined as both predictive accuracy and regulatory fitness—rises and falls with the quality, granularity, and completeness of input data.

  1. Data as the foundation of robust MRM:
    Effective model risk management begins with high-quality input data that is accurate, consistent, and truly representative. Poor data quality can lead to biased, unstable, or underperforming models—undermining both decision-making and regulatory compliance. The Basel Committee’s BCBS 239 principles underscore the importance of sound data governance, highlighting attributes like timeliness, completeness, and adaptability—all of which are critical to successful MRM implementation. To ensure data integrity across fragmented systems, institutions are increasingly leveraging Master Data Management (MDM), data lineage tools, and AI-powered anomaly detection (an anomaly-detection sketch follows this list). Notably, a 2022 study published in the Journal of Financial Data Science found that banks with centralised data governance frameworks experienced a marked reduction in model override incidents during internal audits. In parallel, regulatory scrutiny now extends to the traceability of model outputs to well-governed data sources. Shortcomings in this area—particularly in retail credit and securitisation models—have resulted in stricter capital add-ons and binding supervisory conditions.
  2. Model development and validation:
    Model development typically begins with accurate historical data, followed by feature engineering, algorithm selection, and parameter tuning. Increasingly, firms are turning to Bayesian Networks (BNs), particularly in operational and credit risk modelling, due to their ability to combine expert judgment with empirical data in a causal framework. As demonstrated by Neil, Fenton & Tailor (2005), this approach offers greater transparency compared to black-box machine learning models.

    Robust validation involves testing models against holdout datasets, applying stress scenarios, and running sensitivity analyses—often using techniques like Sobol indices (a Sobol sketch follows this list). Leading institutions reinforce these practices with Independent Model Validation Units (IMVUs), which function separately from development teams to ensure objectivity and regulatory compliance.
  3. Stress testing and scenario analysis:
    Stress testing is integral to identifying tail risks and testing model resilience under extreme conditions. Basel guidelines and the EBA stress testing framework require scenario design rooted in both historical crises and forward-looking macroeconomic variables. Scenario engines now leverage simulation models combined with real-time market feeds to generate dynamic outcomes. For instance, Shevchenko & Wüthrich (2009) introduced Bayesian updating techniques for stress environments, allowing posterior adjustments as new data becomes available (a Bayesian-updating sketch follows this list). This reduces reliance on static scenarios and increases adaptability.
  4. Model monitoring and performance metrics:
    Continuous monitoring ensures models stay fit-for-purpose. Institutions track performance metrics such as the Population Stability Index (PSI), characteristic stability, Gini coefficient, and lift. Monitoring also involves data drift and concept drift detection—common in ML-based models. Techniques like Kullback-Leibler divergence or Wasserstein distance are used to compare incoming data distributions with training data (a PSI sketch follows this list). Alerts are generated when drift exceeds predefined tolerance thresholds, triggering revalidation protocols. These analytics are often embedded in enterprise model risk dashboards using cloud-native platforms.
  5. Integration with business processes:
    MRM delivers real value only when it’s embedded into day-to-day business decisions. For instance, IFRS 9 Expected Credit Loss (ECL) models should go beyond financial reporting and be integrated into loan origination systems to guide product pricing and approval limits. Likewise, capital allocation models should help inform treasury decisions, not just serve regulatory reporting. Advanced techniques like agent-based modelling and real-time scenario analysis—such as Monte Carlo simulations of macroeconomic events—are increasingly used to support strategic planning (a Monte Carlo sketch follows this list). Institutions with well-integrated MRM frameworks have reported stronger performance, including higher return-on-risk-weighted assets (RoRWA), according to recent academic studies.
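
The sketches below illustrate several of the techniques named above; all figures, thresholds, and identifiers are invented. First, anomaly detection on model input data, assuming scikit-learn’s IsolationForest as one possible detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feed of loan-level inputs: [balance, interest_rate, ltv]
rng = np.random.default_rng(42)
clean = rng.normal(loc=[250_000, 0.045, 0.70],
                   scale=[80_000, 0.010, 0.10], size=(5_000, 3))
bad_rows = np.array([
    [250_000, 4.5, 0.70],              # rate keyed as a percent, not a decimal
    [2_500_000_000, 0.045, 0.70],      # fat-fingered balance
])
feed = np.vstack([clean, bad_rows])

detector = IsolationForest(contamination=0.001, random_state=0)
flags = detector.fit_predict(feed)     # -1 = anomaly, 1 = normal

print(f"Rows flagged for data-quality review: {np.sum(flags == -1)}")
```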
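Next, a sensitivity-analysis sketch using Sobol indices, assuming the open-source SALib package; the toy "model" is a one-line ECL calculation and the parameter bounds are illustrative:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["pd", "lgd", "ccf"],
    "bounds": [[0.005, 0.05], [0.2, 0.7], [0.5, 1.0]],
}

# Saltelli sampling generates the input grid for Sobol decomposition
param_values = saltelli.sample(problem, 1024)

def ecl_model(x):
    pd_, lgd, ccf = x
    return pd_ * lgd * (ccf * 1_000_000)   # toy one-facility ECL

Y = np.apply_along_axis(ecl_model, 1, param_values)
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")
```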
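Third, the Bayesian updating idea from the stress-testing discussion, shown with the simplest conjugate case: a Gamma prior on a Poisson loss-event rate, updated as stressed observations arrive. This is a toy version of the general approach, not the Shevchenko & Wüthrich method itself.

```python
# Gamma(alpha, beta) prior on the annual loss-event rate lambda;
# a Poisson likelihood gives the posterior Gamma(alpha + sum(k), beta + n).
alpha_prior, beta_prior = 4.0, 2.0     # prior mean rate = alpha/beta = 2 events/yr

observed_counts = [5, 7, 6]            # event counts from three stressed years

alpha_post = alpha_prior + sum(observed_counts)
beta_post = beta_prior + len(observed_counts)

print(f"Prior mean rate:     {alpha_prior / beta_prior:.2f} events/yr")
print(f"Posterior mean rate: {alpha_post / beta_post:.2f} events/yr")  # pulled toward data
```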
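Fourth, a drift-monitoring sketch computing the Population Stability Index between training-time and live score distributions; the 0.25 trigger in the comment is a common industry rule of thumb, not a regulatory figure:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference (training) sample and incoming production data."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # catch out-of-range values
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)         # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return np.sum((a_frac - e_frac) * np.log(a_frac / e_frac))

rng = np.random.default_rng(7)
train_scores = rng.normal(600, 50, 10_000)       # scores at model build
live_scores = rng.normal(585, 55, 10_000)        # drifted production scores

psi = population_stability_index(train_scores, live_scores)
print(f"PSI = {psi:.3f}")                        # > 0.25 is a common revalidation trigger
```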
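Finally, a Monte Carlo sketch feeding simulated macroeconomic shocks into portfolio losses; the linear PD-to-GDP sensitivity is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

# Simulated annual GDP-growth shocks (illustrative distribution)
gdp_shock = rng.normal(loc=-0.02, scale=0.03, size=n_sims)

# Assumed linear link: each -1pp of GDP growth lifts PD by 25% relative
base_pd = 0.02
pd_sim = np.clip(base_pd * (1.0 - 25.0 * gdp_shock), 0.0, 1.0)

lgd, ead = 0.45, 1_000_000_000
losses = pd_sim * lgd * ead            # per-scenario expected portfolio loss

print(f"Expected loss:          {losses.mean():,.0f}")
print(f"99.9th percentile loss: {np.percentile(losses, 99.9):,.0f}")
```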

Challenges and best practices

Challenges

  1. Data management complexity: With inputs drawn from heterogeneous sources—loan systems, CRM tools, macroeconomic feeds—ensuring harmonised, lineage-tracked, and de-duplicated data is non-trivial.
  2. Model overfitting: Sophisticated ML models can overfit training data and underperform in live environments. This is particularly dangerous for low-default portfolios (LDPs) where training data is sparse.
  3. Regulatory compliance: Evolving regulatory guidance, such as SR 11-7 (US) or SS1/23 (UK), demands not just modelling expertise but also compliance workflows, governance documentation, and internal audit trails.

Best practices

  1. Implement strong data governance: Centralise data policies, enforce golden source definitions, and document data flows using metadata catalogues.
  2. Adopt robust validation frameworks: Employ out-of-time and out-of-sample tests and Bayesian model averaging, and integrate scenario trees to validate under multiple future paths (an out-of-time sketch follows this list).
  3. Foster a risk-aware culture: Establish cross-functional MRM committees, incentivise transparent override documentation, and regularly educate business lines on model limitations.
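
As one illustration of the validation point above, here is a minimal out-of-time split, assuming pandas and scikit-learn; the data is randomly generated, so the resulting Gini is meaningless except as a demonstration of the mechanics:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "obs_date": pd.to_datetime(
        ["2021-03-31", "2021-09-30", "2022-03-31", "2022-09-30"] * 500),
    "score": rng.uniform(300, 850, 2_000),
    "defaulted": rng.integers(0, 2, 2_000),   # random flags for illustration only
})

# Out-of-time split: develop on older cohorts, validate on the newest ones
cutoff = pd.Timestamp("2022-01-01")
develop = df[df["obs_date"] < cutoff]
validate = df[df["obs_date"] >= cutoff]
print(f"Develop rows: {len(develop)}, validate rows: {len(validate)}")

# Gini = 2*AUC - 1 on the held-out, later period (lower score = riskier)
auc = roc_auc_score(validate["defaulted"], -validate["score"])
print(f"Out-of-time Gini: {2 * auc - 1:.3f}")
```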

Conclusion

In today’s complex financial risk environment, MRM is no longer just a regulatory requirement—it is a strategic advantage. Institutions that embrace cutting-edge techniques such as Bayesian modelling, automated validation, and robust, lineage-driven data governance are not only better equipped to satisfy regulatory demands but also to make smarter, faster, and more informed decisions. At the core of this transformation lies one non-negotiable foundation: high-integrity data. Without reliable, traceable, and consistent data, both compliance and competitive edge begin to erode. When data and analytics are applied with scientific rigour and operational discipline, MRM evolves from a box-ticking exercise into a critical enterprise capability—powering resilience, agility, and long-term value creation.

Santanak Datta, Manager, Grant Thornton Bharat, has also contributed to this article.

Learn more about how our Data and Analytics Advisory services can help you