Federal Housing Finance Agency
Advisory Bulletin

Model Risk Management Guidance

Number: AB 2022-03
Pertains To: All


AB 2022-03: Supplemental Guidance to Advisory Bulletin 2013-07 - Model Risk Management Guidance



The Federal Housing Finance Agency (FHFA) is issuing this Advisory Bulletin (AB) as supplemental guidance to FHFA AB 2013-07: Model Risk Management Guidance, published on November 20, 2013. This AB is applicable to Freddie Mac, Fannie Mae,[1]​ the Federal Home Loan Banks (FHLBanks), and the Office of Finance (OF) (collectively, the regulated entities[2]) and clarifies and expounds on various topics covered in FHFA's existing guidance.   

This AB's guidance, formatted as Frequently Asked Questions (FAQs), provides supplemental guidelines that address gaps in AB 2013-07 prompted by changes in model-related technologies and by questions generated from the FHLBanks' expanded use of complex models. The supplemental guidance also addresses model documentation, the communication of model limitations, model performance tracking, on-top adjustments, challenger models, model consistency, and internal stress testing.


Since the publication of AB 2013-07, we have observed changes in model-related technologies that have prompted changes in guidance and generated questions regarding existing guidance. The advent of cloud technology and artificial intelligence/machine learning techniques has led FHFA to issue specific guidance on these topics.[3] However, the issuance of that guidance has created gaps in AB 2013-07.

In addition, the FHLBanks have increased the use of models, employing internally developed models as well as complex vendor models. Since the issuance of AB 2013-07, FHFA has also amended the regulation addressing FHLBank capital requirements[4] and issued related FHLBank guidance on modeling. Specifically, FHFA issued additional guidance on market risk models (AB 2016-02; AB 2018-01) and mortgage credit risk models (AB 2018-02).[5] The FHLBanks' expanded model use as well as recent FHFA regulations and guidance applicable to the FHLBanks have also created the need for expanded clarification of AB 2013-07.[6]    


1.  Model Risk Management Framework

1(a).  How should “less complex” entities address expectations in AB 2013-07?
Model risk management should be commensurate with a regulated entity's model use and risk exposure. AB 2013-07 distinguishes between “complex” (Fannie Mae and Freddie Mac) and “less complex” (FHLBanks and OF) entities. Over time, the FHLBanks have expanded the scope, scale, and complexity of their modeling activities. Thus, the FHLBanks and OF should be attentive to changes in the complexity, impact, and scope of their modeling environments and modify their model risk management practices accordingly. Importantly, the distinction between “complex” and “less complex” does not exempt “less complex” regulated entities from the expectations in AB 2013-07, but it could affect the frequency and rigor of certain model risk management practices.

1(b).  Does the existing definition of “model use" in AB 2013-07 encompass all potential model applications considering recent changes to model uses?
AB 2013-07 defines model use as “using a model's output as a key basis for informing business decision-making, managing risk, or developing financial reports.” The adoption of artificial intelligence and machine learning techniques has expanded model use beyond business decision-making, risk management, and the development of financial reports. The regulated entities employ artificial intelligence and machine learning for various business processes (e.g., productivity tools such as facial recognition for access management and document digitization).

Although FHFA has articulated expectations for risk management of artificial intelligence and machine learning in AB 2022-02: Artificial Intelligence/Machine Learning Risk Management (Feb. 10, 2022), the governance of models used for business decision-making, risk management, and financial reporting should still adhere to the expectations outlined in AB 2013-07. Models not directly used for those purposes should follow a governance framework commensurate with the risk, consistent with AB 2013-07. For example, if a model is used for scanning and digitizing documents, controls appropriate to the process should be developed to manage the risk. In addition to AB 2013-07, other appropriate FHFA guidance should be considered and applied in those instances.[7]

1(c).  What are the expectations for mapping of key dependencies on external model-related data, software, storage, and technology?
Since the publication of AB 2013-07, FHFA has observed a wider adoption of technologies in the mortgage industry. Many of these technologies reside outside the regulated entities and are largely beyond their control. Examples include cloud servers, vendor models, and external data used by the regulated entities as inputs to their models. Although FHFA has published guidance related to externally sourced technologies, such as AB 2018-04: Cloud Computing Risk Management (Aug. 14, 2018) and AB 2018-08: Oversight of Third-Party Provider Relationships (Sept. 28, 2018), FHFA expects the regulated entities to take a macro-prudential view of the risks posed by externally sourced data and technologies. The regulated entities should map their external dependencies to significant internal systems and processes to determine their systemic dependencies and interconnections. In particular, the regulated entities should maintain an inventory of key dependencies on externally sourced models, data, software, and cloud providers. This inventory should be regularly updated, reviewed by senior management, and presented to the board of directors as deemed appropriate.
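For illustration only, the dependency mapping described above can be sketched as a simple inventory linking external dependencies to the internal systems that rely on them; the provider and system names below are hypothetical, not drawn from this bulletin.

```python
# Hypothetical sketch of a key-dependency inventory. Each record maps an
# externally sourced dependency (model, data, software, or cloud provider)
# to an internal system that relies on it. All names are illustrative.

dependencies = [
    # (external dependency, dependency type, internal system)
    ("VendorPrepayModel", "vendor model",  "MarketRiskEngine"),
    ("MacroDataFeed",     "external data", "MarketRiskEngine"),
    ("MacroDataFeed",     "external data", "CreditLossForecast"),
    ("CloudProviderA",    "cloud",         "CreditLossForecast"),
]

def systems_affected_by(dependency, deps):
    """Return the internal systems exposed if the given external
    dependency fails or changes, to support interconnection analysis."""
    return sorted({system for dep, _, system in deps if dep == dependency})

# a single external data feed can expose several internal systems at once
affected = systems_affected_by("MacroDataFeed", dependencies)
```

A regularly refreshed report of such exposures is one way the inventory review described above could be operationalized.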

1(d).  How should a regulated entity treat processes or components of modeling processes that incorporate qualitative elements or judgments?

AB 2013-07, in its definition of models, covers quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature. 

2.  Model Documentation

2(a).  What elements should the regulated entities' model use policies and procedures include to ensure that model documentation is sufficient?
For all model uses, a regulated entity should have policies and procedures in place to ensure model owners compile and maintain comprehensive model documentation that is sufficiently detailed to enable a qualified third party to independently operate and maintain the model for each model use. A regulated entity's processes should reliably ensure that comprehensive model documentation is complete prior to the independent model validation for a specific use. A regulated entity should also have processes in place for revising or augmenting the documentation based on the results of the model validation prior to model implementation. Policies and procedures that require updates to model documentation are important to memorialize all model components correctly and comprehensively for each model use.

2(b).  How should a regulated entity address and mitigate the risks associated with model limitations across the model lifecycle?
A regulated entity should clearly document significant model limitations within the model documentation, along with any root causes and mitigation strategies where appropriate, and should clearly communicate to the model user community limitations identified during model development and model validation. Model limitations are not only technical: they arise in part from a model's shortcomings, approximations, and uncertainties, and they are also a consequence of underlying assumptions that may restrict the scope of appropriate use to a limited set of specific circumstances and situations. Decision makers need to understand a model's limitations to avoid using it in ways that are inconsistent with its original intent.

3.  Model Validation Program

3(a).  Should a regulated entity's internal model validation guidelines provide specific standards for an independent validation?
A regulated entity's internal model validation guidelines and practices should align with AB 2013-07's specific standards to ensure independent review and challenge to model assumptions, mathematical formulae, and inputs. The internal guidelines should include a sufficient level of detail to ensure that qualified experts perform the review at a sufficient breadth and depth. Further, the model validation report should include thorough descriptions of these reviews and relevant outcomes. An independent model validation should extend beyond an affirmation of the model's correctness and reasonableness. 

3(b).  How should the regulated entities evaluate third-party model validations?
When using an external vendor to complete an independent model validation, the regulated entity's model validation group remains accountable for the quality, recommendations, and opinions of any third-party review. When evaluating a third-party model validation, a regulated entity should implement model risk management policies and practices that ensure the vendor's validation meets the specific standards for an independent validation included in AB 2013-07.

3(c).  How should model validation findings and other model risk issues be monitored and reported?
A regulated entity should establish processes for monitoring the remediation status of identified model validation findings and other model risk issues and for providing reports to senior management and management-level committees. Findings and issues with production models that are significant in nature should be governed in accordance with the regulated entity's issues management program.  

3(d).  What are acceptable practices for effective challenge?
Model risk management policies, as AB 2013-07 notes, should include acceptable practices for “effective challenge” of models. Effective challenge involves critical analysis by independent, informed parties who can identify model limitations, evaluate assumptions, and recommend appropriate changes. The efficacy of effective challenge depends on a combination of incentives, competence, and influence. For example, effective challenge requires that the regulated entities invest human capital resources in qualified personnel and ensure the distinct separation of the model challenge process from the model development process. In addition, the regulated entity should foster a corporate culture where senior levels of management give those responsible for effective challenge processes explicit authority, support, and stature within the organization.

3(e).  Do challenger or benchmark models play a role in the effective challenge of models?
The regulated entities should have a well-developed effective challenge process in place to assess the effectiveness of models and the reasonableness of key assumptions. This may include a champion-challenger framework in which challenger models give an alternative perspective to a primary, or champion, model and provide a point of comparison allowing for analysis of model results and sensitivity of the output. Potential challenger models should be well vetted and employ alternative approaches to estimation, which may include theoretical or methodological differences from the primary model. Effective challenge should be in place at all levels of estimation where model or estimation risk is affected, including overall loss estimates, component-level estimates, assumptions, and component-level inputs. The regulated entities should document the effective challenge process as well as any changes that result from it and the rationale for their decisions.

Although benchmark models should not be considered replacements for the primary model, they provide a point of comparison for understanding how the primary model's results differ from other widely referenced models used in industry. Benchmark models may also aid in understanding the primary model.
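As an illustration of the champion-challenger comparison described above, the sketch below scores two stand-in models on the same inputs and flags divergences for analyst review; the model forms, loan fields, and tolerance are hypothetical assumptions, not FHFA prescriptions.

```python
# Illustrative champion-challenger comparison. Both "models" are stand-in
# functions with different specifications; real models would be far richer.

def champion(loan):
    # primary model: hypothetical default-probability formula
    return 0.02 + 0.5 * loan["ltv"] * loan["rate"]

def challenger(loan):
    # challenger: alternative functional form for the same estimate
    return 0.015 + 0.6 * loan["ltv"] * loan["rate"]

def compare(loans, tolerance):
    """Flag loans where the champion and challenger estimates diverge
    by more than `tolerance`, supporting analysis of model sensitivity."""
    flagged = []
    for loan in loans:
        gap = abs(champion(loan) - challenger(loan))
        if gap > tolerance:
            flagged.append((loan["id"], gap))
    return flagged

loans = [
    {"id": "A1", "ltv": 0.80, "rate": 0.05},
    {"id": "A2", "ltv": 0.95, "rate": 0.07},
]
flagged = compare(loans, tolerance=0.0015)
```

The flagged divergences, together with the documented rationale for resolving them, would feed the effective challenge record-keeping described above.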

3(f).  What should a regulated entity consider when deciding if an end-user computing tool (EUC) or calculator should be subject to the guidance set forth in AB 2013-07?
The increase in the complexity of, and reliance on, EUCs and calculators to carry out critical financial operations has created the need for enhanced EUC/calculator risk mitigation. For example, a regulated entity should classify a significant or important EUC, calculator, or other data-generating process as a model if the EUC, calculator, or process (1) feeds into or out of a model; (2) makes assumptions; and/or (3) incorporates thresholds or quantitative methodologies. Additionally, EUCs and calculators may be integrated into broader modeling processes. When applicable, a regulated entity should also treat integrated EUCs and calculators as models and subject them to model validations and governance in accordance with the frequency and rigor outlined in the regulated entity's model risk management policies and procedures. A regulated entity that includes EUCs and calculators as part of the broader modeling process is likely already subjecting those EUCs and calculators to the guidance set forth in AB 2013-07.

4.  Model Control Framework

4(a).  How is model performance tracking integral to the model control framework?
A regulated entity should have policies and procedures in place for ongoing model performance tracking (MPT) for each significant model use prior to model production implementation. Performance tracking helps ensure model integrity through the business cycle. Properly designed model performance tracking metrics, thresholds, and alerts provide the model diagnostics necessary to identify and measure sources of model error. Model diagnostics are intended to capture model performance degradation in a timely manner and facilitate the appropriate corrective action.

MPT metrics and thresholds should be tied to both downstream use effects and a model's integrity as measured by the accuracy of the key outputs. Model owners are expected to involve model users and model risk management teams to ensure MPT metrics are appropriate, and thresholds are set below the risk tolerance of the business unit.

4(b).  What should a regulated entity consider when establishing thresholds for model performance tracking?
Ongoing model performance tracking should include well-supported and documented thresholds and procedures for responding to outputs outside these thresholds. A regulated entity should select, fully document, and reevaluate, on an ongoing basis, thresholds for each significant model use. Because models are not the sole driver of business decisions and risk management, model performance thresholds and alerts should be set below the point where model error approximates or equals management risk limits or risk appetite.
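A minimal sketch of the threshold logic described above, with the alert level set below a management risk limit; the metric and numbers are illustrative assumptions.

```python
# Illustrative MPT threshold check. The alert threshold sits below the
# management risk limit so degradation is investigated before model
# error approaches the risk appetite. All numbers are hypothetical.

RISK_LIMIT = 0.10       # assumed management limit on relative forecast error
ALERT_THRESHOLD = 0.06  # deliberately set below the risk limit

def check_performance(actual, predicted):
    """Return (relative_error, status) for one tracking observation."""
    error = abs(actual - predicted) / abs(actual)
    if error >= RISK_LIMIT:
        return error, "breach"   # escalate to senior management
    if error >= ALERT_THRESHOLD:
        return error, "alert"    # investigate model performance
    return error, "ok"
```

In this sketch, `check_performance(actual=105.0, predicted=112.0)` produces an "alert" status: the roughly 6.7 percent error exceeds the alert threshold but not the risk limit, so the issue surfaces before the risk appetite is reached.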

4(c).  Should model performance tracking include an evaluation of model adjustments?  
Ongoing model performance tracking should also include monitoring and analysis of any model overrides, on-top adjustments, recalibration, and use of (or changes to) tuning parameters. This monitoring should include documented, ongoing analysis establishing that any adjustments are appropriate for the model uses to which they are applied.

4(d).  How should a regulated entity use model performance tracking metrics and reports?
MPT results show the model's reasonableness, robustness, and range with respect to its historical performance. Backward-looking performance metrics provide a useful measure of error due to the model. In both normal and stressed economic environments, model performance reports can help identify a model's fundamental flaws or weaknesses. Model performance reports should include aggregate model errors that directly affect business decisions and risk management. Upstream model errors can propagate to downstream models, which can amplify the errors.
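The propagation effect noted above can be illustrated with a toy calculation in which a downstream model is sensitive to an upstream input; the formulas and numbers are hypothetical.

```python
# Toy illustration of upstream error amplification. A 5% error in an
# upstream estimate feeds a downstream calculation that is nonlinearly
# sensitive to it, producing a larger downstream error. Hypothetical only.

def upstream_prepay(true_rate, relative_error):
    """Upstream model output, contaminated by a relative error."""
    return true_rate * (1 + relative_error)

def downstream_metric(prepay_rate):
    """Stand-in downstream model with nonlinear sensitivity to its input."""
    return 100.0 / prepay_rate ** 2

true_rate = 0.08
baseline = downstream_metric(upstream_prepay(true_rate, 0.00))
with_error = downstream_metric(upstream_prepay(true_rate, 0.05))

# here the 5% upstream error grows to roughly a 9.3% downstream error
downstream_rel_error = abs(with_error - baseline) / baseline
```

Aggregating such propagated errors across a model chain is one reason performance reports should capture errors that directly affect business decisions, not only each model in isolation.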

4(e).  Should regulated entities document support for on-top adjustments that align model predictions to actual results?
Periodically, model outputs will require on-top adjustments to produce more accurate results. These adjustments can occur at the component level or be applied to the overall result depending on the need for the adjustment. The regulated entities should develop and document a clear and transparent process for determining (1) when on-top adjustments to models are needed; (2) how the adjustment will be applied; and (3) the length of time for having these adjustments in place before finding a permanent solution.
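One way to operationalize the documentation process described above is a structured record per adjustment; the field names and values below are illustrative, not a prescribed format.

```python
# Hypothetical record for a single on-top adjustment, capturing when it
# is needed, how it is applied, and how long it may remain in place.
from datetime import date

adjustment = {
    "model": "CreditLossForecast",  # illustrative model name
    "scope": "component",           # component-level vs. overall result
    "reason": "underprediction observed in a recent origination segment",
    "method": "additive overlay to the segment loss rate",
    "effective": date(2022, 6, 1),
    "review_by": date(2022, 12, 1), # deadline for a permanent solution
}

def needs_review(adj, today):
    """Flag adjustments past their review date so they are removed or
    re-justified rather than persisting silently."""
    return today > adj["review_by"]
```

A periodic sweep of such records for expired `review_by` dates would surface adjustments that have outlived their intended temporary use.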

4(f).  Is it sufficient to state that assumptions or on-top adjustments are conservative?
Simply indicating that model assumptions or on-top adjustments are “conservative” is a qualitative assessment and does not provide sufficient support for a quantitative assumption or adjustment. A regulated entity should provide documentation to support significant modeling assumptions or on-top adjustments whether they are “conservative” or not.

4(g).  What role does effective challenge play in establishing on-top adjustments?
When on-top adjustments are applied, the regulated entities should document the justification for the adjustment, articulate its effect, and state for how long it will be applied. On-top adjustments should also be subjected to effective challenge. Model risk management should track and review on-top adjustments to obtain a broad view that may reveal enterprise-wide issues.

4(h).  How should a regulated entity manage the recurrent use of on-top adjustments?
The use of an on-top adjustment should initiate a review process to determine its cause. The recurrent use of on-top adjustments in model estimates can be an indicator of insufficient model or process robustness and should trigger a review. This review should assess whether the causes leading to use of the on-top adjustment are temporary. If the on-top adjustment is deemed recurrent rather than temporary, then the model or forecast process may require updating. If updates are necessary, the regulated entities should have in place a feedback process that engages the relevant committees, business units, or individuals in a manner that allows model owners to promptly execute any necessary updates to the models. When on-top adjustments remain in use, documenting the need to maintain them during the next validation cycle is an important feature of any review process, as is full documentation of the review's findings and the rationale for any decision and outcome.

4(i).  Is a regulated entity expected to incorporate modeling assumptions and inputs in the same manner across various model uses?
The regulated entities' policies and procedures should ensure that models, assumptions, and inputs, such as housing price appreciation or macroeconomic factors, are used in a consistent manner across the various financial and business practices where applicable. However, model flexibility is desirable to address circumstances in which models and assumptions cannot be used consistently. For example, if accounting rules prescribe a specific use, then the regulated entity would need a process to address that use and to evaluate and assess the effect of the inconsistency. The regulated entity should document the occurrence, the reason for the differences, and if it has a material effect, determine what steps may be needed to mitigate the effect. 

4(j).  What are model implementation risks and how can these be mitigated?
Errors can occur at any point from design through implementation; thus, model risk management should include disciplined and knowledgeable development, testing, and implementation processes. Data and other model inputs used to generate model results often rely on EUCs, upstream models, or other supplemental data-generating processes that can be subject to human or operational error. A regulated entity should regularly evaluate and confirm that data or other input-generating processes align with the documented model theory and have not been subject to human error.

5.  Internal Scenario and Sensitivity Analysis and Stress Testing

5(a).  What are FHFA's model expectations for scenario analysis?
A regulated entity should use scenario analyses to assess the reliability, effectiveness, and stability of the forecasts its models produce in a variety of situations and to identify potential issues with the models that can lead to inaccurate results. Scenario analysis should be distinguished from stress testing, although both can be applied enterprise-wide and will often employ the regulated entities' most significant models. Internal scenario analysis and stress testing should be conducted on a recurring basis and also as needed.

5(b).  What are FHFA's model expectations for sensitivity analyses?
Sensitivity analysis can be conducted to assess the effect of many model-related factors (e.g., variables, model specification, key assumptions, constraints on intermediate outputs such as a loss severity floor). Because model estimates are highly influenced by underlying assumptions about forecasted values, the regulated entity should assess how different assumptions and processes can affect the estimates. The regulated entity should use realistic expectations and an approach that makes intuitive sense when stressing key variables. Sensitivity analyses should be completed for each significant component model as well as for the overall model or forecast. A regulated entity should vet the thresholds or criteria it uses for sensitivity analysis to ensure they are meaningful and realistic.
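The assumption-stressing approach described above can be sketched as a one-factor-at-a-time analysis; the loss model and assumption values are hypothetical stand-ins.

```python
# Minimal one-factor-at-a-time sensitivity sketch: shift a single
# assumption around its baseline and record the change in model output.
# The model form and numbers are hypothetical.

def loss_model(default_rate, severity):
    """Stand-in loss estimate: exposure x default rate x loss severity."""
    return 1_000_000 * default_rate * severity

baseline = {"default_rate": 0.02, "severity": 0.35}

def sensitivity(assumption, relative_shifts):
    """Return the change in output for each relative shift applied to
    one assumption while the others stay at baseline."""
    base_out = loss_model(**baseline)
    results = {}
    for shift in relative_shifts:
        scenario = dict(baseline)
        scenario[assumption] = baseline[assumption] * (1 + shift)
        results[shift] = loss_model(**scenario) - base_out
    return results

# e.g., a +/-10% shift in severity moves this loss estimate by +/-700
severity_effects = sensitivity("severity", [-0.10, 0.10])
```

Repeating the exercise for each significant component, and for the overall forecast, gives the per-assumption effect table that the vetting of thresholds and criteria would draw on.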

5(c).  What are FHFA's model expectations for internal stress testing?
Stress testing is a critical tool for a regulated entity's risk management because it alerts senior management to unexpected adverse outcomes for a range of potential risks. Stress testing may also enable the regulated entity to better understand its models' expected losses by exposing model or risk factor behavior that may not otherwise be realized. This may lead to reconsideration of existing model formulations in ways that improve performance or enhance the usefulness of the model. Stress test scenarios should be designed to capture risks relevant to model predictions for each model use. Scenarios should be reasonable and plausible, incorporating historical events as well as hypothetical future events not observed historically (e.g., scenarios without government intervention). Stress test scenarios should also consider potential systematic issues that may adversely affect the model's forecasts.

A stress test is designed to simulate the effect of one or more shocks or prolonged downturns on the entire regulated entity. A “shock” is a large, sudden, adverse change in the state of the external world or the internal state of a regulated entity; it appears suddenly, and its effects are felt immediately. A “prolonged downturn” is a large, adverse change in the state of the world that emerges and becomes apparent slowly over time. Stress scenarios should be designed to ensure that, in the aggregate, the scenario is sufficiently stressful to challenge the risk management processes, capital, and earnings positions of the regulated entity. Scenario severity should consider countercyclical scenario design principles (i.e., a more pronounced economic downturn when current conditions are strong and a less pronounced economic downturn when current conditions are weak).

Each scenario variable follows a predetermined path over time. For computational ease, a stress test can assume that the regulated entity has “exact foresight,” a deterministic approach in which, at each point in time within the planning horizon, the regulated entity knows the exact path that a variable will follow. Alternatively, a stress test can assume that a regulated entity has only “incomplete foresight,” meaning that at each point in time the regulated entity can only imperfectly forecast a variable's future path. To ensure that stress tests are realistic regarding what can be known ex ante about the future, stress tests should include incomplete foresight when feasible. Incomplete foresight incorporates a more stochastic approach to scenario generation in which outcomes are random or uncertain. In addition, stress tests should provide a range of potential losses in addition to point estimates, and these results should be regularly reported to senior management so that they are aware of the output uncertainties associated with models.
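The contrast between exact and incomplete foresight can be sketched as follows; the loss function, rate path, and volatility are hypothetical.

```python
# Exact foresight: one predetermined scenario path gives a point estimate.
# Incomplete foresight: stochastic paths around the scenario give a range
# of potential losses. All values are hypothetical.
import random

def loss_given_path(rate_path):
    """Stand-in loss model: losses scale with the average rate on the path."""
    return 1000.0 * sum(rate_path) / len(rate_path)

# exact foresight: the variable's full path is known in advance
exact_path = [0.03, 0.04, 0.05, 0.06]
point_estimate = loss_given_path(exact_path)

# incomplete foresight: perturb the path to reflect forecast uncertainty
rng = random.Random(42)  # seeded so the sketch is reproducible
simulated = [
    loss_given_path([r + rng.gauss(0.0, 0.005) for r in exact_path])
    for _ in range(1000)
]
loss_range = (min(simulated), max(simulated))
```

Reporting `loss_range` alongside `point_estimate` is one way to convey the range of potential losses, and the associated output uncertainty, to senior management.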


Related Guidance

Model Risk Management Guidance, FHFA AB 2013-07 (Nov. 20, 2013).

Operational Risk Management, FHFA AB 2014-02 (Feb. 18, 2014).

FHLBanks Changes to Internal Market Risk Models, FHFA AB 2016-02 (Apr. 21, 2016).

Data Management and Usage, FHFA AB 2016-04 (Sept. 29, 2016).

Information Security Management, FHFA AB 2017-02 (Sept. 28, 2017).

Scenario Determination for Market Risk Models Used for Risk-Based Capital, FHFA AB 2018-01 (Feb. 7, 2018).

FHLBank Use of Models and Methodologies for Internal Assessments for Mortgage Asset Credit Risk, FHFA AB 2018-02 (Apr. 26, 2018).

Cloud Computing Risk Management, FHFA AB 2018-04 (Aug. 14, 2018).

Oversight of Third-Party Provider Relationships, FHFA AB 2018-08 (Sept. 28, 2018).

Business Resiliency Management, FHFA AB 2019-01 (May 7, 2019).

Compliance Risk Management, FHFA AB 2019-05 (Oct. 3, 2019).

Enterprise Risk Management Program, FHFA AB 2020-06 (Dec. 11, 2020).

Artificial Intelligence/Machine Learning Risk Management, FHFA AB 2022-02 (Feb. 10, 2022).

12 CFR part 1236, Appendix, Prudential Management and Operations Standards.

12 CFR part 1277, Federal Home Loan Bank Capital Requirements, Capital Stock and Capital Plans.

[1] Common Securitization Solutions, LLC (CSS) is an “affiliate” of both Fannie Mae and Freddie Mac, as defined in the Federal Housing Enterprises Financial Safety and Soundness Act of 1992, as amended, 12 U.S.C. 4502(1), and this AB applies to it.

[2] The OF is not a “regulated entity” as the term is defined in the Federal Housing Enterprises Financial Safety and Soundness Act, as amended. See 12 U.S.C. 4502(20). However, for convenience, references to the “regulated entities” in this AB should be read to also apply to the OF.

[3] Cloud Computing Risk Management, FHFA AB 2018-04 (Aug. 14, 2018); Artificial Intelligence/Machine Learning Risk Management, FHFA AB 2022-02 (Feb. 10, 2022).

[4]​ 12 CFR part 1277—Federal Home Loan Bank Capital Requirements, Capital Stock and Capital Plans; see 84 Fed. Reg. 5426 (Feb. 20, 2019) (amending FHFA's regulation on FHLBank capital requirements).

[5] FHLBank Changes to Internal Market Risk Models, FHFA AB 2016-02 (Apr. 21, 2016); Scenario Determination for Market Risk Models Used for Risk-Based Capital, FHFA AB 2018-01 (Feb. 7, 2018); FHLBank Use of Models and Methodologies for Internal Assessments for Mortgage Asset Credit Risk, FHFA AB 2018-02 (Apr. 26, 2018).

[6] The capital rule (12 CFR part 1277—Federal Home Loan Bank Capital Requirements, Capital Stock and Capital Plans) requires the FHLBanks to use models for credit risk (as opposed to their previous reliance on credit ratings). FHFA's Division of Bank Regulation (DBR) can direct an FHLBank to revise its credit risk methodology or model to address any deficiencies identified by FHFA.

The capital rule also requires that the FHLBanks seek approval for changes to their market risk models. An FHLBank making a change to a market risk model should follow the process outlined in AB 2016-02.

[7​]​ Other appropriate FHFA guidance includes, for example:  Artificial Intelligence/Machine Learning Risk Management, FHFA AB 2022-02 (Feb. 10, 2022); Enterprise Risk Management Program, FHFA AB 2020-06 (Dec. 11, 2020); Compliance Risk Management, FHFA AB 2019-05 (Oct. 3, 2019); Business Resiliency Management, FHFA AB 2019-01 (May 7, 2019); Oversight of Third-Party Provider Relationships, FHFA AB 2018-08 (Sept. 28, 2018); Information Security Management, FHFA AB 2017-02 (Sept. 28, 2017); Data Management and Usage, FHFA AB 2016-04 (Sept. 29, 2016); Operational Risk Management, FHFA AB 2014-02 (Feb. 18, 2014).


FHFA has statutory responsibility to ensure the safe and sound operations of the regulated entities and the Office of Finance. Advisory bulletins describe FHFA supervisory expectations for safe and sound operations in particular areas and are used in FHFA examinations of the regulated entities and the Office of Finance. Questions about this advisory bulletin should be directed to: SupervisionPolicy@fhfa.gov.
