Model & AI Policy
Excerpt
SEE THE NEXT MOVE LTD
Model & AI Risk Governance Policy
Author: Gilles Bonelli
Effective Date: 01 January 2024
Version: 1.0
1. Purpose of this Policy
This policy sets out how SEE THE NEXT MOVE LTD governs the design, development, and deployment of Artificial Intelligence (AI) and statistical models, including Generative AI agents built using large language models (LLMs), with a focus on maintaining compliance with the expectations of regulated institutions—primarily banks, insurers, and other financial services firms.
SEE THE NEXT MOVE LTD develops Custom GPTs that may be used by these institutions to support internal decision-making, risk analysis, business planning, or stakeholder engagement. This policy outlines how we embed risk controls, documentation standards, ethical guidelines, and governance procedures throughout the AI lifecycle to support clients’ regulatory compliance.
It ensures alignment with regulatory frameworks such as:
-
PRA Supervisory Statement SS1/23 (Model Risk Management Principles for Banks)
-
FCA Consumer Duty and DP5/22 (Artificial Intelligence in Financial Services)
-
SR 11-7 (Federal Reserve guidance on model risk)
-
EU AI Act (Final text published March 2024)
2. Scope and Applicability
This policy applies to:
-
All Custom GPTs and AI systems developed and published by SEE THE NEXT MOVE LTD
-
GPT-based tools, agents, and copilots used by clients in banking, insurance, and financial services
-
AI solutions used to generate forecasts, advice, analysis, or recommendations, particularly in regulated decision environments (e.g., credit, risk, fraud, KYC, investment, planning)
It covers the entire model lifecycle: from concept, development, testing, deployment, and monitoring to decommissioning.
3. Governance and Accountability
SEE THE NEXT MOVE LTD maintains a formal governance structure for AI and model risk:
-
Ownership: Gilles Bonelli, as Founder and Director, is accountable for all model governance practices.
-
Review Cycle: This policy and all model controls are reviewed annually or when prompted by regulatory or operational change.
-
Advisory Input: The company engages independent subject matter experts in data privacy, model risk, and responsible AI to validate high-risk deployments.
We ensure that every GPT developed is traceable to its design intent, development rationale, and deployment context, as required by leading supervisory expectations.
4. Risk Management Framework
A formal Model and AI Risk Framework governs all GPTs built by SEE THE NEXT MOVE LTD. It includes the following components:
a. Risk Classification
Each GPT is categorised as low, moderate, or high risk based on:
-
Intended use (informational, advisory, or decision-support)
-
Operational impact
-
Degree of autonomy
-
User profile (internal analyst vs. external customer-facing tool)
b. Control Requirements
Each tier is linked to specific control requirements, including validation, human oversight, performance monitoring, auditability, and documentation.
c. Integration of Risk into Design
Design templates for GPTs include built-in checkpoints for:
-
Bias identification
-
Use-case scoping
-
Escalation triggers
-
Explainability constraints
-
Data integrity expectations
5. Policy, Standards, and Procedures
SEE THE NEXT MOVE LTD maintains internal standards for:
-
Model documentation: describing purpose, logic, training context, and known limitations
-
Validation procedures: including adversarial testing, red-teaming, and third-party review
-
Deployment protocol: enforcing access restrictions and usage disclaimers
-
Maintenance planning: including expiry dates, review triggers, and deprecation guidelines
We ensure each GPT has a fully traceable development record, compliant with the governance and validation documentation requirements under SR11-7 and PRA SS1/23.
6. Data Governance and Input Controls
To ensure model reliability and compliance with data regulations, the following are enforced:
-
Data Lineage: All prompts, grounding documents, and retrieval systems are documented and version-controlled.
-
Privacy and Security: Personally identifiable information (PII) is neither used nor retained in Custom GPT logs unless explicitly anonymised and authorised.
-
Data Quality: Sources are vetted for credibility, representativeness, and regulatory reliability (e.g., public financial reports, climate disclosures, regulatory filings).
We align with FCA expectations for data governance and the EU AI Act’s focus on input quality and traceability.
7. Explainability, Transparency, and Human Oversight
In accordance with PRA and EU AI Act principles, all Custom GPTs are built to ensure:
-
Explainability: Each GPT includes a Model Card detailing how it operates, when to use it, and what its limitations are.
-
Transparency: Users are clearly informed that they are interacting with an AI system, and not a licensed human advisor.
-
Oversight: High-risk GPTs include mechanisms for escalation to human analysts or risk professionals.
SEE THE NEXT MOVE LTD does not enable fully autonomous decision-making in financial risk domains. All GPT outputs are explicitly advisory unless integrated by clients into pre-approved workflows with human review.
8. Validation and Independent Review
All GPTs undergo:
-
Technical validation to assess prompt scope, retrieval accuracy, and stability across edge cases
-
Bias and fairness checks, particularly for models that touch on creditworthiness, risk rating, investment selection, or hiring-related analytics
-
Independent review for models deployed in regulated sectors or generating high-risk outputs
Where third-party models or plugins are integrated, a conformity assessment is performed, in line with the EU AI Act’s requirements for high-risk systems.
9. Monitoring, Logging, and Continuous Review
SEE THE NEXT MOVE LTD maintains automated and manual monitoring processes for all live Custom GPTs:
-
Usage Logging: Prompts and outputs are logged (pseudonymised) for review and failure case detection
-
Performance Monitoring: Monthly audits of GPT accuracy, hallucination rates, and user-reported issues
-
Feedback Loops: User feedback is reviewed and integrated into design improvements or risk control patches
Custom GPTs are updated, restricted, or retired based on performance degradation or regulatory change.
10. Regulatory Alignment and Compliance Assurance
SEE THE NEXT MOVE LTD ensures alignment with the following frameworks in practice:
-
PRA SS1/23: by maintaining a full model inventory, embedding lifecycle governance, and ensuring independent review of models used in financial decision-support
-
FCA Principles and Consumer Duty: by maintaining transparency, avoiding harmful bias, and ensuring that AI advice is presented in a non-misleading, user-centric manner
-
SR11-7 (Fed): by applying controls over model development, monitoring, validation, and governance, regardless of whether statistical or generative in nature
-
EU AI Act: by classifying AI systems, restricting high-risk uses without conformity assessment, and embedding human oversight in all advisory GPTs
11. Communication and Policy Dissemination
This policy is:
-
Embedded within the configuration documentation of all published GPTs
-
Available upon request for due diligence or third-party risk assessment processes
-
Updated regularly and version-controlled
Disclosures are made transparent for users and clients, with plain language summaries provided where necessary.
12. Contact and Escalation
SEE THE NEXT MOVE LTD maintains a direct escalation pathway for any concerns, including regulatory inquiries, data requests, or risk observations related to any published GPT.
Contact:
Gilles Bonelli, Founder & Director
Email: gilles.bonelli@seethenextmove.ai
Subject: Model Governance Escalation