
Market Insight

AI Governance in Investment Management: Insights from the Industry

6 March 2026

Artificial Intelligence (AI) is no longer a distant concept on the horizon of investment management; it’s already here, reshaping how teams work, make decisions, and manage risk.

At a recent industry panel, leading experts including Courtney Posner (Lowenstein), Nick Platt (Kudu Investments), Stephanie Sirois and Wendy Beer (Channel Diligence), Jim Leahy (Orical), Mike Fastert (Altaira), Justin Wood (Stone Coast), Bill Saltus (Morgan Stanley), Dan Katz (Quadrangle), and Michelle Noyes from the Alternative Investment Management Association (AIMA), came together to share insights on how investment managers, allocators, and compliance teams are navigating this rapidly evolving landscape.

They explored practical strategies, lessons learned, and the key themes that are helping firms harness AI responsibly while keeping human judgment and oversight at the centre of decision-making.

1. AI Adoption is Accelerating

AI use in asset management is growing rapidly. Firms delaying adoption risk falling behind, while unstructured adoption can create operational and regulatory risks. Asset managers need to think about AI risk differently, prioritising governance and accountability.

2. Governance is Essential

AI governance has moved from informal exploration to formal expectation. Investors and operational due diligence teams now expect written policies detailing permissible uses, oversight processes, and controls over third-party tools.

3. Regulatory and Compliance Implications

AI interacts with existing regulations, including fiduciary duties, privacy laws, and insider trading rules. Even AI tools not directly used for investment decisions may trigger obligations around data protection, personally identifiable information (PII), and material non-public information (MNPI).

4. Risk Monitoring and Accuracy

Small errors in AI outputs can escalate quickly. Firms must monitor for bias, drift, and accuracy while maintaining human oversight, especially in hiring, performance evaluation, and investor communications.

5. Vendor and Enterprise AI Oversight

Third-party AI tools must be secure, ring-fenced, and closely monitored. Clear contracts and technical safeguards are essential to protect firm data and maintain accountability.

6. Cross-Functional Accountability

While Chief Compliance Officers often oversee AI governance, effective management requires collaboration across legal, IT, operations, and investment teams. Roles, responsibilities, and escalation protocols should be clearly defined.

Practical Takeaways

As AI continues to reshape the financial services landscape, firms need clear, actionable steps to manage both opportunity and risk. The following best practices provide a framework for responsible AI adoption, helping organisations safeguard decision-making, maintain compliance, and build trust with clients and stakeholders.

  • Document an AI Policy: Define permissible uses, oversight mechanisms, and interim controls.
  • Maintain Human Oversight: Ensure human judgment guides AI-influenced decisions.
  • Monitor Accuracy and Bias: Regularly test models to prevent drift or unintended bias.
  • Audit Vendors: Confirm security, ring-fencing, and regulatory compliance.
  • Be Transparent: Reflect actual AI use in marketing, disclosures, and internal documentation.
  • Train Employees: Ensure staff understand AI risks, proper usage, and escalation processes.

A Look at AI in Hedge Fund Operations

A 2025 industry survey by AIMA found that 95% of surveyed hedge funds are now using generative AI in some capacity, marking a significant increase compared with previous years.

During the discussion, participants explored how AI is being integrated across hedge funds and their investment operations, highlighting key considerations for governance, risk, and practical implementation. From vendor management and operational applications to investment workflows and human oversight, the conversation underscored the importance of structured governance, careful vendor controls, and phased adoption to ensure accuracy, compliance, and value creation.

Governance, Risk, and Oversight

AI costs should only be charged to the fund if they benefit the fund directly, ensuring transparency and fairness for investors. This means distinguishing between firm-wide AI initiatives and fund-specific applications, and documenting the rationale for any cost allocations.

Legal documents like Limited Partnership Agreements (LPAs) should be carefully reviewed to ensure AI usage aligns with contractual obligations. LPAs may include specific clauses around fund expenses, fiduciary responsibilities, and operational limits. Reviewing these documents helps confirm that AI implementation, cost allocation, and workflow changes comply with existing agreements and regulatory expectations.

Vendor and Service Provider Management

Managing AI use through third-party vendors is critical to maintaining security, compliance, and operational integrity. Firms should exercise control over how external providers use AI, particularly when handling sensitive documents or confidential fund information. Clear boundaries, oversight, and regular monitoring help prevent data misuse or unintended risk exposure.

To formalise these controls, it is important to incorporate NDAs and contractual safeguards. These measures ensure that vendors understand their responsibilities, comply with regulatory requirements, and follow agreed-upon protocols. By embedding robust vendor management practices, firms can leverage external AI capabilities safely while protecting fund assets and investor interests.

Operational Applications

AI is becoming a trusted partner in the daily work of investment teams, helping people focus on what they do best. In legal, it handles repetitive tasks like contract review and KYC checks, freeing teams to make the judgment calls that really matter. Operations teams use AI to process and reconcile data faster, while investor relations can generate customised reports in moments, giving them more time to engage personally with clients.

But it’s people who bring it all together. Human oversight ensures AI outputs are accurate, compliant, and applied thoughtfully. By combining technology with human expertise, teams can work smarter, respond faster, and deliver better outcomes for clients.

Investment Applications

The panellists highlighted how AI is increasingly functioning like a team of junior analysts, supporting investment professionals by summarising research, tracking news, and identifying trends more efficiently.

The quality of the outputs, however, depends on the quality of the inputs. Structured data and a unified technology environment are essential; without them, even the most sophisticated AI can produce flawed or misleading results.

Human Oversight and Talent

Despite AI’s growing capabilities, human validation remains essential. Regulators, including the SEC, are themselves using AI, highlighting the importance of careful oversight and accountability.

To support responsible AI integration, firms are increasingly bringing in tech specialists, user experience designers, and operational staff. These roles ensure that AI tools are implemented effectively, workflows remain efficient, and outputs are accurate. This approach combines technological innovation with human judgment, enabling teams to work smarter and deliver better outcomes for clients.

Harnessing AI Responsibly: Governance, Oversight, and Best Practices

AI is transforming investment management, offering teams new levels of efficiency, insight, and scalability. Yet it’s important to remember that AI is a tool, not a replacement for human judgment. Successful deployment relies on maintaining human oversight, focusing on clearly defined use cases, and leveraging existing vendor partnerships before seeking new solutions. Teams also need to prioritise accuracy, change management, and staff adoption while preparing for the next stage of AI: agentic systems performing autonomous tasks with operational, legal, and fiduciary implications.

Caution is key. The age-old principle of “garbage in, garbage out” still holds true, and experimentation without oversight can easily mislead decisions. Promises of rapid deployment are often unrealistic, making strong governance the foundation for success. Firms that embed robust controls and careful oversight can harness AI responsibly, turning technological innovation into meaningful outcomes while managing operational, regulatory, and fiduciary risks.
