Operations · 10 min read

Vendor Due Diligence for AI Tools: A Practical Checklist

A comprehensive framework for evaluating AI vendors, from data security to model transparency, ensuring your firm remains compliant and protected.

Published on 2025-11-25


Buying an AI tool is not like buying a CRM. The risks are different, the questions are harder, and the SEC's expectations are higher. If you're evaluating AI vendors, you need a due diligence process that goes beyond the standard vendor questionnaire.

Why AI Vendor Due Diligence Is Different

Traditional software vendors provide tools that execute your instructions. AI vendors provide tools that make decisions or generate content on your behalf. That's a fundamentally different risk profile.

The Core Difference

Traditional Software:

  • You tell it what to do, it does it
  • Predictable, rule-based behavior
  • Errors are usually obvious and traceable

AI Software:

  • You give it a task, it figures out how to do it
  • Adaptive, probabilistic behavior
  • Errors can be subtle, context-dependent, and hard to detect

Regulatory Implication: When AI makes a mistake, you're still responsible. That means your due diligence needs to be more rigorous.

The SEC's Expectations

The SEC has made it clear that you cannot outsource your fiduciary duty. That includes due diligence on AI vendors.

In its July 2023 proposed rule on predictive data analytics and its subsequent AI-focused exam sweeps, the SEC has highlighted several areas where firms fall short:

  • Insufficient understanding of how AI tools work
  • Inadequate testing of AI outputs for accuracy and bias
  • Lack of ongoing monitoring of AI tool performance
  • Failure to assess conflicts of interest in AI vendor relationships

Bottom line: The SEC expects you to know what you're buying, how it works, and whether it's appropriate for your clients.

The AI Vendor Due Diligence Framework

Phase 1: Initial Screening

Before you even schedule a demo, answer these questions:

1. What Problem Does This Tool Solve?

Be specific. "Improve efficiency" is not an answer. "Reduce the time our advisors spend drafting client meeting summaries by 50%" is an answer.

Why this matters: If you can't articulate the problem, you can't evaluate whether the tool actually solves it.

2. Is AI Actually Necessary?

Some vendors slap "AI-powered" on basic automation to justify higher prices. Ask yourself: Could this be done with traditional software?

Red flag: The vendor can't explain why AI is required for the task.

3. What's the Regulatory Risk?

Different AI use cases carry different regulatory risks:

  • Low risk: AI helps draft internal meeting notes
  • Medium risk: AI analyzes client data to identify planning opportunities
  • High risk: AI generates investment recommendations or client-facing content

Rule of thumb: The closer AI gets to client-facing advice, the higher the due diligence bar.

Phase 2: Vendor Assessment

Once you've identified a serious candidate, dig deeper.

1. How Does the AI Work?

You don't need to understand the math, but you do need to understand the logic.

Questions to ask:

  • What data does the AI use to generate outputs?
  • How was the AI trained?
  • What assumptions or biases might be baked into the model?
  • Can the AI explain its reasoning, or is it a "black box"?

Red flag: The vendor can't (or won't) explain how the AI works in plain English.

2. What Are the Known Limitations?

Every AI tool has limitations. A good vendor will tell you what they are. A bad vendor will pretend they don't exist.

Questions to ask:

  • In what scenarios does the AI perform poorly?
  • What types of errors has the AI made in the past?
  • What safeguards are in place to catch errors?
  • How often does the AI require human correction?

Red flag: The vendor claims the AI is "always accurate" or "never makes mistakes."

3. How Is the AI Monitored and Updated?

AI models can degrade over time as market conditions change or data quality shifts. How does the vendor ensure ongoing accuracy?

Questions to ask:

  • How often is the AI model retrained or updated?
  • How do you monitor for model drift or performance degradation?
  • What happens if the AI starts producing inaccurate outputs?
  • How quickly can you roll back to a previous version if needed?

Red flag: The vendor has no formal process for monitoring AI performance.

4. What Data Does the AI Access?

AI tools often require access to sensitive client data. You need to understand what data the AI uses, how it's stored, and who can access it.

Questions to ask:

  • What client data does the AI need to function?
  • Where is that data stored (on-premises, cloud, vendor servers)?
  • Who at the vendor has access to client data?
  • Is client data used to train the AI model? (If yes, this is a major red flag.)
  • How is data encrypted in transit and at rest?

Red flag: The vendor is vague about data handling or can't provide clear answers.

5. What Are the Vendor's Security and Compliance Standards?

AI vendors should meet the same security standards as any other critical vendor.

Questions to ask:

  • Do you have SOC 2 Type II certification?
  • Have you completed a third-party security audit?
  • What is your incident response plan?
  • Do you have cyber insurance?
  • Are you willing to sign a Business Associate Agreement (BAA) if required?

Red flag: The vendor can't provide evidence of security certifications or audits.

6. What Happens If the Vendor Goes Out of Business?

AI startups fail. What's your contingency plan?

Questions to ask:

  • Can we export our data if we terminate the contract?
  • Do you offer source code escrow?
  • What is your financial stability? (Ask for funding details or financial statements if appropriate.)

Red flag: The vendor has no exit plan or data portability options.

Phase 3: Pilot Testing

Before you roll out an AI tool firm-wide, run a controlled pilot.

1. Define Success Metrics

What does success look like? Be specific.

Examples:

  • "Reduce time spent on task X by 30%"
  • "Achieve 95% accuracy on output Y compared to human baseline"
  • "Zero client complaints related to AI-generated content"

2. Test for Accuracy

Compare AI outputs to human-generated outputs. Look for:

  • Factual errors
  • Misleading or ambiguous language
  • Bias or inappropriate tone
  • Compliance red flags
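The checks above can be tallied quantitatively. As a minimal sketch (the function names, data shape, and review workflow are all illustrative, not part of any vendor's API), a pilot accuracy check might record a reviewer verdict for each AI output and compare the acceptance rate against the 95% target from the success-metrics example:

```python
# Hypothetical pilot-accuracy tally. Each entry records one human review of
# one AI-generated output; field names and verdicts are illustrative.

def accuracy_rate(reviews):
    """Fraction of AI outputs reviewers accepted without substantive edits."""
    if not reviews:
        return 0.0
    accepted = sum(1 for r in reviews if r["verdict"] == "accept")
    return accepted / len(reviews)

def meets_target(reviews, target=0.95):
    """True when the pilot hits the (illustrative) 95% accuracy target."""
    return accuracy_rate(reviews) >= target

# Example pilot log: four reviewed outputs, one sent back for revision
pilot = [
    {"output_id": 1, "verdict": "accept"},
    {"output_id": 2, "verdict": "accept"},
    {"output_id": 3, "verdict": "revise"},  # factual error caught by reviewer
    {"output_id": 4, "verdict": "accept"},
]

print(accuracy_rate(pilot))  # 0.75 -- below a 0.95 target, so keep piloting
print(meets_target(pilot))   # False
```

Even a simple tally like this forces the firm to define what "accept" means before the pilot starts, which is where most vague success metrics get exposed.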

3. Test for Edge Cases

AI tools often perform well on common scenarios but fail on edge cases. Deliberately test unusual or complex situations.

Example: If you're testing an AI tool that drafts client emails, try scenarios like:

  • A client with a unique tax situation
  • A sensitive topic (e.g., death of a spouse)
  • A client who prefers formal vs. casual communication
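Scenarios like these are easiest to test consistently if you script them. Below is a minimal sketch of an edge-case harness; `draft_email` is a stand-in for whatever tool you are piloting, and every name and scenario string is illustrative:

```python
# Hypothetical edge-case pilot harness: run the same drafting task across
# unusual scenarios and queue the outputs for human review.

EDGE_CASES = [
    {"name": "unique tax situation", "context": "client exercised ISOs and faces AMT"},
    {"name": "sensitive topic", "context": "recent death of a spouse"},
    {"name": "formal tone preference", "context": "client prefers formal communication"},
]

def draft_email(context):
    # Placeholder for the vendor tool under evaluation
    return f"[draft based on: {context}]"

def run_edge_cases(cases, drafter):
    """Return (scenario name, draft) pairs for human reviewers to score."""
    return [(case["name"], drafter(case["context"])) for case in cases]

for name, draft in run_edge_cases(EDGE_CASES, draft_email):
    print(name, "->", draft)
```

Keeping the scenario list in one place means every tool you pilot gets tested against the same hard cases, so results are comparable across vendors.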

4. Gather User Feedback

Ask your team:

  • Is the AI output useful?
  • How much editing is required?
  • Does it save time, or does it create more work?
  • Would you trust this tool with client-facing tasks?

Phase 4: Ongoing Monitoring

Due diligence doesn't end when you sign the contract. You need ongoing oversight.

1. Regular Performance Reviews

Schedule quarterly reviews to assess:

  • AI output accuracy
  • User satisfaction
  • Compliance incidents or near-misses
  • Vendor updates or changes to the AI model

2. Vendor Relationship Management

Maintain regular communication with the vendor. Ask about:

  • Planned updates or changes to the AI
  • New features or capabilities
  • Security incidents or vulnerabilities
  • Regulatory developments affecting the tool

3. Audit Trails

Maintain records of:

  • When the AI tool was used
  • What outputs it generated
  • Who reviewed the outputs
  • What changes were made before client delivery

This creates a defensible record if regulators or clients question your process.
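The four record-keeping items above map naturally onto a simple log schema. As a minimal sketch (the field names, file format, and example values are assumptions, not a regulatory standard), an append-only JSON-lines file is one low-effort way to keep that record:

```python
# Hypothetical audit-trail entry for AI tool usage; the fields mirror the
# record-keeping list above. Schema and example values are illustrative.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    tool: str            # which AI tool was used
    used_at: str         # when it was used (ISO 8601 timestamp)
    output_summary: str  # what it generated
    reviewed_by: str     # who reviewed the output
    edits_made: str      # what changed before client delivery

def log_record(record, path="ai_audit_log.jsonl"):
    """Append one record as a JSON line -- a simple, searchable audit trail."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

record = AIAuditRecord(
    tool="meeting-summary-assistant",
    used_at=datetime.now(timezone.utc).isoformat(),
    output_summary="Draft summary of 2025-11-20 client review meeting",
    reviewed_by="j.advisor",
    edits_made="Corrected RMD figure; softened tone in closing paragraph",
)
log_record(record)
```

Whatever storage you choose, the key design choice is that records are appended at the moment of use and never edited afterward, so the trail is credible if regulators ask for it.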

Red Flags That Should End the Conversation

Some vendor behaviors are disqualifying. If you encounter any of these, walk away:

  1. The vendor can't explain how the AI works in plain English
  2. The vendor claims the AI is "always right" or "never makes mistakes"
  3. The vendor refuses to provide security certifications or audit reports
  4. The vendor uses client data to train AI models without explicit consent
  5. The vendor has no process for monitoring AI performance or correcting errors
  6. The vendor is evasive about data handling or privacy practices
  7. The vendor pressures you to sign quickly without adequate due diligence time

The Bottom Line

AI vendor due diligence is not optional. The SEC expects you to understand what you're buying, how it works, and whether it's appropriate for your clients. Cutting corners on due diligence exposes you to regulatory risk, reputational risk, and operational risk.

Key Takeaway: Treat AI vendors like any other critical service provider—but with higher scrutiny. Ask hard questions, test rigorously, and don't sign a contract until you're confident the tool meets your standards.


Next Steps:

  1. Create an AI vendor due diligence checklist based on this framework
  2. Review your existing AI vendors against these criteria
  3. Identify gaps in your current due diligence process
  4. Establish ongoing monitoring protocols for AI tools

Need help building a vendor due diligence process for AI tools? Book a free 30-minute assessment call to discuss your firm's specific needs.

Ready to Build Your AI Strategy?

Get personalized guidance on implementing AI in your RIA firm with our AI Growth Roadmap Intensive.