AI Agent Disclosure: When and How Should Firms Tell Clients That AI Did the Work?

Keito Team
4 April 2026 · 10 min read

When should firms disclose AI agent usage to clients? This guide covers the legal requirements, the ethical obligations, and a practical disclosure framework organised by task type.

AI Agent Cost & Billing

AI agent disclosure to clients means formally informing clients when AI agents have contributed to deliverables, decisions, or research — covering what the agent did, what a human reviewed, and how quality was assured.

The EU AI Act now mandates disclosure for specific high-risk AI use cases. Yet 84% of firms have not redesigned their workflows around AI (Deloitte, 2025). Most have no disclosure policy. They use agents daily but have never told a single client.

This is no longer a grey area in many jurisdictions. It is a legal requirement. But the real question is not whether to disclose. It is how to frame AI involvement so clients see it as an advantage, not a shortcut.

Key Takeaway: AI disclosure is legally required in many cases and competitively smart in all cases. Build a policy now.

Why Does AI Disclosure Matter Now?

Three forces have converged to make AI disclosure urgent for every professional services firm.

Regulation Has Arrived

The EU AI Act, which began phased enforcement in 2025, requires disclosure for high-risk AI systems. These include AI used in legal decisions, financial assessments, employment screening, and credit scoring. Firms operating in or serving EU clients cannot ignore this.

In the UK, the Financial Conduct Authority (FCA) and the Solicitors Regulation Authority (SRA) have issued guidance expecting transparency about AI usage in client work. Neither has banned AI. Both expect firms to tell clients when it is used.

Clients Are Asking

Procurement questionnaires now routinely include sections on AI usage. Large corporate clients want to know whether their legal research, financial models, or consulting deliverables involved AI agents. Answering “we don’t use AI” when you do is a contractual risk.

Reputation Is at Stake

Firms caught using AI without disclosure face more than regulatory penalties. They face trust damage. A consultancy that delivers an AI-generated strategy document as if a partner wrote it is one whistleblower away from a reputation crisis.

The 84% of firms that have not redesigned jobs around AI (Deloitte) are the most exposed. They are using agents informally, without governance, without disclosure, and without audit trails.

What Does the Law Actually Require?

The legal requirements vary by jurisdiction, sector, and task type. Here is where things stand.

EU AI Act

The EU AI Act classifies AI systems by risk level. High-risk systems — those used for decisions in employment, finance, law, and immigration — require mandatory disclosure to affected individuals. Firms must:

  • Inform clients when AI contributes to decisions affecting them
  • Maintain documentation of AI system capabilities and limitations
  • Ensure human oversight of high-risk AI decisions
  • Keep records that regulators can audit

Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher.

United Kingdom

The UK has no equivalent overarching AI law yet. However, sector regulators have been active. The SRA expects solicitors to disclose AI usage that materially affects client work. The FCA expects firms to be transparent about AI use in financial advice and product recommendations.

Professional negligence claims may arise if AI-generated work contains errors and the client was not informed AI was involved.

United States

The US picture is a state-by-state patchwork. Several states have enacted AI disclosure requirements for specific sectors — particularly hiring, insurance, and financial services. Federal guidance from the SEC requires disclosure of material AI risks in financial filings.

The Key Distinction: AI-Generated vs AI-Assisted

This distinction matters enormously for disclosure obligations. Work that is entirely AI-generated — where an agent produced the deliverable without meaningful human modification — typically triggers disclosure requirements. Work that is AI-assisted — where an agent supported a human who made the final decisions — may not.

The boundary is fuzzy. A consultant who asks an AI agent to draft a report, reads it, changes three sentences, and sends it to the client falls in uncertain territory. Was that AI-generated or AI-assisted? Your disclosure policy needs to answer this question clearly.

For more on audit trails that support disclosure, see our guide on AI agent activity logs and audit trails.

When Should You Disclose? The Always/Sometimes/Rarely Matrix

Not every AI task requires client disclosure. The framework below categorises common professional services tasks by disclosure need.

Disclosure Level | Task Type | Examples | Rationale
Always | AI-generated deliverables | Reports, research memos, legal drafts produced primarily by agents | Client receives AI output as a deliverable
Always | AI-made decisions | Candidate screening, risk scoring, compliance flagging | Decisions directly affect outcomes
Always | Regulated outputs | Financial advice, legal opinions, medical assessments | Regulatory obligation
Sometimes | AI-assisted drafting | Agent drafts first version, human rewrites substantially | Depends on how much the human changed
Sometimes | AI-supported research | Agent gathers sources, human analyses and concludes | Depends on sector and client expectations
Rarely | Internal efficiency tools | Scheduling, email triage, internal summarisation | No client-facing output
Rarely | Administrative automation | Time tracking, invoicing, project management | Operational, not deliverable-related
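A matrix like this is most useful when it is encoded somewhere your tooling can consult it, so that tasks get flagged for disclosure automatically rather than by memory. The sketch below is illustrative: the task categories and levels come from the matrix, but the key names and the function are hypothetical, and a real policy engine would be richer.

```python
# Hypothetical sketch: the disclosure matrix as a lookup table.
# Task categories and levels mirror the matrix; key names are illustrative.
DISCLOSURE_MATRIX = {
    "ai_generated_deliverable": "always",
    "ai_made_decision": "always",
    "regulated_output": "always",
    "ai_assisted_drafting": "sometimes",
    "ai_supported_research": "sometimes",
    "internal_efficiency_tool": "rarely",
    "administrative_automation": "rarely",
}

def disclosure_level(task_type: str) -> str:
    """Return the disclosure level for a task type.

    Unclassified tasks default to "always" so the policy fails safe:
    better to over-disclose than to miss a required disclosure.
    """
    return DISCLOSURE_MATRIX.get(task_type, "always")
```

The fail-safe default matters: new agent use cases appear faster than policies are updated, and an unknown task should trigger disclosure review rather than slip through silently.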

Sector-Specific Considerations

Legal services: Most bar associations and regulatory bodies expect disclosure when AI contributes to legal research, document drafting, or case analysis. The risk of sanctions for undisclosed AI filings is well-documented.

Financial advisory: The FCA and SEC expect transparency about AI involvement in financial recommendations. Client-facing AI disclosure is becoming standard in investment management.

Management consulting: Less regulated, but client contracts increasingly include AI usage clauses. Large consultancies are building AI disclosure into engagement letters.

Marketing agencies: Generally lower disclosure requirements, but clients paying for creative work may object if they discover agents produced the content without disclosure.

Recruitment: AI screening of candidates triggers disclosure requirements under the EU AI Act and several US state laws. Candidates themselves have a right to know.

How Should You Frame AI Disclosure Positively?

Disclosure does not have to be defensive. The firms doing this well are positioning AI as a quality multiplier, not a cost-cutting shortcut.

Frame AI as Expanding Scope, Not Replacing People

What works: “We use AI research agents to review 200 sources for every matter — far more than manual review could cover. Every finding is validated by a senior analyst.”

What does not work: “We use AI to save time on research.”

The first positions AI as increasing quality. The second positions it as reducing effort. Clients care about quality.

Emphasise Human Oversight

Every disclosure should mention human review. Clients need to hear that a qualified professional checked the work. This is not just good communication — it is often a regulatory requirement.

Template language: “This deliverable was prepared using AI-assisted research and drafting tools, with full review and sign-off by [senior professional name]. Our AI agents are governed by [firm’s AI governance framework], and all outputs undergo quality assurance before delivery.”

Disclose in the Engagement Letter, Not After Delivery

The worst time to tell a client about AI involvement is after they have received the work. Build disclosure into your engagement letter or statement of work. This sets expectations upfront and avoids surprises.

Include:

  • Which tasks may involve AI agents
  • What human oversight is applied
  • How quality is assured
  • How the client can request human-only work if they prefer
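The four items above can live as a reusable clause template so every engagement letter carries consistent language. The sketch below is a hypothetical illustration: the clause text paraphrases this article's suggestions and the field names are invented, not standard contract language.

```python
# Hypothetical sketch: fill an engagement-letter disclosure clause from
# the four items listed above. Template wording and field names are
# illustrative only; real clause language needs legal review.
CLAUSE_TEMPLATE = (
    "AI-assisted work. The following tasks may involve AI agents: {tasks}. "
    "All agent output is reviewed by {reviewer}. Quality is assured via "
    "{qa_process}. You may request human-only work by {opt_out}."
)

def engagement_clause(tasks, reviewer, qa_process, opt_out):
    """Render the standard disclosure clause for one engagement."""
    return CLAUSE_TEMPLATE.format(
        tasks=", ".join(tasks),
        reviewer=reviewer,
        qa_process=qa_process,
        opt_out=opt_out,
    )

clause = engagement_clause(
    tasks=["legal research", "first-draft memos"],
    reviewer="a supervising partner",
    qa_process="our documented QA checklist",
    opt_out="written notice at any time",
)
```

Templating the clause keeps disclosure cheap for fee earners, which is the point: if producing the language takes effort, people skip it.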

Pricing Transparency Supports Disclosure

Clients accept AI involvement more readily when they see it reflected in pricing. If AI agents reduce your costs, passing some savings to the client makes disclosure feel like a benefit.

For guidance on transparent AI billing, see our transparent AI billing framework.

What Happens When Firms Get Caught Not Disclosing?

The risks of non-disclosure are real and growing.

Regulatory Sanctions

Legal professionals have been sanctioned for filing AI-generated court documents without disclosure. In several documented cases, AI-generated legal briefs contained fabricated case citations. The lack of disclosure compounded the original error — firms were penalised both for the quality failure and for the concealment.

Client Trust Damage

When a client discovers undisclosed AI usage — through a data breach, a whistleblower, or a quality failure — the trust damage goes beyond the specific engagement. It calls into question every prior deliverable. Was that strategy report AI-generated too? Was that due diligence done by a person or a bot?

Rebuilding trust after undisclosed AI usage is significantly harder than disclosing proactively.

Contractual Liability

Many client agreements now explicitly address AI usage. Some prohibit it without prior consent. Using AI agents in violation of contract terms exposes firms to breach-of-contract claims, even if the work quality was perfectly acceptable.

Insurance Gaps

Professional indemnity insurance policies are being updated to address AI usage. Some policies exclude or limit coverage for AI-generated work that was not disclosed to the client. A firm facing a negligence claim for AI-generated work may find its insurance does not cover the claim if disclosure was not made.

The Transparency Paradox

The paradox is this: firms that disclose AI usage proactively often gain a competitive advantage. Clients read transparency as a signal of sophistication and good governance. Firms that hide AI usage and get caught lose disproportionately more than they would have lost by disclosing.

For details on how billing transparency builds client confidence, see our guide on charging clients for AI agent time.

How Should You Build a Disclosure Policy?

A practical AI disclosure policy has five components.

1. Classify Your AI Use Cases

List every way your firm uses AI agents. Categorise each using the always/sometimes/rarely matrix above. Be honest — include the informal uses that individuals have adopted without official approval.

2. Define “AI-Generated” vs “AI-Assisted”

Set a clear threshold. One approach: if more than 50% of the deliverable’s substantive content came from an AI agent without significant human modification, it is AI-generated. Below that threshold, it is AI-assisted.

Document this definition. Train your team on it. Apply it consistently.
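Once documented, the threshold is simple enough to express in code, which also forces the definition to be unambiguous. The sketch below assumes the 50% rule described above; how a firm actually measures the AI-produced fraction (word count, section count, reviewer judgement) is a separate policy choice, and the function name is hypothetical.

```python
# Hypothetical sketch of the 50% threshold described above.
# "ai_fraction" is the share of substantive content produced by an agent
# without significant human modification; measuring it is a policy choice.
def classify_deliverable(ai_fraction: float) -> str:
    """Classify a deliverable per the documented threshold:
    more than 50% unmodified agent content means "AI-generated"."""
    if not 0.0 <= ai_fraction <= 1.0:
        raise ValueError("ai_fraction must be between 0 and 1")
    return "AI-generated" if ai_fraction > 0.5 else "AI-assisted"
```

Note the strict inequality: exactly 50% classifies as AI-assisted, because the rule says "more than 50%". Small details like this are exactly what the documented definition must pin down.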

3. Create Standard Disclosure Language

Draft template language for each disclosure level. Include versions for engagement letters, deliverable cover pages, and verbal briefings. Make it easy for professionals to disclose — if disclosure requires a 30-minute conversation with the client, people will skip it.

4. Maintain Audit Trails

Disclosure is only credible if you can prove what happened. Log every AI agent action: what task it performed, what inputs it received, what outputs it produced, and who reviewed the result.

Without audit trails, disclosure is an assertion. With them, it is evidence.
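The four fields named above (task, inputs, outputs, reviewer) map naturally onto a structured log record. This is a minimal sketch assuming an append-only JSON-lines log; the record fields and format are illustrative, not a standard, and a production system would add integrity controls such as hashing or write-once storage.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch of one audit-trail record capturing the four
# fields named above. Field names and the JSON-lines format are
# illustrative; a real system would add integrity controls.
@dataclass
class AgentAuditRecord:
    task: str             # what the agent was asked to do
    inputs_summary: str   # what it received (or a reference/hash)
    outputs_summary: str  # what it produced (or a reference/hash)
    reviewed_by: str      # the human who signed off on the result
    timestamp: str        # ISO 8601, UTC

def log_agent_action(record: AgentAuditRecord) -> str:
    """Serialise one record as a JSON line for an append-only log."""
    return json.dumps(asdict(record))

record = AgentAuditRecord(
    task="draft research memo",
    inputs_summary="client brief v3 plus source list",
    outputs_summary="memo draft, reference memo-2026-041",
    reviewed_by="senior.analyst@firm.example",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```

Structured records like this are what turn disclosure from assertion into evidence: when a client or regulator asks what the agent did and who reviewed it, the answer is a query, not a reconstruction.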

5. Review Quarterly

AI usage evolves fast. New agents get adopted. New use cases emerge. Review your disclosure policy quarterly to ensure it covers current practice. Update your client agreements as needed.

Frequently Asked Questions

Do firms legally have to disclose AI usage to clients?

In many cases, yes. The EU AI Act requires disclosure for high-risk AI systems affecting decisions in legal, financial, employment, and other regulated sectors. UK and US regulators have issued sector-specific guidance. Even where not legally mandated, professional body rules and client contracts may require it.

What does the EU AI Act require for AI disclosure?

The EU AI Act mandates that users of high-risk AI systems inform affected individuals about the AI’s involvement. Firms must document AI system capabilities and limitations, ensure human oversight of high-risk decisions, and maintain auditable records. Penalties for non-compliance reach €35 million or 7% of global turnover.

How should you tell clients that AI did the work?

Frame AI as a quality enhancement. Lead with what it adds — broader research, faster analysis, greater coverage — rather than what it saves. Emphasise human oversight. Include disclosure in engagement letters, not as an afterthought. Template: “This work was prepared using AI-assisted tools, with full review by [senior professional].”

What tasks always require AI disclosure?

Three categories always require disclosure: AI-generated deliverables sent to clients, AI-made decisions that affect outcomes (screening, scoring, flagging), and outputs in regulated domains (legal, financial, medical). Internal efficiency tools and administrative automation rarely require client disclosure.

What happens if a firm does not disclose AI usage?

Risks include regulatory sanctions, client trust damage, breach-of-contract claims, and insurance coverage gaps. Legal professionals have been sanctioned for undisclosed AI-generated court filings. Clients who discover undisclosed AI usage often question all prior work from the firm.

How does AI disclosure affect competitive positioning?

Firms that disclose proactively often gain competitive advantage. Transparency signals sophistication and good governance. Clients increasingly prefer firms with clear AI policies over firms that avoid the topic. The firms that hide AI usage and get caught lose disproportionately more than early disclosers.


Keito logs every AI agent action with full audit trails, making client disclosure straightforward and verifiable. Make AI disclosure simple.

Know exactly what your AI agents cost

Real-time cost tracking, client billing, and profitability analysis.