Managing AI Slop: Why Due Diligence is Essential for Businesses Using Generative AI
As generative AI spreads across business functions, unchecked “AI slop” poses legal and reputational risks. Learn why due diligence is now essential.
Artificial intelligence (AI) has moved from experimentation to embedded infrastructure across corporate functions. From drafting reports and summarizing regulations to generating market analysis, policy recommendations, and research outputs, generative AI is now deeply integrated into professional and commercial workflows. Yet this acceleration has surfaced a new and increasingly material risk: “AI slop” – plausible-sounding but false, fabricated, or misleading outputs produced by generative models.
Unlike traditional data errors or human mistakes, AI slop is dangerous precisely because it is confident, fluent, and difficult to detect without deliberate verification. When left unchecked, it can contaminate decision-making, undermine regulatory compliance, and expose organizations to reputational and legal consequences. Recent incidents in which Deloitte submitted flawed AI-assisted reports to governments in Australia and Canada have illustrated that even sophisticated, highly resourced firms are not immune.
For boards, executives, and risk leaders, the message is clear: AI governance without rigorous due diligence is no longer sufficient. As AI becomes a routine business tool, verification, accountability, and oversight must become standard operating discipline.
What is “AI slop,” and why is it so risky?
“AI slop” refers to AI-generated content that appears coherent and authoritative but contains:
- Fabricated facts, citations, or legal references
- Misattributed sources or invented quotations
- Logical inconsistencies masked by fluent language
- Outdated or contextually inappropriate information
Large language models (LLMs) do not “know” facts in the human sense. They predict text based on probabilities, patterns, and training data. When prompted to provide citations, regulatory interpretations, or research summaries, they may generate references that look correct but do not exist.
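To make this mechanic concrete, the toy sketch below (plain Python, not a real model) mimics what a generative model does at its core: it samples a statistically plausible continuation, with no lookup against reality. All candidate continuations and weights here are invented for illustration – an authoritative-looking citation and an honest “no citation” answer are simply competing token sequences.

```python
import random

# Toy illustration only – not a real language model. The point: a generative
# model samples a statistically plausible continuation; it performs no lookup
# against reality. All candidates and weights below are invented.
continuations = {
    "Smith v Commissioner of Taxation [2019] FCA 112.": 0.45,  # looks authoritative, may not exist
    "the OECD's 2021 guidance on digital tax administration.": 0.35,
    "[no reliable citation available].": 0.20,  # honesty is just another token sequence
}

prompt = "The leading authority on this point is "
completion = random.choices(
    list(continuations), weights=list(continuations.values())
)[0]
print(prompt + completion)
```

Roughly half the time, this toy “model” asserts a confident, citation-shaped answer that nothing has verified – which is exactly the failure mode at issue.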
In low-risk contexts – internal brainstorming or informal drafting – this may be manageable. But in high-stakes environments such as government submissions, tax opinions, compliance reports, ESG disclosures, or workforce policy design, the consequences are far more severe.
Generative AI risks: Lessons from recent high-profile failures
The Deloitte cases in Australia and Canada serve as illustrative warnings rather than isolated anomalies.
In Australia, a government consultancy report was found to contain fabricated academic references and an invented court quotation. The report had passed internal review processes before submission, only to be challenged publicly. Deloitte ultimately refunded part of the fee, corrected the report, and disclosed AI usage.
In Canada, a publicly funded health workforce report included multiple false citations to nonexistent studies, some attributed to real academics who had never authored such work. Again, the issue appeared to stem from unverified AI-generated research references.
What makes these cases instructive is not simply the presence of errors, but where controls failed:
- AI-generated content entered final deliverables
- Source verification was insufficient or absent
- Review processes focused on narrative coherence, not factual integrity
- AI use was not adequately disclosed or governed
For businesses, the takeaway is stark: AI does not reduce the need for due diligence – it increases it.
Why traditional review processes are no longer enough
Many organizations assume that existing quality assurance (QA) frameworks can absorb AI outputs without modification. This assumption is flawed.
Traditional review processes are designed to catch:
- Calculation errors
- Inconsistencies in argumentation
- Non-compliance with known standards
They are not designed to detect fabricated reality. A citation that looks legitimate but does not exist will often pass review unless reviewers are explicitly instructed – and given time and tools – to verify sources independently.
AI slop introduces a new failure mode: content that is syntactically correct but epistemically false.
AI due diligence as an enterprise risk issue
Organizations should now treat AI misuse and AI hallucinations as part of enterprise risk management, on par with:
- Financial misstatements
- Regulatory non-compliance
- Cybersecurity breaches
- Data privacy violations
This means embedding AI governance into:
- Board oversight and audit committees
- Internal controls and compliance frameworks
- Vendor and professional liability management
- Client engagement and disclosure policies
Crucially, accountability must remain human. AI cannot be the responsible party.
AI due diligence: Five core principles for managing AI slop
1. Purpose-bound AI usage
AI tools should be deployed only within clearly defined use cases. Drafting assistance, language polishing, and summarization are materially different from fact generation, legal interpretation, or original research.
Organizations must draw a firm boundary between assistive drafting and authoritative analysis.
2. Mandatory human verification
Any AI-generated output that includes facts, figures, citations, or legal or regulatory references must undergo line-by-line human verification by a qualified professional. Verification is not editorial review – it is source validation.
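Parts of this validation can be pre-screened automatically. The minimal sketch below assumes the public Crossref REST API (api.crossref.org) and the third-party requests library, and flags DOIs that do not resolve. It is an aid to human verification, not a substitute: a resolving DOI can still be misrepresented by the AI-generated text that cites it.

```python
import requests  # third-party: pip install requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves in the public Crossref index.

    Pre-screening only: a missing DOI is a red flag, but a resolving DOI
    still needs human review – the cited work may not support the claim.
    """
    response = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Crossref etiquette: identify yourself with a contact address
        headers={"User-Agent": "citation-prescreen/0.1 (mailto:reviewer@example.com)"},
        timeout=10,
    )
    return response.status_code == 200

# Hypothetical reference list extracted from an AI-assisted draft
draft_dois = ["10.1000/182", "10.9999/invented.2024.001"]
for doi in draft_dois:
    verdict = "resolves" if doi_exists(doi) else "NOT FOUND – verify manually"
    print(f"{doi}: {verdict}")
```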
3. Traceability and auditability
AI usage should be documented:
- Which tool was used
- For which task
- By whom
- With what verification steps
This protects both the organization and individual professionals.
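As an illustration, such a record could be captured as a simple structured log entry covering the four fields above. The field names and values below are hypothetical and should be adapted to an organization’s own controls.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One illustrative audit-log entry per material use of AI.

    Field names are hypothetical; adapt them to your own control framework.
    """
    tool: str           # which tool/model was used
    task: str           # for which task
    user: str           # by whom
    verification: str   # with what verification steps
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIUsageRecord(
    tool="LLM drafting assistant (vendor X)",
    task="first draft of market-entry summary",
    user="a.analyst",
    verification="all citations checked against primary sources by senior reviewer",
)
print(record)
```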
4. Transparency with clients and stakeholders
Failure to disclose AI use – especially in advisory or research contexts – creates trust and liability risks. Clear disclosure norms should be established contractually and operationally.
5. Training and cultural alignment
Employees must understand not only how to use AI but also how it fails. AI literacy is now a compliance skill, not just a productivity one.
Industry-specific AI due diligence checklists
Professional services (tax, consulting, governance, HR)
Risk profile: High
Why: Outputs influence regulatory compliance, financial exposure, workforce rights, and government policy.
Due diligence checklist:
- Prohibit AI from generating final tax opinions, legal interpretations, or compliance advice without senior sign-off.
- Require independent verification of all AI-generated citations, statutes, case law, and regulations.
- Maintain a clear record of where AI-assisted drafting was used versus where professional judgment was applied.
- Disclose AI use in client deliverables where relevant.
- Update professional indemnity policies to account for AI-related risks.
- Include AI misuse scenarios in internal risk and audit reviews.
- Train consultants and advisors on AI hallucination risks specific to regulatory content.
Red flag scenarios:
- AI-generated references to laws, court cases, or guidance notes
- AI-assisted benchmarking or policy comparisons without source validation
Technology companies
Risk profile: Medium to High
Why: AI outputs often feed into product documentation, compliance representations, and investor communications.
Due diligence checklist:
- Separate AI-generated documentation from authoritative technical specifications.
- Validate AI outputs used in regulatory filings, security documentation, or compliance claims.
- Establish AI output testing protocols, similar to software QA.
- Implement internal review gates for AI-generated customer-facing content (a minimal automated gate is sketched after this list).
- Ensure marketing and investor materials do not rely on unverified AI analysis.
- Document training data limitations and model constraints where disclosures are required.
- Align AI governance with data protection and cybersecurity frameworks.
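One way such a review gate could start is as an automated scanner that flags citation-like fragments in AI-drafted text and holds publication until each is verified by a human. The sketch below is a hypothetical starting point: the regular expressions are illustrative, not exhaustive, and flagged fragments still require manual source verification.

```python
import re

# Hypothetical review gate: scan AI-drafted text for citation-like fragments
# and hold publication until a human verifies each one against its source.
# The patterns below are illustrative, not exhaustive.
CITATION_PATTERNS = [
    r"\b10\.\d{4,9}/\S+",               # DOIs
    r"\bv\.?\s+[A-Z][A-Za-z]+\b",       # "Smith v. Jones" style case names
    r"\bet al\.,?\s*\(?\d{4}\)?",       # academic "et al. (2021)" references
    r"\[\d{4}\]\s+[A-Z]{2,}\s+\d+",     # "[2019] FCA 112" style neutral citations
]

def flag_for_review(text: str) -> list[str]:
    """Return every citation-like fragment that needs manual verification."""
    hits: list[str] = []
    for pattern in CITATION_PATTERNS:
        hits.extend(re.findall(pattern, text))
    return hits

draft = "As held in Smith v. Jones and noted by Lee et al. (2021), this is settled law."
for fragment in flag_for_review(draft):
    print("Needs source verification:", fragment)
```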
Red flag scenarios:
- AI-generated claims about regulatory compliance
- AI-written technical or safety documentation
Research organizations and think tanks
Risk profile: Very High
Why: Credibility depends on accuracy, sourcing, and intellectual integrity.
Due diligence checklist:
- Ban AI from generating original citations or academic references.
- Require manual cross-checking of every source used in AI-assisted drafts.
- Clearly label AI-assisted sections in internal workflows.
- Maintain strict authorship and accountability standards.
- Train researchers to recognize hallucinated studies and fabricated data.
- Protect researcher reputations by preventing false attribution.
- Establish publication review committees for AI-assisted outputs.
Red flag scenarios:
- AI-generated literature reviews
- AI-produced comparative studies without verified datasets
AI risk management: Implications for boards and senior leadership
Boards and executive teams should be asking:
- Do we know where AI is used in our organization today?
- Are there controls distinguishing draft assistance from authoritative output?
- Who is accountable if AI-generated errors reach regulators or clients?
- Are our vendors and advisors applying equivalent due diligence standards?
Failure to ask these questions increases liability.
Conclusion: Due diligence is the price of AI credibility
AI is not inherently reckless, but it is inherently indifferent to truth. That indifference makes human governance indispensable. The recent Deloitte cases underscore a broader reality: AI failures are not technology failures – they are governance failures.
Organizations that embed verification, transparency, and accountability into AI use will not only avoid reputational damage; they will build trust with regulators, clients, and investors. Those that treat AI as a shortcut rather than a tool will eventually pay the price.
In the age of generative AI, due diligence is no longer optional – it is the cost of credibility.
About Us
India Briefing is one of five regional publications under the Asia Briefing brand. It is supported by Dezan Shira & Associates, a pan-Asia, multi-disciplinary professional services firm that assists foreign investors throughout Asia, including through offices in Delhi, Mumbai, and Bengaluru in India. Dezan Shira & Associates also maintains offices or has alliance partners assisting foreign investors in China, Hong Kong SAR, Vietnam, Indonesia, Singapore, Malaysia, Mongolia, Dubai (UAE), Japan, South Korea, Nepal, The Philippines, Sri Lanka, Thailand, Italy, Germany, Bangladesh, Australia, United States, and United Kingdom and Ireland.
For a complimentary subscription to India Briefing’s content products, please click here. For support with establishing a business in India or for assistance in analyzing and entering markets, please contact the firm at india@dezshira.com or visit our website at www.dezshira.com.