Understanding AI Hallucinations
What Are AI Hallucinations?
AI hallucinations occur when the AI generates content that sounds plausible but is factually incorrect, unsupported by source documents, or entirely invented. This happens because AI language models predict what text should come next based on patterns, not because they "know" facts.
Example of Hallucination:
Prompt: "The sky is..."
AI might predict: "blue" (sunny day), "grey" (overcast), or "cloudy" (rainy), whichever completion is statistically most likely
Reality: Only by checking the actual weather can you confirm which completion is accurate
Why Hallucinations Happen in Tender Writing
Prediction-Based: AI predicts plausible text, not factual truth
Context Gaps: AI may lack specific information about your organization
Overgeneralization: AI applies general patterns to specific situations
Confidence Without Knowledge: AI presents correct and incorrect outputs with the same confident tone
The Plan-Do-Check-Act (PDCA) Methodology
PDCA is a continuous improvement cycle that, when applied to AI-assisted writing, dramatically reduces hallucinations and improves output quality.
PLAN: Define What You Need
Objective: Provide clear, specific instructions with proper context
Best Practices:
Specify exactly what you need (section name, word count, requirements to address)
Provide relevant source documents
Define success criteria
Identify key facts that must be included
Specify your organization's unique approach or differentiators
Example PLAN Prompt:
"I need a 1,500-word Implementation Plan section addressing tender requirements on pages 12-15 of the attached document. The plan must include: (1) phased approach with timeline, (2) quality checkpoints, (3) risk mitigation, (4) communication strategy. Our unique approach is minimal disruption through off-hours implementation. Use our company profile for facts about our team and experience."
DO: Generate Content
Objective: Let AI create initial content based on your plan
Process:
Attach relevant source documents
Enable "Project" context for access to all project files
Select appropriate sub-agents
Send your well-planned prompt
Wait for AI to generate content
Important: At this stage, accept that the output may contain errors. The next phase will catch them.
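In a scripted workflow, the DO step boils down to a single call that pairs the planned prompt with the source documents. This is a rough sketch only: the `ask_ai` function is a hypothetical placeholder for whatever assistant or API you actually use, and attachment handling varies by tool.

```python
# Illustrative sketch of the DO step: send the planned prompt together with the sources.
# `ask_ai` is a hypothetical stand-in for whatever AI assistant or API you actually use.
from pathlib import Path

def ask_ai(prompt: str, attachments=None) -> str:
    """Placeholder: wire this to your own AI tool and return its text reply."""
    raise NotImplementedError("Replace with the call your AI tool provides.")

def do_generate(plan_prompt: str, source_paths: list) -> str:
    # Read the source documents so the model works from them, not from memory.
    sources = [Path(p).read_text(encoding="utf-8") for p in source_paths]
    return ask_ai(plan_prompt, attachments=sources)
```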
CHECK: Verify Against Sources
Objective: Systematically verify AI output for accuracy and completeness
This is the most critical step for preventing hallucinations!
Step 1: Request AI Self-Verification
After AI generates content, immediately request verification:
"Please review the Implementation Plan you just created and verify it against the tender requirements document and our company profile. Check for: (1) All tender requirements are addressed, (2) All facts about our organization are accurate, (3) No claims are made without supporting evidence, (4) No information has been invented or assumed, (5) Timeline and commitments are realistic. Provide a detailed verification report with any discrepancies found."
Why This Works:
AI can only check output once it's generated, not before. By explicitly requesting verification as a separate step, you force the AI to:
Re-read source documents
Compare its output against sources
Identify discrepancies
Flag unsupported claims
Catch invented information
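If you automate the workflow, the key detail is that verification goes out as a second, explicit request after generation, with the same sources attached. A minimal sketch, reusing the hypothetical `ask_ai` helper from the DO sketch:

```python
# Illustrative sketch of the CHECK step: a second, explicit verification request
# sent after generation, with the same sources attached.
VERIFY_TEMPLATE = (
    "Review the {section} you just created and verify it against the attached sources. "
    "Check that: (1) all tender requirements are addressed, (2) all facts about our "
    "organization are accurate, (3) no claims lack supporting evidence, (4) nothing has "
    "been invented or assumed, (5) timelines and commitments are realistic. "
    "Provide a detailed verification report listing any discrepancies."
)

def check_draft(section: str, draft: str, sources: list) -> str:
    prompt = VERIFY_TEMPLATE.format(section=section) + "\n\nDRAFT:\n" + draft
    return ask_ai(prompt, attachments=sources)  # hypothetical helper from the DO sketch
```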
Step 2: Review Verification Report
The AI provides a structured verification report showing:
✅ Requirements successfully addressed
⚠️ Areas needing clarification or additional detail
❌ Inaccuracies or unsupported claims
📋 Missing requirements
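If you track findings across several sections, these categories map naturally onto a small data structure. The sketch below is just one way to record them; it is not a format any tool emits:

```python
# Illustrative sketch: record verification findings under the four report categories.
from dataclasses import dataclass
from enum import Enum

class Finding(Enum):
    ADDRESSED = "requirement addressed"          # ✅
    NEEDS_DETAIL = "needs clarification"         # ⚠️
    UNSUPPORTED = "inaccurate or unsupported"    # ❌
    MISSING = "requirement missing"              # 📋

@dataclass
class ReportItem:
    category: Finding
    detail: str  # e.g. "Claims 20 years of experience; company profile shows 15"

def outstanding(items: list) -> list:
    """Everything that still needs an ACT correction."""
    return [i for i in items if i.category is not Finding.ADDRESSED]
```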
Step 3: Human Verification
Don't rely solely on AI self-checking. Perform your own verification:
Check key facts against source documents
Verify claims about your organization
Confirm technical details
Ensure commitments are realistic
Validate references and examples
ACT: Correct and Improve
Objective: Make corrections and refinements based on verification findings
Step 1: Request Corrections
Based on the verification report:
"Please update the Implementation Plan to: (1) Remove the claim about 24/7 support (we only offer business hours support), (2) Correct the team size from 15 to 12 people, (3) Add missing requirement about monthly reporting, (4) Replace the generic risk mitigation with specific strategies from our risk management framework document."
Step 2: Verify Corrections
After corrections, verify again:
"Please verify the updated Implementation Plan against sources one more time to ensure all corrections have been properly made."
Step 3: Iterate Until Perfect
Repeat the CHECK-ACT cycle, sketched below, until:
All requirements are addressed
All facts are verified
No unsupported claims remain
Content is accurate and complete
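Expressed as code, the CHECK-ACT cycle is a loop with an explicit exit condition: stop when the verification report comes back clean, or when you hit a sensible iteration cap and hand the draft to a human. A sketch using the hypothetical helpers from the earlier steps:

```python
# Illustrative sketch of the CHECK-ACT loop: verify, correct, and re-verify until clean.
def check_act(section: str, draft: str, sources: list, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        report = check_draft(section, draft, sources)      # CHECK
        if "no discrepancies" in report.lower():           # naive exit test for the sketch
            return draft
        draft = ask_ai(                                    # ACT: targeted corrections only
            "Update the draft to fix only the issues in this verification report, "
            "using the attached sources as the single source of truth:\n" + report,
            attachments=sources,
        )
    return draft  # a human still reviews the result; AI checking is not a substitute
```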
Practical Example: PDCA in Action
PLAN:
"Write a 500-word Executive Summary covering our understanding, approach, experience, and benefits. Use tender document requirements on page 3 and our company profile."
DO:
AI generates executive summary.
CHECK:
"Verify this executive summary against the tender requirements and company profile. Check all facts."
AI Verification Report:
✅ Understanding section addresses client needs
❌ Claims 20 years of experience (company profile shows 15 years)
⚠️ Mentions "award-winning service" without specifying which award
❌ States "100% success rate" (not supported by any document)
📋 Missing required section on environmental sustainability
ACT:
"Please correct: (1) Change to 15 years of experience, (2) Remove 'award-winning' or specify the actual award we won, (3) Remove '100% success rate' claim, (4) Add section on environmental sustainability based on our sustainability policy document."
CHECK Again:
"Verify the corrected executive summary."
Result: Accurate, verified content that won't embarrass you or get you disqualified.
Key Principles
AI cannot self-correct before generation: Content must be created first, then checked
Explicit verification requests are essential: Don't assume AI will check automatically
Multiple verification passes improve accuracy: Check, correct, check again
Human oversight is mandatory: AI verification helps but doesn't replace human judgment
Source documents are truth: Always verify against actual documents, not AI memory
Common Hallucination Types to Watch For
Invented Statistics: "We've completed 500+ projects" (check actual number)
Exaggerated Claims: "Industry-leading," "Best-in-class" (verify or remove)
Incorrect Dates: Project completion dates, company founding year
Wrong Team Sizes: Number of employees, team members
Unsupported Capabilities: Services you don't actually offer
Generic Examples: Case studies that sound good but aren't real
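Several of these patterns are literal enough to flag with a plain-text scan before human review. The sketch below is a rough heuristic only, and the phrase list is an example rather than an exhaustive set; it supplements, and never replaces, checking against the source documents.

```python
# Illustrative sketch: flag phrases that often signal hallucinated or unsupported claims.
import re

RISKY_PATTERNS = [
    r"\b\d{2,}\+? (?:projects|clients|years)\b",               # specific counts: verify against records
    r"\b100% (?:success|satisfaction)\b",                      # absolute claims
    r"\b(?:industry-leading|best-in-class|award-winning)\b",   # superlatives needing evidence
    r"\b24/7\b",                                               # service levels you may not offer
]

def flag_risky_claims(text: str) -> list:
    hits = []
    for pattern in RISKY_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, text, flags=re.IGNORECASE))
    return hits

# Example:
# flag_risky_claims("We deliver award-winning, 24/7 support with a 100% success rate")
# -> ['100% success', 'award-winning', '24/7']
```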
