TCREI Prompting Methodology

Master Google's framework for effective AI prompting

Written by Eden Noelle
Updated over 4 months ago

What is TCREI?

TCREI is a prompting methodology developed by Google that provides a structured framework for crafting effective AI prompts. The acronym stands for: Task, Context, Reference, Evaluate, Iterate.

This methodology helps you get high-quality, accurate outputs from AI by providing complete information up front and systematically refining the results.

The Five Components of TCREI

T - Task: Define What You Want

Purpose: Clearly specify what you want the AI to do

Elements of a Good Task Definition:

  • Action Verb: Write, analyze, create, compare, summarize, extract

  • Output Type: Section, table, list, diagram, summary

  • Scope: Length, detail level, format

  • Audience: Who will read this?

  • Purpose: Why is this needed?

Poor Task: "Write something about our approach"

Good Task: "Write a 1,200-word methodology section explaining our phased implementation approach for a municipal government tender. The section should be formal, professional, and demonstrate our understanding of public sector requirements."

C - Context: Provide Background Information

Purpose: Give AI the situational awareness needed to generate appropriate content

Types of Context to Provide:

  • Project Context: What is this tender about? Who is the client?

  • Organizational Context: What are your company's strengths and differentiators?

  • Competitive Context: What is your competitive position?

  • Strategic Context: What is your win strategy?

  • Constraints: Word limits, format requirements, mandatory content

Example Context:
"This tender is for IT infrastructure services for a municipality with 50,000 residents. The client has had problems with their current provider's response times. Our win strategy is to emphasize our 4-hour response SLA and local presence. We have 15 years of experience in municipal IT but this would be our first contract in this specific region. The tender requires a formal, structured response with specific headings."

R - Reference: Point to Source Materials

Purpose: Direct AI to specific documents and information sources to ensure accuracy

How to Provide References:

  • Attach Documents: Upload relevant files before sending prompt

  • Cite Specific Sections: "See requirements on pages 12-15 of the tender document"

  • Reference Project Files: "Use our company profile for team information"

  • Enable Context Buttons: Activate "Project" context for access to all project files

  • Specify What to Use: "Base the timeline on our standard implementation framework"

Example Reference:
"Please base your response on: (1) Tender requirements document pages 8-12, (2) Our company profile for team and experience details, (3) Our standard SLA template for service level commitments, (4) The client's current challenges described in the background section. Do not invent any facts - only use information from these sources."

E - Evaluate: Assess the Output

Purpose: Systematically review AI output against your requirements and sources

Evaluation Checklist:

Completeness:

  • Are all required elements included?

  • Are all tender requirements addressed?

  • Is the word count appropriate?

  • Are all sections present?

Accuracy:

  • Are all facts correct?

  • Are claims supported by evidence?

  • Are references accurate?

  • Are dates and numbers correct?

Relevance:

  • Does content address the actual requirements?

  • Is it appropriate for the audience?

  • Does it support the win strategy?

  • Is it competitive and compelling?

Quality:

  • Is writing clear and professional?

  • Is structure logical?

  • Are examples specific and relevant?

  • Is tone appropriate?

Compliance:

  • Does it meet format requirements?

  • Are mandatory elements included?

  • Does it follow specified structure?

  • Are word limits respected?
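
If you also work with AI models through scripts or an API, the mechanical parts of this checklist (word counts, mandatory elements) can be pre-checked automatically before human review. The sketch below is illustrative only, not a TenderB feature; the phrase list and limits are assumptions you would supply yourself. Accuracy, relevance, and quality still need a human evaluator.

```python
def precheck(text, required_phrases, min_words, max_words):
    """Run mechanical completeness/compliance checks on a draft section.

    Only catches what a script can detect (length, mandatory elements);
    accuracy, relevance, and quality still require human evaluation.
    """
    words = len(text.split())
    issues = []
    if not (min_words <= words <= max_words):
        issues.append(f"word count {words} outside {min_words}-{max_words}")
    for phrase in required_phrases:
        if phrase.lower() not in text.lower():
            issues.append(f"missing required element: {phrase}")
    return issues

draft = "Our methodology covers monthly reporting and risk management."
print(precheck(draft, ["monthly reporting", "governance"], 5, 2000))
# → ['missing required element: governance']
```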

How to Evaluate in TenderB:

  1. Request AI Self-Evaluation: "Please evaluate the section you just created against the tender requirements and identify any gaps or issues."

  2. Review AI Evaluation Report: The AI provides a structured assessment

  3. Perform Human Evaluation: Review the content yourself using the checklist above

  4. Get Peer Review: Have a colleague review for accuracy and quality

I - Iterate: Refine and Improve

Purpose: Systematically improve the output based on evaluation findings

Iteration Process:

Step 1: Identify Specific Issues
Based on evaluation, list specific problems:

  • "Timeline section is too vague"

  • "Missing requirement about monthly reporting"

  • "Claim about 24/7 support is incorrect"

  • "Needs more specific examples"

Step 2: Request Targeted Improvements
Provide specific, actionable feedback:

"Please revise the methodology section to: (1) Make the timeline more specific with exact week numbers and milestones, (2) Add a section on monthly reporting requirements as specified on page 15 of the tender, (3) Change '24/7 support' to 'business hours support with emergency on-call', (4) Replace generic examples with specific case studies from our municipal projects in the company profile."

Step 3: Re-Evaluate
After revisions, evaluate again using the same checklist.

Step 4: Repeat Until Satisfied
Continue the Evaluate-Iterate cycle until output meets all requirements.
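
For readers who script their AI calls, the Evaluate-Iterate cycle above is a simple loop: generate, check for issues, feed the findings back as a targeted revision request, and stop when the evaluation comes back clean (with a cap on attempts). This is a minimal sketch; `generate` and `evaluate` stand in for whatever AI call and review step you use.

```python
def refine(generate, evaluate, max_rounds=5):
    """Evaluate-Iterate loop: regenerate with targeted feedback until
    the evaluation finds no issues or the round limit is reached."""
    feedback = None
    for round_no in range(1, max_rounds + 1):
        draft = generate(feedback)   # initial prompt, then revision requests
        issues = evaluate(draft)     # list of specific problems found
        if not issues:
            return draft, round_no
        feedback = "Please revise to fix: " + "; ".join(issues)
    return draft, max_rounds

# Toy example: evaluator is satisfied once "milestones" appears.
drafts = iter(["vague timeline", "timeline with milestones"])
final, rounds = refine(lambda fb: next(drafts),
                       lambda d: [] if "milestones" in d else ["timeline too vague"])
# final == "timeline with milestones", rounds == 2
```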

Typical Iteration Sequence:

  • Iteration 1: Generate initial content

  • Iteration 2: Fix factual errors and add missing requirements

  • Iteration 3: Improve specificity and add examples

  • Iteration 4: Polish tone, style, and formatting

  • Iteration 5: Final compliance check

Complete TCREI Example

Task:
"Write a 1,500-word Project Management Methodology section for a tender response. The section should explain our approach to managing IT infrastructure projects, demonstrate our understanding of public sector requirements, and address all evaluation criteria on page 18 of the tender document."

Context:
"This is a tender for IT infrastructure modernization for a regional government agency with 500 employees. They've had issues with previous projects running over budget and missing deadlines. Our win strategy is to emphasize our structured project management approach, transparent communication, and track record of on-time, on-budget delivery in the public sector. We use the PRINCE2 methodology. The evaluators are IT managers and procurement professionals."

Reference:
"Please base your response on: (1) Tender evaluation criteria on page 18, (2) Our PRINCE2 project management framework document, (3) Case studies from our company profile showing public sector projects, (4) The client's stated concerns about budget and timeline control in the tender background section. Ensure all claims are supported by these references."

Evaluate (after AI generates content):
"Please evaluate the Project Management Methodology section against: (1) All evaluation criteria on page 18 are addressed, (2) PRINCE2 methodology is accurately described, (3) All case study references are accurate, (4) Word count is between 1,400-1,600 words, (5) Budget and timeline control measures are prominent. Provide an evaluation report."

Iterate (based on evaluation findings):
"Please revise to: (1) Add more detail about budget tracking and reporting (evaluation criterion 3), (2) Include a specific example of how we recovered a project that was trending over budget, (3) Add a visual diagram showing our project governance structure, (4) Strengthen the section on risk management with specific techniques we use."
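
If you maintain prompts in code or templates outside the product, the up-front Task, Context, and Reference portions of an example like the one above can be assembled from named parts, which makes them easy to adapt across tenders. A minimal sketch; the function and field names are our own convention, not a TenderB format. Evaluate and Iterate happen in follow-up messages after generation.

```python
def tcrei_prompt(task, context, references):
    """Combine the up-front TCREI components into a single prompt string."""
    ref_lines = "\n".join(f"({i}) {r}" for i, r in enumerate(references, 1))
    return (
        f"Task: {task}\n\n"
        f"Context: {context}\n\n"
        f"Please base your response on:\n{ref_lines}\n"
        "Do not invent any facts - only use information from these sources."
    )

prompt = tcrei_prompt(
    task="Write a 1,500-word Project Management Methodology section...",
    context="Tender for IT infrastructure modernization for a regional agency...",
    references=["Tender evaluation criteria on page 18",
                "Our PRINCE2 project management framework document"],
)
```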

Testing and Refining Prompts

Using the 3-Dot Menu to Edit Agents:

  1. Test your prompt in a regular chat conversation

  2. Review the output quality

  3. If the prompt works well, click the 3-dot menu next to the message

  4. Select "Edit agent" or "Save as agent"

  5. Refine the prompt based on what worked and what didn't

  6. Save the improved prompt for reuse

  7. Test again with different inputs

  8. Continue refining until the prompt consistently produces quality output

Building a Prompt Library:

  • Save successful TCREI prompts for common tasks

  • Document what works for different tender types

  • Share effective prompts with your team

  • Continuously improve based on results
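
If you keep a copy of your library outside the product as well, one lightweight option is a small structured file per prompt. The record format and filename below are our own illustrative convention, not something TenderB prescribes.

```python
import json

# Illustrative record for one saved TCREI prompt (fields are our own choice).
entry = {
    "name": "municipal-it-methodology",
    "tender_type": "municipal IT infrastructure",
    "task": "Write a 1,200-word methodology section...",
    "context_notes": "Emphasize response SLA and local presence.",
    "references": ["company profile", "standard SLA template"],
    "notes": "Works well; iteration 2 usually needed for reporting detail.",
}

with open("prompt_library.json", "w") as f:
    json.dump([entry], f, indent=2)
```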

Benefits of TCREI

  • Consistency: Produces reliable, predictable results

  • Accuracy: Reduces hallucinations through clear references

  • Efficiency: Fewer iterations needed when prompt is well-structured

  • Quality: Systematic evaluation ensures high standards

  • Reusability: Good prompts can be saved and reused

  • Team Alignment: Everyone uses the same effective approach
