Three Prompts to Make Gemini or Claude Train You Into a Better Interviewer

jobless
2026-02-04
11 min read

Three ready-to-use prompts (simulate, critique, pressure-test) that turn Gemini or Claude into a rigorous interview coach, plus a QA checklist to catch AI hallucinations.

You’re preparing for interviews, but traditional practice feels slow, unfocused, or out of date

Job hunting in 2026 looks different: hybrid roles, fast-moving skill stacks, and interview loops that expect polished behavioral stories plus on-the-spot problem solving. If you’re juggling resume edits, short-term gigs, and interview anxiety, you need practice that’s targeted, repeatable, and honest. AI assistants like Gemini and Claude can be your on-demand coach—if you prompt them the right way.

The upside—and the risk

Modern LLMs are phenomenal at role-play, feedback, and realistic interview follow-ups. Since late 2025, these models have improved their guided-learning flows and coaching capabilities, and hiring teams increasingly expect crisp behavioral answers backed by measurable outcomes. But beware of AI "slop": low-quality, made-up content that looks plausible. Merriam-Webster named "slop" its 2025 Word of the Year for a reason: AI can produce fluent but inaccurate responses if you don’t manage quality and verification.

"AI slop — digital content of low quality that is produced usually in quantity by means of artificial intelligence." — Merriam‑Webster, 2025

What you’ll get in this article

  • Three ready-to-use prompt templates to train Gemini or Claude to make you a better interviewer: simulate behavioral interviews, get live feedback on answers, and practice follow-up questions.
  • An iterative practice routine you can follow weekly.
  • A concrete QA checklist to catch AI hallucinations and keep feedback factual and useful.
  • Advanced strategies and 2026 trends to turn AI practice into interview offers.

Why behavioral interview practice matters in 2026

Behavioral interviews are still the gold standard for hiring across industries. Employers use them to probe how you act in real situations—team conflict, missed goals, cross-functional wins. What’s changed in 2026:

  • Faster feedback loops: Recruiters expect clear examples that fit the role’s impact metrics.
  • Data-based storytelling: Interviewers now ask for concrete KPIs, impact ranges, and stakeholder context more often.
  • Hybrid formats: Many interviews combine behavioral questions with short practical tasks; being concise and metric-driven matters.

How to use AI for interview practice (high level)

Follow a simple training loop: simulate → answer → critique → iterate. Use the three prompts below as modular tools in that loop. Keep a running document (Google Doc or Notes) of your refined STAR stories and the AI’s suggested edits so you can rehearse without repeating mistakes.

Prompt 1 — Simulate a behavioral interview (role-play)

Goal: Create realistic, role-specific behavioral questions and simulate an interviewer with adjustable difficulty and probing style.

Template (copy & paste)

You are an expert interview coach and hiring manager with 10+ years of experience interviewing for {JOB_TITLE} roles at {INDUSTRY} companies (size: {COMPANY_SIZE}).

1) Ask me 8 behavioral interview questions tailored to {JOB_TITLE} and {SENIORITY} level. Use the STAR framework (Situation, Task, Action, Result).
2) After each question, give me a short interviewer prompt to simulate pushback or follow-up (e.g., "Tell me more about the trade-offs" or "What metrics moved?").
3) After I answer, pause and wait for me to ask for feedback.

Settings: Tone: {FORMAL|CONVERSATIONAL}; Difficulty: {easy|moderate|challenging}; Limit follow-ups to 2 per question.

Start with: "We'll run question 1 of 8—press Enter to answer." 
  

How to use it

  • Replace placeholders: {JOB_TITLE}, {INDUSTRY}, {COMPANY_SIZE}, {SENIORITY} (e.g., "Product Manager" / "SaaS" / "200-2,000" / "Senior").
  • Use low temperature (0–0.3) if you want standardized, realistic interviewer wording; use 0.6–0.8 to add creative, curveball follow-ups (see the API sketch after this list if you run the prompts programmatically).
  • Record your spoken answers and paste transcripts back into the feedback prompt (Prompt 2).
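
If you move from the chat UI to the API, temperature is just a request parameter. Here is a minimal sketch using the Anthropic Python SDK; the model name is a placeholder (swap in whichever model you have access to), and the Gemini equivalent is the generation_config argument in the google-generativeai package:

# pip install anthropic
# Minimal sketch: run Prompt 1 through the API with an explicit temperature.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt_1 = """You are an expert interview coach and hiring manager with 10+ years of
experience interviewing for Product Manager roles at SaaS companies (size: 200-2,000).
Ask me 8 behavioral interview questions tailored to Product Manager at Senior level,
using the STAR framework. After each question, give a short follow-up prompt."""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    temperature=0.2,  # 0-0.3 for standardized wording; 0.6-0.8 for curveball follow-ups
    messages=[{"role": "user", "content": prompt_1}],
)
print(response.content[0].text)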

Prompt 2 — Critique my answer (detailed feedback & rewrite)

Goal: Get targeted, evidence-based feedback on a single answer, plus a rewrite that improves clarity, metrics, and STAR completeness.

Template (copy & paste)

You are an expert interview coach who scores answers on the STAR method. I will paste my answer below. Do not invent facts.

Task:
1) Break my answer into Situation, Task, Action, Result sections. Label each section.
2) Score each section 1–5 and give short justification.
3) Provide 3 concrete ways to improve this answer (word-level, metric-level, structure-level).
4) Rewrite the answer to a 60–90 second spoken response that highlights impact and metrics. Use simple, interview-ready language.
5) List 3 possible follow-up questions an interviewer could ask and how I should answer each (one-sentence bullets).

Answer to critique:
"{PASTE_YOUR_ANSWER_HERE}"

Constraints: If you cannot verify a metric I mention, flag it and say "VERIFY: [claim]". If unsure, ask clarifying questions rather than guessing.
  

How to use it

  • Paste your spoken transcript verbatim. The model will segment and score it.
  • Use this prompt iteratively: apply feedback, record a new version, and repeat until scores improve. A scripted version of this loop is sketched after this list.
  • Insist on the model flagging unverifiable claims—this is part of the QA checklist below.
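
If you would rather script that loop than copy-paste by hand, here is a minimal sketch. It assumes you append a machine-readable SCORES line to the critique prompt (a convention added here, not part of the template above); the model name is again a placeholder:

# pip install anthropic
# Minimal sketch of the critique loop: score one answer, print feedback, revise by hand.
import re
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CRITIQUE_TEMPLATE = """You are an expert interview coach who scores answers on the STAR
method. Do not invent facts. Score Situation, Task, Action, Result each 1-5 with a short
justification, then end with one machine-readable line exactly like:
SCORES: situation=4 task=3 action=3 result=2

Answer to critique:
{answer}"""

def critique(answer: str) -> tuple[str, float]:
    """Run one answer through the critique prompt; return (feedback, average STAR score)."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        temperature=0.1,  # low temperature for consistent scoring
        messages=[{"role": "user", "content": CRITIQUE_TEMPLATE.format(answer=answer)}],
    )
    text = response.content[0].text
    scores = [int(n) for n in re.findall(r"=\s*(\d)", text.split("SCORES:")[-1])]
    return text, (sum(scores) / len(scores)) if scores else 0.0

feedback, avg = critique(open("my_star_answer.txt").read())
print(f"Average STAR score: {avg:.1f}\n\n{feedback}")  # revise and rerun until this hits 4+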

Prompt 3 — Practice follow-ups and pressure testing

Goal: Anticipate the short, sharp follow-up questions hiring panels use to test depth, and rehearse rapid responses under pressure.

Template (copy & paste)

You are an aggressive but fair panel interviewer. I will give you a one-sentence claim from my answer (e.g., "Improved onboarding NPS by 18 points").

1) For that claim, generate 6 follow-up questions that probe depth, causality, and impact. Order them from easiest to hardest.
2) For each question, provide the key fact the interviewer is seeking.
3) For each question, give a one-sentence model answer and a one-line tip to improve delivery.

Claim: "{ONE_SENTENCE_CLAIM}"

Settings: Response speed: concise. Tone: firm but professional.
  

How to use it

  • Use this to pressure-test claims with metrics or causal language.
  • Practice answering the hardest follow-up aloud in a timed 20–30 second window to simulate panel pressure.
  • Make sure you can provide a source or verifiable context for each claim (see QA checklist).

A sample mini-run (example)

Quick walkthrough so you can see how the loop works in practice:

  1. Run Prompt 1 for a Senior Product Manager at a 500-person SaaS company; it asks a question about resolving a cross-functional launch miss.
  2. Answer aloud with a 90-second STAR story. Transcribe and paste into Prompt 2.
  3. Prompt 2 scores the answer: Situation 4/5, Task 3/5, Action 3/5, Result 2/5; suggests adding a measurable KPI and a clearer stakeholder list; rewrites the story tighter and more metric-driven.
  4. Use Prompt 3 on the rewritten claim "reduced time-to-value by 28%"—it generates six follow-ups including "How did you measure time-to-value?" and gives model answers you can memorize.

Practical practice routine (weekly plan)

Make this a habit. Consistency beats marathon sessions.

  • Session length: 30–45 minutes, 3–4 sessions per week if you have active interviews; 1–2 sessions weekly if early-stage.
  • Warm-up (5 min): Read one STAR story from your bank and refine wording.
  • Mock block (20–25 min): Use Prompt 1 for 3 questions, answer aloud.
  • Feedback block (10–15 min): Paste one answer into Prompt 2 and implement the top 2 suggestions immediately, recording the new version.
  • Pressure test (5 min): Use Prompt 3 on your highest-impact claim.

Advanced strategies for power users

  • Prompt chaining: Feed the output of Prompt 2 into Prompt 3 and iterate automatically; this reduces manual friction and keeps each practice sprint timeboxed.
  • Cross-model validation: Run the same prompts through both Gemini and Claude (and maybe a smaller model) to compare feedback. Diverging suggestions are a signal for human review; a minimal sketch follows this list.
  • Temperature & tokens: Use temperature 0–0.2 for scoring and rewrite prompts; use 0.5–0.8 when you want creative question variants.
  • Timeboxing: Set a 60–90 second cap on spoken answers in the prompt to simulate real interview constraints.
  • Human-in-the-loop: Share AI feedback with a peer or mentor weekly to avoid ingraining bad habits and to validate improvements.
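
A minimal cross-validation sketch, assuming the Anthropic SDK and Google's google-generativeai package (both model names are placeholders):

# pip install anthropic google-generativeai
# Minimal sketch: send the same filled-in prompt to Claude and Gemini, then compare by eye.
import os
import anthropic
import google.generativeai as genai

PROMPT = open("prompt_2_filled_in.txt").read()  # any template with placeholders filled in

claude_out = anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # placeholder
    max_tokens=1024,
    temperature=0.1,
    messages=[{"role": "user", "content": PROMPT}],
).content[0].text

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_out = genai.GenerativeModel("gemini-1.5-pro").generate_content(  # placeholder
    PROMPT,
    generation_config=genai.GenerationConfig(temperature=0.1),
).text

# Where the two critiques diverge is where a human reviewer should look first.
for name, text in (("Claude", claude_out), ("Gemini", gemini_out)):
    print(f"--- {name} ---\n{text}\n")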

QA Checklist to avoid hallucinations and AI slop

Before you act on AI feedback (especially if you plan to use it in an interview), run these checks. This list reflects best practices from content teams fighting AI slop and hiring panels demanding accuracy.

  1. Flag unverifiable claims: If the model restates a company fact or metric, require it to tag anything it did not derive from your input with "VERIFY: [claim]". A small script for scanning saved feedback for these tags follows this list.
  2. Ask for sources: For role-specific facts (e.g., "Product launched Q2 2024"), ask the model to cite a public source or say "I don't know." If it can’t cite, treat the claim as unverified.
  3. Cross-check with primary data: Compare any company facts with LinkedIn, the company website, or recent press releases. Don’t rely on AI as a fact database.
  4. Temperature control: Use low temperature for factual critiques; higher temperature only for creative practice questions.
  5. Human review: Every 5–10 AI iterations, ask a human mentor to listen to your refined answers to catch tone and cultural fit issues.
  6. Look for confident hallucinations: If the model offers precise-sounding but unexplained details (e.g., "launched to 7M users on day 1"), mark them as suspicious and verify.
  7. Limit 'invented' context: When running Prompt 1, state explicitly in the system message: "Do not invent company processes or metrics. If context is needed, ask me."
  8. Save changes and diffs: Keep original answers and AI rewrites. Compare before/after to ensure you’re gaining clarity, not just fluff.
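
Checks 1 and 6 are easy to partially automate. A short sketch that scans saved AI feedback for VERIFY tags and for precise-sounding numbers without one (the number pattern is a rough heuristic, not a hallucination detector):

import re

def audit(feedback: str) -> list[str]:
    """Flag VERIFY-tagged claims plus precise-sounding numbers the model didn't flag."""
    findings = []
    for line in feedback.splitlines():
        if "VERIFY:" in line:
            findings.append(f"unverified claim -> {line.strip()}")
        # Rough heuristic: percentages, user counts, and point changes deserve a source.
        elif re.search(r"\d+(\.\d+)?\s*(%|M users|million|points)", line):
            findings.append(f"check this number -> {line.strip()}")
    return findings

for item in audit(open("ai_feedback.txt").read()):
    print(item)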

Common pitfalls—and how to avoid them

  • Pitfall: Relying on AI-generated metrics. Fix: Only present metrics you can back up; ask AI to flag unverifiable numbers.
  • Pitfall: Creating overly polished but inauthentic answers. Fix: Maintain your voice—ask the AI to keep colloquialisms you normally use, or a 2-sentence ‘natural’ version for live interviews.
  • Pitfall: Overfitting to one model’s style. Fix: Cross-validate across models and humans to maintain adaptability.
  • Pitfall: Using high-temperature models for final rewrites. Fix: Use low temperature for final versions to reduce invented detail.

Mini case study (hypothetical, practical)

Jess is a teacher transitioning to instructional design. In October 2025 she used these prompts to rehearse for a UX writing job. After three weeks of structured AI practice—three 30-minute sessions weekly—she measured improvement:

  • Time to answer concisely (90s target) dropped from 120s to 78s.
  • STAR completeness score (per Prompt 2 rubric) improved from an average of 2.7 to 4.2 out of 5.
  • She identified and removed two unverifiable claims the AI had inserted by flagging them and verifying with LinkedIn posts.

Result: Jess reported feeling calmer and carried a bank of verified, metric-backed stories into her panel interview—and received an offer two weeks later. Her success wasn’t magic: it was disciplined iteration and quality control.

2026 trends to watch

  • LLM coaching features matured: Since late 2025, many assistants have added guided-learning flows and scoring rubrics; use them, but keep human oversight.
  • Companies expect measurable impact: You’ll be asked for metrics, so prepare verifiable numbers or bounded estimates (e.g., "~20–30% improvement").
  • AI-sensitivity in hiring: Interviewers may ask your prep process—be transparent about using AI to practice and emphasize human verification.
  • Micro-certifications and guided projects: Short project-based assessments are increasingly common; use AI to simulate the project brief and rehearse communicating trade-offs under time pressure.

Quick reference: Scoring rubric you can paste into the model

Ask the model to score answers using this rubric for consistency:

  • Situation (1–5): Clarity and context; does the listener understand scope and constraints?
  • Task (1–5): Defined objective; was it ambitious yet realistic?
  • Action (1–5): Specific steps and your role; shows ownership and trade-offs.
  • Result (1–5): Metrics or qualitative outcome; learning and next steps.
  • Delivery (1–5): Conciseness, storytelling, and composure under pressure.

Final checklist before live interviews

  1. Polish 6 core STAR stories covering leadership, conflict, failure, impact, cross-functional work, and a role-specific win.
  2. Run each through Prompt 2 and Prompt 3; verify every metric.
  3. Record and listen to your spoken answers once. If any sound unnatural, revise language while keeping the facts.
  4. Save the final 60–90 second version of each story as a one-paragraph script you can scan the morning of the interview.
  5. Be ready to explain how you used AI to practice, and emphasize verification; interviewers read that as a trust-builder.

Wrap-up — actionable takeaways

  • Use the three prompts as building blocks: simulate → critique → pressure-test.
  • Follow a short, repeatable routine 3–4 times a week when you’re in active recruiting.
  • Always run the QA Checklist to avoid hallucinations—ask the model to flag unverifiable claims and verify against primary sources.
  • Cross-validate feedback across models and a human reviewer before changing core story facts.

Call to action

Ready to try these prompts? Copy the three templates into your preferred AI assistant (Gemini or Claude), run one 30-minute session this week, and bookmark the QA checklist. If you want a printable checklist and ready-made document templates for your STAR stories, download our free Interview Practice Pack at jobless.cloud or join our weekly AI coaching session to get live feedback from a career coach.
