Metaview’s Model Context Protocol (MCP) server lets AI assistants like Claude connect directly to your interview data — transcripts, AI notes, structured fields, scorecards, and more.

The Metaview MCP is a standardised integration layer that gives AI tools secure, real-time access to your Metaview workspace. Once connected, you can search conversations, retrieve transcripts and notes, and read structured data — all through your AI tool of choice. Please see here for integration details.

All of this happens seamlessly, without manual exports or copy-pasting. You define the workflow; the MCP gets you the data you need so you can make faster, better-informed decisions.
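As a rough illustration of what "connecting" looks like in practice, here is a minimal sketch using the official MCP Python SDK. The server command, environment variable, and tool name (`metaview-mcp`, `METAVIEW_API_KEY`, `search_conversations`) are placeholders, not Metaview's actual interface; use the connection details from the integration docs linked above.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command and credentials for the Metaview MCP server --
# substitute the real values from Metaview's integration docs.
SERVER = StdioServerParameters(
    command="metaview-mcp",
    env={"METAVIEW_API_KEY": "your-api-key"},
)

async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server actually exposes rather than guessing.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Hypothetical tool name and arguments, for illustration only.
            result = await session.call_tool(
                "search_conversations",
                {"query": "candidate@example.com", "limit": 5},
            )
            print(result)

asyncio.run(main())
```

In day-to-day use you would not write this yourself; your AI tool (for example, Claude) handles the protocol for you once the server is configured.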
Hiring Outcome Analysis
Analyse the interview patterns of your best hires versus those who failed probation. By pulling transcripts across cohorts, you can surface which questions, topics, and interviewer behaviours correlate with long-term success — and which don’t.
What you can build: A recurring report that compares interview data against 90-day performance outcomes, helping calibrate your hiring rubric over time.
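As a minimal sketch of what such a report could do once per-interview signals (for example, from AI Columns) have been joined to 90-day outcomes, assuming a hypothetical CSV export with made-up column names:

```python
import pandas as pd

# Hypothetical export: one row per interview, with AI Column signals and a
# 90-day outcome flag joined on from your HRIS. Column names are illustrative.
df = pd.read_csv("interview_signals.csv")
# columns: candidate_email, topics_covered, interviewer, passed_probation

# Compare how often each topic appears for successful vs unsuccessful hires.
exploded = df.assign(topic=df["topics_covered"].str.split(";")).explode("topic")
coverage = (
    exploded.groupby(["topic", "passed_probation"])
    .size()
    .unstack(fill_value=0)
)
coverage["success_rate"] = coverage.get(True, 0) / coverage.sum(axis=1)
print(coverage.sort_values("success_rate", ascending=False).head(10))
```

Treat anything this surfaces as a hypothesis to investigate, not proof of causation; small cohorts make these correlations noisy.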
Candidate Reviews
Evaluate candidates consistently against your hiring rubrics, ideal candidate profiles, and role criteria. Pull structured insights from every conversation to assess each candidate against your own standards — not generic templates.
What you can build: A post-interview summary that shows you how well each conversation covered your rubric dimensions, so you can spot any gaps in evidence before the debrief.
Interview Process Optimisation
Use your accumulated interview data to build, refine, and standardise your interview structures. You can analyse which questions generate the most useful signal, identify stages with redundancy, and spot where candidates consistently drop off or disengage.
What you can build: A quarterly process review that benchmarks question effectiveness and recommends changes to your interview guide based on real conversation data.
Criteria Coverage
Ensure every role criterion is measured rigorously and consistently across all interviewers and panels. Use the MCP to review your conversations and identify where critical criteria are under-covered or missing entirely.
What you can build: A real-time coverage tracker that alerts hiring managers when some rubric dimensions may need more coverage before moving to offer.
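A minimal sketch of the core check such a tracker could run, assuming rubric coverage has already been extracted per conversation (for example, via an AI Column). The data shapes and the Slack webhook are hypothetical:

```python
import requests

# Your rubric dimensions, and hypothetical per-candidate coverage extracted
# from Metaview conversations (e.g. via an AI Column listing dimensions
# with clear evidence).
RUBRIC = {"technical_depth", "communication", "ownership", "collaboration"}
candidate_coverage = {
    "jane@example.com": {"technical_depth", "communication"},
}

SLACK_WEBHOOK = "https://hooks.slack.com/services/..."  # your webhook URL

for candidate, covered in candidate_coverage.items():
    gaps = RUBRIC - covered
    if gaps:
        # Nudge the hiring manager before the candidate moves to offer.
        requests.post(SLACK_WEBHOOK, json={
            "text": f"{candidate} is missing rubric evidence for: "
                    f"{', '.join(sorted(gaps))}. Consider covering these "
                    "before moving to offer."
        })
```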
Interviewer Performance & Coaching
Surface patterns in interviewer consistency, competency coverage, and communication style to inform personalised coaching. Inconsistencies can point to areas that may benefit from further calibration. This is one of the highest-value use cases and is detailed further in the section below.
What you can build: A daily automated coaching pack delivered to each interviewer via Slack, grounded in their actual conversations from that day.
Identifying Potential Bias
Surface patterns across interviewers that may be worth investigating further. By analysing language and questioning across your full transcript corpus, you can surface inconsistencies that are difficult to spot manually — giving your DEI or talent leadership team a starting point for deeper review.
What you can build: A monthly pattern review that flags areas of inconsistency across your interview data for your team to investigate.
Market Intelligence
Extract compensation expectations, working preferences, AI competency benchmarks, and hiring metrics directly from what candidates are actually saying in interviews. This turns your Metaview data into a real-time market intelligence source.
What you can build: A living compensation and preferences dashboard, updated automatically as new conversations are added to Metaview.
Recruiter and Candidate Q&A Analysis
Analyse what your recruiters are asking, how candidates are answering, and surface recurring themes across the funnel. This is valuable for both script optimisation and candidate experience improvements.
What you can build: A weekly digest for recruiting leaders showing the top questions candidates are asking, common objections, and how recruiters are responding to them.
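Once questions have been extracted from each transcript (for example, by asking Claude to list them per conversation), the ranking step of such a digest is simple. A sketch, with hypothetical input data:

```python
from collections import Counter

# Hypothetical input: questions already extracted from each conversation's
# transcript, one list per call.
questions_per_call = [
    ["What are your salary expectations?", "Why are you leaving your role?"],
    ["What are your salary expectations?", "When could you start?"],
]

# Rank questions by how often they recur across the funnel.
counts = Counter(q.strip().lower() for call in questions_per_call for q in call)
for question, n in counts.most_common(10):
    print(f"{n:>3}  {question}")
```

In practice you would also want light normalisation (near-duplicate questions phrased differently), which is a good job for the AI itself.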
The workflows in this guide are examples of what’s possible. Every organisation is different, and what works for one team may not be right for another. The goal is to empower your recruiting team to make better-informed decisions, not to replace the human judgement that sits at the heart of great hiring.
The MCP helps you surface and organise information; what you do with it is always your call.
Before implementing any AI-powered workflow on your interview data, we recommend reviewing our Best Practices when working with AI guide and checking with your legal, compliance, and data protection teams to ensure alignment with your internal policies and applicable regulations.
Example Workflow: Building a Personal Interviewing Coach
This section walks through how to set up an automated interview coaching workflow using Claude, the Metaview MCP, Google Calendar, and Slack. Once configured, it runs daily without any manual input.
The workflow follows five steps, triggered automatically each day:
Pull calendar data — Claude checks the interviewer’s Google Calendar and identifies conversations that were likely interviews.
Fetch Metaview transcripts — For each identified interview, Claude retrieves the full conversation, AI notes, and structured fields from Metaview.
Analyse against the rubric — Claude evaluates the interviewer’s performance against your hiring rubric and interview guide.
Generate a coaching pack — A personalised coaching report is produced, covering strengths, gaps, missed opportunities, and one concrete technique to practice next.
Deliver via Slack — The coaching pack is sent as a direct message to the interviewer on Slack.
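If you would rather script this orchestration yourself than run it inside a Claude Project, the five steps could be wired together roughly as below. This is a shape sketch only: the `Event` type and every stub function are hypothetical stand-ins for the real Google Calendar, Metaview MCP, Claude, and Slack integrations.

```python
from dataclasses import dataclass

@dataclass
class Event:
    title: str
    attendees: list[str]

# --- Hypothetical integration points: wire these to the real APIs. ---

def fetch_todays_events(email: str) -> list[Event]:
    raise NotImplementedError("Google Calendar API call goes here")

def find_conversation(event: Event):
    raise NotImplementedError("Metaview MCP search/fetch goes here")

def analyse_with_claude(convo, rubric: str, guide: str) -> str:
    raise NotImplementedError("Claude API call with the coaching prompt")

def send_slack_dm(email: str, text: str) -> None:
    raise NotImplementedError("Slack message delivery goes here")

INTERVIEW_HINTS = ("chat", "call", "loop", "screen", "debrief", "intro")

def looks_like_interview(event: Event) -> bool:
    # Cast a wide net, as the system prompt below recommends.
    return any(hint in event.title.lower() for hint in INTERVIEW_HINTS)

def run_daily_coaching(interviewer_email: str) -> None:
    packs = []
    for event in fetch_todays_events(interviewer_email):       # step 1
        if not looks_like_interview(event):
            continue
        convo = find_conversation(event)                       # step 2
        if convo is None:
            packs.append(f"No Metaview record for '{event.title}'.")
            continue
        packs.append(analyse_with_claude(convo,                # steps 3-4
                                         rubric="hiring_rubric.md",
                                         guide="interview_guide.md"))
    send_slack_dm(interviewer_email, "\n\n".join(packs))       # step 5
```

Inside a Claude Project, the system prompt below does the work of `analyse_with_claude` and most of this glue; the stubs simply mark where each integration plugs in.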
Below is a complete system prompt you can paste into a Claude Project to set up this workflow.
Step 1 — Pull today's calendar
Use the Google Calendar tool to fetch all events for today. Identify anything that was likely an interview or candidate conversation. Cast a wide net: include events with people's names, role titles, words like "chat", "call", "loop", "screen", "debrief", "intro", or anything with an external attendee. If uncertain, include it.

Step 2 — Find the conversations in Metaview
For each likely interview event, search Metaview conversations using the candidate's name, the role, or any other context from the calendar event. Pull the full conversation data — transcript, AI-generated notes, and any structured fields or scorecard data available.
If Metaview has no record of a conversation that was on the calendar, flag it clearly — I may not have recorded it, or it may not have happened.

Step 3 — Analyse my interviewing
For each conversation you find, analyse my performance as the interviewer. Evaluate against two lenses: general interviewing craft, and how well I followed the approach laid out in our interview guide and hiring rubric.

Alignment with our interview guide
- Did I follow the recommended structure, sequencing, and question types from the guide?
- Did I cover the competencies and areas the guide says this interview stage should assess?
- Where did I deviate from the guide, and was the deviation justified by the flow of the conversation or a missed opportunity?

Alignment with our hiring rubric
- Which rubric dimensions have strong coverage from the conversation, and which have little or none?
- Did I ask questions that map clearly to rubric criteria, or did I spend time on areas the rubric doesn't prioritise?
- Are there rubric dimensions where coverage was thin because I didn't probe deeply enough?

Question quality
- Did I ask open-ended questions that gave the candidate room to demonstrate depth?
- Did I fall into leading questions, yes/no questions, or questions that telegraphed the "right" answer?
- Did I follow up effectively when a candidate gave a surface-level response, or did I move on too quickly?

Listening and adaptiveness
- Did I pick up on signals the candidate dropped and explore them?
- Were there moments where the candidate said something interesting or concerning that I didn't follow up on?
- Did I stick rigidly to a script when the conversation warranted going off-piste?

Structure and pacing
- Did the conversation have a clear arc — opening, core exploration, candidate questions, close?
- Did I spend too long on any one area at the expense of others?
- Did the candidate get enough time to ask their own questions?

Candidate experience
- Did I set context at the start so the candidate knew what to expect?
- Was my tone warm and conversational, or did it feel transactional or rushed?
- Did I sell the opportunity appropriately given where the candidate is in the process?

Rubric coverage
- For each dimension in the hiring rubric, which ones came up clearly in the conversation, and which weren't covered?
- Where are the gaps? Which rubric dimensions am I consistently failing to cover?

Step 4 — Build a coaching pack
For each interview, produce a short coaching note with these sections:

[Candidate Name] — [Role] — [Time]
Overall read: One sentence on how the conversation went and which rubric dimensions were well covered versus missing.
Rubric coverage: A quick summary of which rubric dimensions I covered well and which have gaps. Don't list every dimension — focus on where I was strong and where I fell short.
What I did well: 1-2 specific things from the conversation, with examples. Be concrete — quote or paraphrase moments from the transcript. Where possible, tie these back to techniques from the interview guide.
Where I can improve: 1-3 specific, actionable coaching points. For each one, explain what I did, why it matters relative to the guide or rubric, and what I should do differently next time. Reference specific moments in the conversation.
Missed opportunities: Anything the candidate said that I could have explored further, especially where it would have added more coverage to a rubric dimension that wasn't well covered.
One thing to try next time: A single, concrete technique or habit to practise in my next interview — ideally drawn from the interview guide itself.

Step 5 — Spot patterns
After reviewing all of today's interviews, look across them for recurring themes in my interviewing style. Am I consistently strong or weak in any of the dimensions above? Are there rubric dimensions I routinely under-cover? Are there parts of the interview guide I consistently follow well or consistently ignore?
If you've coached me before in this project, compare against previous sessions and note whether I'm improving on things we've flagged before.
End with a short "patterns" section that calls out 1-2 cross-cutting observations.

Step 6 — Send me a direct message on Slack giving me my feedback.

Guidelines
The attached interview guide and hiring rubric are your source of truth. All coaching should reference these documents specifically. If I did something that contradicts the guide, call it out. If I followed it well, acknowledge it.
Be direct and specific. I want coaching, not praise. If something went well, say so briefly. Spend more time on what I can improve.
Always ground feedback in specific moments from the transcript. Vague observations like "you could ask better follow-up questions" are not useful. Tell me exactly where and how.
Keep each coaching note to under 300 words. The patterns section can be an additional 100 words.
If a conversation was very short or clearly not a real interview (e.g. a reschedule, a quick sync), skip it and note why.
Do not ask me clarifying questions before starting — make reasonable inferences and go. Flag any uncertainty inline.
If this is not the first time you've coached me in this project, reference previous feedback where relevant. I want to know if I'm getting better.
When you're done, end with a one-line summary: how many interviews you reviewed, my biggest strength today, and the single most important thing to work on next.
The MCP is powerful, but the quality of your output depends heavily on the quality of your prompt. Here are the most common pitfalls — and how to avoid them.
Vague prompts produce vague (or wrong) results
The AI can only work with what you give it. If your request is ambiguous, it will make inferences — and those inferences may be wrong.
Fix: Specify the candidate name, role, date range, and conversation type. This dramatically improves accuracy.
❌ Avoid: "Show me last week's conversations from the team"
✅ Better: "Show me conversations from the last 7 days. Identify each candidate by their email address. Only include conversations that have a transcript. Focus on conversations tagged as 'Job Interview'."

❌ Avoid: "Find questions that come up in interviews for candidates we hired"
✅ Better: "Review conversations with the following candidates: [insert email addresses of hired candidates]. For each conversation, extract the questions the interviewer asked. Then rank the questions by how frequently they appear across all these conversations."
Candidate “names” can appear as email addresses or phone numbers
When Metaview doesn’t have a proper name on record for a participant, it falls back to displaying their email address or even phone number as the display name. This happens often for calls booked outside your ATS (direct dials, ad-hoc video calls), where only an email address or phone number may be available.
Fix: Ask specifically for the candidate’s email address as the primary identifier. This gives you the most consistent reference across all conversation types.
❌ Avoid: "List candidates interviewed this week with their names"
✅ Better: "List candidates interviewed this week. Use email address as the primary identifier and include their name where available."
Pulling too many transcripts at once will burn through your tokens
Fetching full transcripts is expensive in terms of context. Asking Claude to “analyse all interviews from the past year” across a large dataset will hit limits, produce incomplete results, or run up unexpected costs.
Fix: Match your approach to the scale of your query — fetching full transcripts works well for a handful of conversations, but becomes inefficient at volume.
The right approach depends on scale:
1–5 conversations: Fetching full transcripts directly works well.
5–20 conversations: Use AI-generated summaries instead of full transcripts.
20+ conversations: Use AI Columns to extract structured data fields from each conversation, then aggregate — this is dramatically more efficient.
❌ Avoid: "Summarise all interviews from the past 6 months"
✅ Better: "Create an AI Column that extracts [specific signal] from each interview from the past 6 months"
Results may include unprocessed or ‘ghost’ sessions
Not every session in Metaview has a transcript. Sometimes the bot joins a call but no transcript is generated — for example, if the call was too short, ended before enough audio was captured, or hit a processing failure. These sessions can appear in search results and add noise to your analysis.
Fix: Explicitly filter to conversations that have a transcript. Include this instruction in your prompt: “Only include conversations that have a transcript. Skip anything that was not fully processed.”
Unclassified calls may be labelled as ‘Job Interview’
If a conversation’s type hasn’t been set in Metaview, the MCP defaults it to “Job Interview”. This means non-interview calls — portfolio meetings, exploratory chats, internal syncs — can be mislabelled and skew your results, especially if your team records a wide variety of conversation types.
Fix: When filtering by call type, be aware that unclassified calls will surface as “Job Interview”. If your workspace mixes conversation types, ask Claude to group results by call type and flag anything where the classification may be unreliable.
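Putting this fix and the previous one together, a post-processing step could drop transcript-less sessions and group the rest by call type, flagging the default label as potentially unreliable. The field names (`has_transcript`, `call_type`) are hypothetical:

```python
from collections import defaultdict

def triage(conversations: list[dict]) -> dict[str, list[dict]]:
    """Filter out unprocessed sessions and group the rest by call type.

    Field names are hypothetical - adapt to what your MCP queries return.
    """
    groups: dict[str, list[dict]] = defaultdict(list)
    for convo in conversations:
        if not convo.get("has_transcript"):
            continue  # skip 'ghost' sessions that were never fully processed
        call_type = convo.get("call_type", "Job Interview")
        # Unclassified calls surface as "Job Interview" by default, so treat
        # that label as potentially unreliable.
        if call_type == "Job Interview":
            convo["classification_unreliable"] = True
        groups[call_type].append(convo)
    return groups
```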