Interview Feedback Aggregator

Example prompt: "After each interview round, collect the feedback scores and notes from the panel's Google Sheet. Summarise the consensus, flag any big disagreements between interviewers, and email the hiring manager a recommendation with the key points."

How to automate interview feedback collection with GloriaMundo

The Problem

After an interview panel meets a candidate, each interviewer fills in their scores and notes — sometimes in a shared spreadsheet, sometimes in an email, sometimes not at all until someone chases them. The hiring manager then has to read through multiple pieces of unstructured feedback, mentally reconcile differing opinions, and make a decision. When panellists disagree sharply, it is hard to spot that from a quick scan of a spreadsheet. The process is slow, inconsistent, and occasionally biased — a confident interviewer's opinion can dominate simply because they wrote more.

How GloriaMundo Solves It

We build a workflow that runs after interview feedback is due. An integration step reads the feedback spreadsheet for the candidate, pulling each panellist's scores and written notes. An LLM step analyses the feedback as a whole: it summarises the panel's overall impression, highlights areas of agreement and disagreement, and flags any scoring outliers — for instance, one interviewer rating a candidate 2/5 whilst everyone else rates them 4/5. A code step calculates the average scores and statistical spread across evaluation criteria. The workflow then composes a structured summary email with a clear recommendation (strong hire, hire, borderline, or pass) and sends it to the hiring manager via Gmail. If there are significant scoring inconsistencies, a conditional step also sends a Slack message suggesting the panel discuss the candidate before a decision is made.
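The outlier check described above can be sketched in a few lines. This is a minimal illustration, not GloriaMundo's actual code step: the panel dictionary and the one-standard-deviation threshold are assumptions chosen to match the 2/5-versus-4/5 example.

```python
from statistics import mean, stdev

def flag_outliers(scores, threshold=1.0):
    """Flag interviewers whose score deviates from the panel mean
    by more than `threshold` standard deviations."""
    values = list(scores.values())
    if len(values) < 2:
        return []  # can't measure spread with a single score
    avg = mean(values)
    sd = stdev(values)
    if sd == 0:
        return []  # unanimous panel, nothing to flag
    return [name for name, s in scores.items() if abs(s - avg) > threshold * sd]

# Hypothetical panel: three 4/5 ratings and one 2/5
panel = {"Alice": 4, "Bob": 4, "Carol": 4, "Dan": 2}
print(flag_outliers(panel))  # → ['Dan']
```

A fixed score gap (say, 2 or more points from the mean) would also work; standard deviations simply adapt the definition of "outlier" to how spread out the panel already is.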

Example Workflow Steps

  1. Trigger (manual or scheduled): Fires after the feedback deadline for a candidate, or is run manually with the candidate's name.
  2. Step 1 (integration): Read all interviewer feedback rows for the candidate from the Interview Feedback Google Sheet — scores, notes, and interviewer names.
  3. Step 2 (code): Calculate average scores per criterion, overall average, and standard deviation to identify outliers.
  4. Step 3 (LLM): Analyse the qualitative feedback notes. Summarise the panel's consensus view, highlight key strengths and concerns, and flag any contradictions or potential bias signals.
  5. Step 4 (LLM): Compose a structured recommendation email with: overall score summary, qualitative highlights, areas of disagreement, and a suggested decision (strong hire / hire / borderline / pass).
  6. Step 5 (integration): Email the recommendation summary to the hiring manager via Gmail.
  7. Step 6 (conditional): If the standard deviation on any criterion exceeds a threshold, send a Slack message to the interview panel suggesting a calibration discussion.
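Steps 2 and 6 above can be sketched together: compute per-criterion averages and spread, then check each criterion's spread against a threshold. The feedback rows, criterion names, and the 1.2 threshold are hypothetical stand-ins, assuming a 1-5 scoring scale.

```python
from statistics import mean, stdev

# Hypothetical feedback rows, as Step 1 might return them from the
# Interview Feedback Google Sheet: one dict per interviewer.
rows = [
    {"interviewer": "Alice", "technical": 4, "communication": 5},
    {"interviewer": "Bob",   "technical": 4, "communication": 2},
    {"interviewer": "Carol", "technical": 5, "communication": 3},
]

DISAGREEMENT_THRESHOLD = 1.2  # tune to the panel's scoring scale

def summarise(rows):
    """Step 2: average and standard deviation per evaluation criterion."""
    criteria = [k for k in rows[0] if k != "interviewer"]
    stats = {}
    for c in criteria:
        scores = [r[c] for r in rows]
        stats[c] = {"mean": round(mean(scores), 2), "stdev": round(stdev(scores), 2)}
    return stats

def needs_calibration(stats, threshold=DISAGREEMENT_THRESHOLD):
    """Step 6: criteria whose spread exceeds the threshold trigger the Slack nudge."""
    return [c for c, s in stats.items() if s["stdev"] > threshold]

stats = summarise(rows)
print(stats)                    # per-criterion mean and spread
print(needs_calibration(stats))  # → ['communication']
```

Here the panel broadly agrees on technical ability but splits 5/2/3 on communication, so only that criterion would prompt a calibration discussion before a decision is made.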

Integrations Used

  • Google Sheets — stores structured interview feedback from each panellist
  • Gmail — delivers the recommendation summary to the hiring manager
  • Slack — alerts the panel when scoring inconsistencies need discussion

Who This Is For

Hiring managers, talent acquisition leads, and HR teams running structured interview processes with 3+ panellists per candidate, who want faster, more consistent hiring decisions without manually aggregating feedback.

Time & Cost Saved

Manually reading through 3-5 sets of interview notes, reconciling scores, and writing a summary typically takes 30-45 minutes per candidate. For a hiring round with 8-10 candidates, that is roughly 4-7.5 hours of work. This workflow produces a structured, consistent summary in minutes. It uses integration, code, LLM, and conditional steps, costing a few credits per candidate.