ContxtIQ · Startup · 2025–Present

ContxtIQ · A live copilot for screening calls.

A desktop app that sits beside recruiters during live screens — grading responses, suggesting follow-ups, and answering questions in-context as the conversation unfolds.

Role

Chief Experience Officer · Full experience lead

Team

Founding team — PD, Eng, Sales

Timeline

2025–Present

Platform

macOS desktop · Live AI assist

From a broad idea to a sharp wedge

ContxtIQ started as a question — what would AI support look like if it actually lived inside a live scenario, not after it? The premise was simple: most AI tools summarize after the fact. We wanted to be present during the moment, where decisions are actually made.

That framing was too broad to ship, so we narrowed: script-based roles where there's a known structure to guide, grade, and extend. Recruiting became our first wedge — the team had felt the pain firsthand. I'd sat through final-round loops with designers who clearly didn't have the skills they'd claimed; the same story played out with engineers and sales hires. The signal from a screening call wasn't surviving into the interview.

The screening signal gap

Screens exist to catch misalignment before the loop. But recruiters are doing three jobs at once — listening, taking notes, and navigating a script — and the judgment they produce is mostly vibes by the end of the day.

  • Inconsistent grading: Two recruiters hear the same answer and score it differently. Rubrics exist on paper, not in the moment.
  • Missed follow-ups: The right probing question occurs to you 20 seconds too late — or the day after. Depth gets sacrificed to keep pace with the script.
  • Context stays in the recruiter's head: Hiring managers downstream inherit a short summary instead of evidence. Weak signal surfaces as a surprise in round 3.

A copilot that stays in the call with you

ContxtIQ is a desktop app that runs alongside the screening call. The recruiter picks a script template, starts a session, and works through questions one at a time. Behind the scenes, live transcription feeds a reasoning loop that grades answers, suggests follow-ups, and stays open for ad-hoc questions — all scoped to the candidate in front of you.

01

Live response grading

Each answer scored against the script rubric in real time. Reasoning shows in the right rail so the recruiter can see why, not just the number.

02

Generated follow-ups

After marking a question complete, ContxtIQ proposes the probing question the recruiter didn't have time to think up — grounded in what was just said.

03

In-context assistant

"Ask AI assistant" is open throughout — scoped to the candidate, the script, and the session so every answer arrives with the right context already loaded.
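The loop behind those three features can be sketched in a few dozen lines. This is a hypothetical shape, not the production code — the types and the keyword-based grader are illustrative stand-ins for the real transcription and LLM reasoning pipeline, and names like `gradeAnswer` and `suggestFollowUp` are assumptions.

```typescript
// Illustrative sketch of the live grading loop: score a transcribed
// answer against a rubric, surface the reasoning, and propose the
// probe the recruiter didn't have time to think up.

type Criterion = { label: string; keywords: string[] };
type Rubric = { question: string; criteria: Criterion[] };

type Grade = {
  score: number;       // 0..1 — fraction of criteria evidenced
  reasoning: string[]; // the "why", shown in the right rail
};

// Stand-in for the model call: checks which rubric criteria are
// evidenced in the candidate's transcribed answer so far.
function gradeAnswer(answer: string, rubric: Rubric): Grade {
  const text = answer.toLowerCase();
  const hits = rubric.criteria.filter(c =>
    c.keywords.some(k => text.includes(k.toLowerCase())));
  return {
    score: hits.length / rubric.criteria.length,
    reasoning: rubric.criteria.map(c =>
      `${c.label}: ${hits.includes(c) ? "evidenced" : "not heard yet"}`),
  };
}

// Stand-in follow-up generator: probe the weakest criterion,
// grounded in what the grading pass just found.
function suggestFollowUp(answer: string, rubric: Rubric): string {
  const grade = gradeAnswer(answer, rubric);
  const missing = rubric.criteria.find((_, i) =>
    grade.reasoning[i].endsWith("not heard yet"));
  return missing
    ? `Can you walk me through a concrete example of ${missing.label}?`
    : "Anything you'd do differently in hindsight?";
}

const rubric: Rubric = {
  question: "Tell me about a design system you shipped.",
  criteria: [
    { label: "ownership", keywords: ["i led", "i built", "i owned"] },
    { label: "impact", keywords: ["adoption", "teams", "metrics"] },
  ],
};

const answer = "I led the component library; adoption hit six teams.";
console.log(gradeAnswer(answer, rubric).score); // 1
console.log(suggestFollowUp(answer, rubric));
```

In the real product the grader and follow-up generator are LLM calls fed by live transcription; the point of the sketch is the shape — every answer produces a score, visible reasoning, and a candidate probe before the recruiter moves to the next question.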

Start a session, work the script, close the loop

The whole product bends around the screening hour. Open the app, pick a template, run the call — reasoning and follow-ups assemble themselves as you go.

01 · Landing — start a new session and choose a script template
02 · Session details — set up candidate context for the call
03 · Script — work through questions with live guidance
04 · Follow-ups — generated probes and reasoning in the right rail

Prototype to production, in one hand-off loop

I ran this end-to-end as CXO — not just producing design artifacts, but shipping all the way into the production repo. The build stayed tight because each stage handed the living prototype forward instead of a paper spec.

Step 01 · CXO

Figma Make prototype

I built the first prototype in Figma Make to prove the session → question → grading loop felt right before anyone committed code.

Step 02 · PD

First working stab

Product Development took the prototype and stood up a real desktop application — the first stab at the app that could actually run a session.

Step 03 · CXO

Claude Code on the real repo

Once Claude Code landed, I started building directly against the production repo — pushing PRs to refine interactions, copy, and the grading UI myself instead of handing off specs.

Step 04 · Team

Live design partner

We moved from internal builds to a design partner running real screens. Sessions are being logged live — the loop we sketched in Figma Make is now producing evidence.

30+

Live sessions logged

2

Recruiters live on product

1

Design-partner company

0→1

End-to-end experience lead

Why it matters

The ceiling for hiring quality isn't the rubric — it's the recruiter's bandwidth in the moment. ContxtIQ gives that bandwidth back.

Sharpen the wedge, then widen it

Near-term: deepen the recruiting experience with the design partner, tighten the grading reasoning, and turn session artifacts into something a hiring manager can actually pick up. Further out: the same live-copilot pattern extends to any script-based role — sales discovery, support triage, clinical intake — anywhere there's a structured conversation and a signal worth preserving.