The AI Sales Agent MVP used real call data at every step: to pick the problem worth solving, to train the agent, and to measure whether it actually improved results.
We had access to over 100 million minutes of recorded sales calls from thousands of representatives. I worked closely with the data team to analyze what was actually happening in these conversations: where reps got stuck, where prospects disengaged, and what triggered common objections. Rather than relying solely on dashboard metrics, we focused on understanding the actual conversation dynamics.
We used Whisper for transcription and OpenAI models for a first pass at spotting patterns in the conversations; Claude Opus, while expensive, proved invaluable for the deeper analysis. Our goal wasn't perfection in the first iteration, but gathering enough signal to build a working prototype.
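For concreteness, here is a minimal sketch of what that first pass could look like, not our production pipeline. It assumes the open-source `whisper` package and the official `openai` Python SDK; the model names, prompt, file path, and the `analyze_call` helper are all illustrative.

```python
import json

import whisper              # openai-whisper package
from openai import OpenAI   # official OpenAI Python SDK

client = OpenAI()                 # reads OPENAI_API_KEY from the environment
asr = whisper.load_model("base")  # model size chosen for speed; illustrative

TAGGING_PROMPT = (
    "You review sales-call transcripts. Return a JSON object with three "
    "lists: 'stuck_points' (where the rep stalled), 'disengagement' (where "
    "the prospect checked out), and 'objection_triggers' (what prompted "
    "pushback). Quote the relevant transcript lines in each entry."
)

def analyze_call(audio_path: str) -> dict:
    """Transcribe one recorded call and tag its conversation dynamics."""
    transcript = asr.transcribe(audio_path)["text"]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the post doesn't name the model
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": TAGGING_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(analyze_call("calls/lead_0001.wav"))
```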
This analysis provided the foundation for our training approach: we translated the insights into a concrete set of agent tasks and developed the initial agent against them.
The first version accomplished its core objectives: answering calls, following structured scripts, and managing basic objections. Rather than attempting to solve every possible use case, we concentrated on high-volume, lower-risk scenarios like initial lead capture and first-touch interactions.
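The post doesn't describe the agent's internals, but the structured-script idea can be sketched as a small state machine: walk an ordered script, answer known objections with canned rebuttals, and stay on script otherwise. Every step, rebuttal, and trigger phrase below is hypothetical.

```python
from dataclasses import dataclass, field

# Toy sketch of the structured-script idea: walk an ordered script,
# answer known objections with canned rebuttals, otherwise advance.
# Every line, rebuttal, and trigger phrase here is hypothetical.

OBJECTION_REBUTTALS = {
    "not interested": "Totally fair. Mind if I ask what you're doing today instead?",
    "too expensive": "Understood. Most teams start with a small pilot first.",
    "send an email": "Happy to. What's the best address to use?",
}

SCRIPT = [
    "Hi, this is the assistant at Acme Lending. Do you have a quick minute?",
    "Are you currently funding deals, or still exploring options?",
    "Great. What's the best number and email for a follow-up?",
    "Thanks! A specialist will reach out within one business day.",
]

@dataclass
class ScriptedAgent:
    step: int = 0
    transcript: list = field(default_factory=list)

    def respond(self, caller_utterance: str) -> str:
        self.transcript.append(caller_utterance)
        lowered = caller_utterance.lower()
        # Known objection: give the rebuttal and hold the current step.
        for trigger, rebuttal in OBJECTION_REBUTTALS.items():
            if trigger in lowered:
                return rebuttal
        # Otherwise deliver the next script line, stopping at the last one.
        line = SCRIPT[min(self.step, len(SCRIPT) - 1)]
        self.step += 1
        return line
```

The point of keeping the script as plain data is that swapping in a vertical-specific flow becomes a content change, not a code change.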
Following a week of research, I collaborated with engineering to define clear requirements, removed features that didn't support our primary use case, and launched the MVP for internal testing within 3 weeks total.
We ran structured surveys across five verticals, sent demo calls, and watched who leaned in. Hard Money Lending stood out immediately. They had compliance needs, heavy phone usage, and lean teams that couldn't scale reps.
I helped design the survey, ran working sessions with sales and CS, and pushed for a vertical-first go-to-market instead of a generic pitch.
Hard Money Lending showed 3x higher engagement than any other vertical in our structured surveys.
The initial calls revealed significant challenges: awkward pauses, unnatural phrasing, and missed conversational cues. The team put in long hours to address these issues, driven by a shared commitment to shipping a working product. We monitored every call, identified failure points, and implemented fixes quickly.
I ran the feedback loop directly. No middle layers. PM to engineer to call review to fix.
Weekly iteration cycles based on real call analysis drove measurable improvements across all key metrics.
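As a flavor of what that weekly review loop can look like, here is a toy aggregation over tagged call reviews. The directory layout and the `failure_points` field are assumptions about the record schema, not the real one.

```python
import json
from collections import Counter
from pathlib import Path

# Illustrative shape of the weekly review: assume each reviewed call
# produced a JSON record tagging its failure points. The directory
# layout and field names are assumptions, not the actual schema.

def weekly_failure_report(review_dir: str, top_n: int = 5):
    counts = Counter()
    for path in Path(review_dir).glob("*.json"):
        record = json.loads(path.read_text())
        # e.g. "awkward_pause", "missed_cue", "unnatural_phrasing"
        counts.update(record.get("failure_points", []))
    return counts.most_common(top_n)

for issue, n in weekly_failure_report("reviews/week_03"):
    print(f"{n:4d}  {issue}")  # the top issues became next week's fix list
```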
Our initial approach attempted to serve five different verticals simultaneously, which proved ineffective. The scripts became generic, the model performance degraded, and the product lacked clear focus.
I advocated for a strategic pivot and worked to align our go-to-market team and leadership on this direction. We decided to concentrate exclusively on lending, redesigned our conversation flows based on actual lender calls, and trained the model with more targeted data. The improvement was immediately apparent.
We transformed repetitive, error-prone first-touch calls into consistent, reliable interactions. The analysis of over 100 million minutes of call data pointed us to one focused use case, and we built only the features that moved its key metrics.
We shipped the MVP in 3 weeks and established a weekly iteration cycle based on actual call performance. Call completion improved from 60% to 85%, objection handling success increased from 10% to 40%, and sentiment detection accuracy improved by 3.2x. The results came from focused execution, data-driven decisions, and rapid feedback loops.
Closed alpha with select customers.
One narrow use case. Real usage. Weekly improvement. If you have volume and a clear problem, I’ll help you scope, ship in ~3 weeks, and iterate from calls.
Talk About Your Use Case