Unaligned #7: How AI is changing recruiting and contact centers

Transcription for the video titled "Unaligned #7: How AI is changing recruiting and contact centers".




Introduction

Intro (00:00)

I have some founders on with me today who are going to talk to me about AI in call centers and all sorts of enterprise kinds of things. Anyways, welcome, guys. I always start out with: who are you? So I'm Jake. I'm a software engineer from Dayton, Ohio. Adam and I met on Twitter a few months back and started working together with some of these new AI pieces. That's how we met, through DMs on Twitter, actually. And I'm now the co-founder and CTO, building most of the tech and starting to build out a team for Ready. Adam Stoker: And my name's Adam. Like Jake said, we started Ready because we found a huge gap in the market right now for how companies are training their new hires. My background is as an operator. Previously, I was working in the training space, but really going for alternative education. I was working with Gen Z students trying to find a different way to go about college. And I noticed that whether you gave them classes from Harvard or Yale, whatever it was, if it's just a static class online, they're usually going to click through it about as fast as they can to get to the end. And that's what a lot of companies have found with their training methods today using LMSs. So when Jake and I met, we were talking about how crazy it is that the way companies hire and train people hasn't really changed since the fifties. And that's why we founded Ready: to try to bring something new to the space, especially involving all the fun new stuff you can do with AI.


In-Depth Discussion On Simulation Training

Training simulations (01:36)

It's really crazy. Yeah. Tell me a little bit about what your system is for, who would buy it, and what you're learning by building this. Yeah. What we do is we create realistic training simulations based off of your real company data. So we hook into all of your different data sources with an AI coach that helps your existing hires perform better at their everyday job, and then, using what we learn from that coach, we transfer that back into a training sandbox where new hires can safely, without concern, work on the skills they're going to need to do their job and get instant, one-on-one, tailored feedback. And that's really something we've found across every organization. One of the weirder stats we've seen is that 82% of people believe that robots can support their career better than humans right now. That was from a 2020 study that Oracle did, so it's only gotten better since then. Where we are right now is that we're working with call centers and contact centers, because they have arguably the worst version of this problem of any of these organizations. Attrition is really high. Many of them have turnover over 100%, so in a given year their entire workforce changes over. That means there's a huge burden on the organization to train these skills. And for conversational skills, LLMs are in a great place today. Even if they never got any better, which of course they will, but if they just stayed as good as they are today, contact centers and call centers can be revolutionized by this technology. Oh, yeah. And so that's where we started. Yeah, that's what I'm really seeing. I've seen several companies building healthcare call center kinds of technology, where you call into a nurse's station and it talks to you. It's really mind-blowing, actually, how good the quality is. But tell me, a bunch of different companies have tried to take shots at training the workforce, right?
Strivr has a whole bunch of VR headsets that they were using to train employees. What kind of training are you talking about, and how does it work? What are you finding by building this new AI-based training? Yeah, a lot of the call centers, at least the enterprise customer we're working with right now, were very much on a paper method before. They would have literal scripts on a piece of paper that they would have new hires read through in a classroom and try to practice. So our first step was that we digitized those scripts and put them into a format where our software can understand where you're at in the script and have an LLM modify the prompts as you progress. So it can act like a virtual customer that an agent can talk to. And then after you're done, the LLM can come back and grade how you did answering objections and handling issues on the call. So we're trying to make it almost like a real-world scenario, and then also provide that one-on-one feedback that would just be impossible for one coach to do with a class of 10 or 15 people. And technically, how is this architected? Are you using ChatGPT?


Customizing Role-Playing for Employees (05:04)

Are you using your own models? Have you fine-tuned? Tell me a little bit about the technology underneath. Sure. So it's a few pieces. We use a lot of stuff from OpenAI, the APIs there. GPT-4 is the best right now at understanding complex things and being able to grade and provide feedback. It just doesn't hallucinate as much. It's very good. As far as responses, though, we want to be able to respond very quickly. So someone speaks into a microphone on a computer, that audio gets sent to our server, and we want to process the audio and transcribe it. We use Whisper, because it's great and it's multilingual. Then we can send it off; we use GPT-3.5 there, because it's very fast. And then we want to send back down an audio response. So that's one piece of the overall structure, and it sounds very similar to an automated call. But then we also have the side of embeddings. We want to know where you're at in the conversation, so we have a rich set of embeddings across a script to know: have you gotten to this stage of the call? And we don't force a student to click next or something to say, this is where I'm at in the call now. So it's a very dynamic way to say, you're already talking about this aspect of a sale or this problem in customer service, so you're probably this far in the script, and we can modify the prompting like that. Yeah. How do you get the AI to be aggressive with the employee?
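The stage-tracking idea Jake describes, embedding each script stage and matching the trainee's latest utterance against those embeddings rather than making anyone click "next", can be sketched roughly like this. This is a minimal illustration, not Ready's actual code: the bag-of-words `embed` function, the stage names, and the script text are all stand-ins (a real system would call a proper embedding model, e.g. OpenAI's embeddings API, and compare dense vectors).

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(count * b.get(word, 0) for word, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical script stages; in practice these would come from the
# digitized, annotated script for a specific client.
SCRIPT_STAGES = {
    "greeting": "hello thanks for calling how can i help you today",
    "discovery": "tell me more about the issue when did it start",
    "objection_handling": "i understand the price feels high let me explain the value",
    "closing": "is there anything else i can do before we wrap up",
}
STAGE_VECTORS = {name: embed(text) for name, text in SCRIPT_STAGES.items()}

def locate_stage(utterance):
    """Pick the script stage whose embedding best matches the utterance,
    so the prompt for the virtual customer can be adjusted accordingly."""
    scores = {name: cosine(embed(utterance), vec)
              for name, vec in STAGE_VECTORS.items()}
    return max(scores, key=scores.get)
```

With real embeddings the matching is robust to paraphrase; here the toy vectors only catch word overlap, which is enough to show the control flow.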


When persona meets training (06:42)

Like somebody who would call a call center and probably be irate when they're calling in, because something's not working and they're mad at the world. Yeah. That's a great question. We have a lot of fun with that piece. So we have these personalities, maybe 20 or so in total, and we have these personas that we match with a personality. It can be what your job is, where you live, your address, and we'll match all these things together and include that in a prompt. So we have Chatty Kathy, who will sit there and just talk on and on about her grandchildren. And then we have this cowboy who's talking about his ranch; he's talking about herding cows and stuff. And it's overly exaggerated, almost cartoon style. But the feedback we're getting from the students is that they love it, and they're having a lot more fun in something that would generally be a boring job, going into a call center. It's not very exciting. So those personalities, and just the dynamics that GPT can provide, really help to liven up the environment a lot. A lot of companies are trying to build personas, personalities, right? Elon Musk keeps bragging about the personality in Grok, his AI, and how it's different from the personality that's in ChatGPT. What are you learning about building personalities? Yes, you're using them to train people, but I bet you're learning what works and what doesn't work well with personalities. I bet other companies might be interested in your learnings there. Yeah. The core thing we've learned is OpenAI does great with it, being very dynamic. I know Elon has gone pretty far out in one direction with Grok, but it can take on personas very well. We've not really run into issues. That was, like, one night we were like, hey, we need to start making this a bit better, more engaging.
And it was a pretty simple implementation to get those up. I wouldn't say that was an issue for us at all. Okay. The level of connection you need for it to be realistic to a phone call at a contact center is minimal compared to if you were trying to create an AI persona that you're dating, or talking to for therapy, or any of those things. And that's really what we're trying to focus our use cases on: what does it work for today? Not just for the personality, but privacy, security, scalability, all those different things. It's when you can walk into the training room and see 15 people having 15 different conversations at the exact same time with almost no latency, that fast. That's just one step on our way: many of these contact centers have 10,000 agents spread around the globe, not even learning in one place. So how can we get that into everyone's hands in a way that meets enterprises where they're at? I do think the VR headsets you mentioned earlier are where we're heading, but today it's very simply: how does this work from any headset, anywhere? When I worked at Microsoft, I worked on the support line for a day and went through a little training. Obviously, the training you'd have to do to work at Microsoft is going to be different from the training you'd have to do to work at John Deere selling tractors, right? How did you guys fine-tune the system to add on that real specialized training for a specific company? Yeah. Right now we're very small, so a lot of it's manual and it doesn't scale. So we're starting here, and we have a lot of assumptions we're making about the progression of LLMs and our ability to automate all these processes later. But right now we've had a lot of success just manually setting up the embedding pieces for the phases of a script.
We'll manually take a script that they'll send to us in a doc file, copy-paste it into our system, annotate things like how the flow graph goes through the conversation, try to make sure everything works technically, and then do lots of testing. So we'll run through it. I've spent hours, sometimes staying up all night, running through simulations and trying to tweak it to be just right. But we have a lot of assumptions around those pieces. It's not that hard for an LLM to parse text right now, and it can also output things such as JSON. So if we give it a good understanding of the APIs we use, we think that with enough development we could easily take an LLM and convert plain-text scripts directly into some structured format. I bet a lot of people are asking, particularly people who run call centers: why can't the AI do the call center itself?
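To make the "plain-text scripts into some structured format" step concrete, here is one hedged sketch of what the target format and a sanity check over the LLM's output might look like. The JSON schema (stages with an id, a prompt, and allowed next stages) is my assumption, not Ready's actual format, and `RAW_LLM_OUTPUT` stands in for what the model would return when asked to convert a script.

```python
import json

# Stand-in for what an LLM might emit when asked to convert a paper
# script into a flow graph (hypothetical schema, for illustration only).
RAW_LLM_OUTPUT = """
{
  "stages": [
    {"id": "greeting",  "prompt": "Greet the caller warmly.",   "next": ["discovery"]},
    {"id": "discovery", "prompt": "Ask what the problem is.",   "next": ["closing"]},
    {"id": "closing",   "prompt": "Wrap up and thank them.",    "next": []}
  ]
}
"""

def parse_script(raw):
    """Parse an LLM-emitted script flow graph and check that every
    transition points at a stage that actually exists. Model output
    should never be trusted blindly, so validate before using it."""
    data = json.loads(raw)
    ids = {stage["id"] for stage in data["stages"]}
    for stage in data["stages"]:
        for nxt in stage["next"]:
            if nxt not in ids:
                raise ValueError(f"stage {stage['id']!r} points at unknown stage {nxt!r}")
    return data["stages"]
```

The validation step matters because an LLM can hallucinate a transition to a stage it never defined; failing loudly here is cheaper than a broken simulation later.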


Why can't AI do the calling itself? (11:41)

Why not get rid of the human? Sounds like you still feel there's a role for the human in this whole thing. Yeah, definitely. Adam? And I say that because the people who run the call centers have actually been exposed to AI tech longer than many other industries. Some of this stuff we're leveraging has been around within call centers for two or three years, and their response has actually been: we think that for the next five to ten years, you still very much need people. So within call centers and contact centers, I think they're all looking for how to turbocharge their agents, and they understand that the agent's role is certainly going to change and the most repetitive tasks are going to be automated. But they also have inside access into how many different types of data they're processing on a daily basis, and how difficult it is to apply any generalized model across that. And doing it in a safe and compliant way is really the biggest question we get from our clients. You had mentioned there are some healthcare auto-agents out there. Last week, one of them had a massive data breach, and that comes up in half of our calls now, right? So it's: how can you do this in a way that is not just going to supersize our output, but also in a way that keeps everyone's data secure? Yeah, that's real important. Back to the humans. In the old way of training, you'd go into a classroom and you'd have somebody with a slide deck at the front or something like that. The new way of training is to do it just in time, maybe even while you're doing your job, right? While you're on the call, maybe there's an assistant that's helping you keep track of where you are in the script, what you're supposed to be saying next, and even how you did, right?


Training agents in the call center (13:35)

Avi Schiffmann is making a little AI pendant. He wears it to business meetings, and while he's in the meeting, it's telling him: oh, you should have treated Joe better, because you were rude to Joe and he's not going to fund your company, right? It'll be talking like that to you in your ear. So are you building that same kind of system for the call center employee, where maybe on the screen it's listening to them answer questions and helping them out? You pretty much read exactly where we're heading, not just in the future, but pretty much tomorrow. Next month, we're rolling out our first annual contract with a call center that's going to be putting us onto 1.4 million minutes a month, where we are live-assisting their agents in how to resolve conflicts and create more sales as it happens. So right down to the second when someone needs that assist, that extra nudge of training, we're going to be there with these custom embedding models that Jake is whipping up, so that we can, without them even having to ask the question, supply a little bit of coaching.


Organized data and classroom (14:49)

Now, there's also a chatbot component where, if the agents themselves want to ask a question, they no longer need to raise their hand and try to find a trainer. You just type it in and you're talking to all of your data across the organization. So those things, live on the call, are amazing. Then there's what we can do post-call in terms of CEO-level insights: how is my contact and call center operating? What are the main customer concerns? Am I properly addressing them? That's all stuff we are doing, and other people are doing as well. But where we're trying to really set ourselves apart is: how do you take that data and then reflect it back into what happens in the classroom? Because a lot of people are just leaving it at, okay, if I just help them on the call, that's it. But no, there is some level of what happens before they get on those calls, where if you can train it before the first live call, you're going to see better uptake, better ramp time, and less wasted money on your ROI. I bet you have a favorite kind of company: when you walk in and they have all their data, all their Q&A, in nice text files, all organized for you, and fine-tuning based on their data set goes real smooth. But then there are other companies; you already said some of them are still on paper, right? So what does a company have to do to get ready for assisting their call center? Yeah, we're trying to make it so they don't have to do anything. We want to be able to ingest it all. Thankfully, all the ones who do it on paper also have a docx file that they just print off, so they have some digitized format. As a developer, I'm not a big fan of parsing docx files, but it's doable.
And so we're trying to build up, like many companies now, a way to ingest a lot of different data sources, whether it's on a SharePoint drive or in a Google Workspace, wherever you store your company information and documentation. So we're trying to build a pipeline that can parse and properly chunk all of those to get a really good embedding piece. Because if you want to give an LLM the right context, here's some information about our company, you first have to have the embedding model choose where that information is coming from, and that breaks down if the chunks are not done in a good way. One example is the hierarchy of information. If you have a header in a document, and then you have a bulleted list of facts under it, and maybe the header is "things you should not say" followed by a bunch of things not to say, then when you chunk it, you absolutely have to keep that header tied to all of the bullet points. Because if you don't, your embedding model is all over the place. So properly parsing all those pieces and chunking them is, I think, the big piece there.
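Jake's point about keeping a header attached to its bullet points when chunking can be sketched like this. It's a toy chunker over plain text, not their ingestion pipeline; a real one would handle docx and HTML structure, token limits, and overlap. The "things you should not say" example mirrors the one in the conversation.

```python
def chunk_with_headers(doc):
    """Split a plain-text doc into chunks for embedding, prefixing each
    bullet group with its most recent header so the context ('things you
    should not say') travels with every fact under it."""
    chunks, header, bullets = [], None, []
    for line in doc.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("- "):
            bullets.append(line)
        else:
            # A new header: flush the previous header + bullets as one chunk.
            if bullets:
                chunks.append((header or "") + "\n" + "\n".join(bullets))
                bullets = []
            header = line
    if bullets:
        chunks.append((header or "") + "\n" + "\n".join(bullets))
    return chunks
```

Without the header, a chunk like "- Never promise a refund" embeds as a plain instruction; with it, the retrieval step can tell it's a prohibition.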


Preparing for this (17:47)

So the onboarding is still a little bit manual for us. Yeah. For other companies that are trying to prepare for this future, the best advice, and I know you talk to people who run all different types of businesses, so you'll probably agree: you've got to start playing around with it. Neither Jake nor I have a data science or data analysis background, but the more time you spend prompting, engineering those prompts, figuring out what works and what doesn't, and where in the prompt the data needs to be, the more you're going to understand: oh, this is how I need to organize my data. It's a new world. There are, what, 500 companies building AIs for enterprise uses, for different kinds of things, from lawyers to what you're doing. How are you finding customers? Because everybody's in that, right? How do you find more customers? It's the way every bootstrapped founder has gone out and found customers before, which is you pound the pavement and you try to talk to people who have this problem. The interesting part for us is we find a lot of customers who have this problem across a lot of different industries. And for the earliest people who are raising their hands, like Jake mentioned, we are relying on assumptions that in one or two years this will work for every industry. We do think the technology is going to get there to allow us to help the people coming to us who work in insurance, or who have developers themselves that they're trying to bring in. Right now, because of that implementation piece you mentioned, we are very much focused on contact centers and call centers so that we can get some repeatable customers. So for us, it's trying to find as many as possible, and then being helped by the word of mouth of the initial customers we're working with.
They're seeing these crazy results overnight that it's very easy once you do a good job with one to go out and find the next two or three.


How the company was started (19:45)

Yeah. Tell me a little bit about how the company was started and funded. How did you get going? Yeah, we're pretty much essentially bootstrapped. Jake and I are the two co-founders, and we have some other advisors and part-time contractors, but it's been us working on this for about six months after we met on X. And it's a great testament to the algorithm that something Jake posted about an interest he had in an education project came across my feed. I wanted to continue working in the space, but also to push out of working with students directly, which is a very difficult place to start a business, and move into B2B, or even larger, up to the enterprise. We sat for a while and tried to talk with different enterprises that were having trouble training their new hires. We knew the general field, and my original idea was very much oriented around LMSs, learning management systems, which, when you think about it, are really just a digitized binder. It's not revolutionary compared to the 1950s: you walk in, here's your binder to learn. Now it's just, I can click through it online. And to Jake's credit, he was the first one who kept saying: why are we talking about this as an LMS? It's so outdated. What would be the next iteration of something like this? And then, when we actually saw it used in a company, we saw that no one was clicking on the LMS pieces. It was just sheer luck that we got to work with someone where we could actually go and watch users in the room use the software. I'm not sure a lot of software companies have that benefit. But we got to just sit and watch them click away from anything that was on the LMS and go back to the simulator, or even jump in after hours or over the weekend. We had a few people on Thanksgiving; I'm not sure if they were showing it off to their family or just practicing.
But when you start seeing users have that type of response, we were like, okay, let's lean into this. We might be onto something. That's cool. For the last question: what are you learning about AI by building this company? Where does AI fall apart? What kinds of things are you hoping OpenAI or some other model builder puts into their AI that would make your life better? Yeah.


Putting all of these together (22:09)

Our number one problem right now is feedback, because responses are pretty easy, but providing really good, pointed feedback, it's doable, but not quite where I'd like it to be. One thing I'm thinking about right now: I don't know if you've seen the Andrej Karpathy YouTube video he posted recently. Yeah, right. It's a great overview of LLMs if you haven't worked on them before. One of the pieces he talks about is system-two thinking, which is a deeper level of thought where you can spend more time to get a better output. That would be a huge piece for us, because there are a lot of parts where we need instant response time, but if you could have a deeper level of thinking, like what an expert coming into a business would have, maybe a sales professional who's going to go beyond just what GPT can do, it would really let us elevate this. Not just for new hires who aren't experienced at all, maybe college students or people early in their careers; we could take it to the next level of helping to train even advanced people in customer support, helping the QA guys, helping managers and directors make better decisions overall. That would be huge. How are you working on latency? Because when I ask ChatGPT a question, sometimes it takes eight to twelve seconds to answer back, right? It has to go and gather a whole bunch of data and analyze it before it even gives me an answer. So how are you reducing that in the call center? Yeah. So a couple of routes. One route is for things that are very repeated, like an objection that a customer may bring up. It may not seem like something that happens the same way every time, but when you're processing millions of minutes a year, a lot of things are going to come up over and over again.
And so we're going to be using the LLMs to pull out objections, or pull out pieces of a conversation, and put them into an embeddings database so that the embeddings can be queried. That's like a 20-millisecond operation, so for the training piece we get that round-trip latency down. Then there's what I talked about with transcribing, sending it through an LLM, and doing text-to-speech. We use Amazon Polly right now for the text-to-speech. It's insanely fast, built for this sort of purpose, with latency in the tens of milliseconds. And we run Whisper locally on our own GPU so that we get very fast response times there too, also in milliseconds. GPT, like you mentioned, is the biggest hangup. What I've seen from 3.5 is that getting the first token of the response generally takes one to three seconds, unless it's overly bogged down. And while three seconds is not ideal for the feeling that you're in a conversation, it tends toward the one-second point, which is okay. And we don't wait for the entire response to come in. As the responses are streaming in, we'll take a sentence, send that off to Polly to convert to speech, and immediately send that audio down. Then the client side can just buffer those audio segments and play them back to back, so it sounds natural. A lot of engineering, for sure. You should have seen the first weekend, the week before we figured out all those things: we were essentially trying to have a bunch of users on a demo, and pretty much every single thing broke. I think Jake got three hours of sleep that week. But that's, again, the benefit of actually getting to build stuff quick, break it quick, and then move forward from there. How common are the questions and answers? And can you cache any of those beforehand so they could be really fast and not even need to go to the LLM? Exactly. Yeah.
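The streaming trick described here, peeling complete sentences off the LLM's token stream and shipping each one to TTS immediately instead of waiting for the full reply, might look roughly like this. The sentence-boundary rule is deliberately naive (it would mis-split on abbreviations like "Dr."), and the actual Polly call is left out; this is a sketch of the buffering logic only.

```python
import re

def sentences_from_stream(token_stream):
    """Yield complete sentences as soon as they appear in a streaming
    LLM response, so each can go straight to TTS (e.g. Amazon Polly)
    while the rest of the reply is still being generated."""
    buffer = ""
    for token in token_stream:
        buffer += token
        # Split on sentence-ending punctuation followed by whitespace.
        parts = re.split(r"(?<=[.!?])\s+", buffer)
        # Everything except the last part is a complete sentence.
        for sentence in parts[:-1]:
            yield sentence
        buffer = parts[-1]
    if buffer.strip():
        yield buffer.strip()
```

In the real pipeline, each yielded sentence would be handed to the TTS API and the resulting audio segments queued client-side so playback sounds continuous.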
That's our big question right now as we're working on scaling into the millions of minutes. But yeah, a lot of the objections are very similar, to the point where, when we're doing live assistance on a call, we're not actually sending audio to the end user, right? We're just going to show a prompt: here's a way to overcome this objection. And then the salesperson, or the customer agent, can riff on it. But pulling that out, objections generally fall into specific categories, and there are only so many. Like: I need to talk to someone else, maybe a spouse or a parent. Or: I can't afford this right now. Or pieces like, my car broke, if you're calling an insurance company: I got in an accident, or someone hit me. For these pieces, how you immediately respond on a call is going to be very standardized, and it doesn't matter how they phrase it, because an embeddings model will be very good at matching it to the correct piece. And it's one of those cases where, because we have a human in the loop, and we believe strongly in that, we don't have to worry about whether it hallucinates sometimes or gives the wrong answer. If it works 99% of the time, it's helpful, and a human can help to smooth out the other 1% of the time. It's really interesting. It's going to be interesting to track you over time as you build this system out and take over more and more pieces of what the human used to do, right? What a world.
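The caching idea, answering repeat objections from a fast lookup and only falling back to the slow LLM for novel ones, reduces to something like this. The categories and coaching tips here are invented for illustration; in the real system the category would come from an embeddings match over the agent's transcript, and the tip is shown to the human agent rather than spoken to the caller.

```python
# Hypothetical pre-written coaching tips keyed by objection category.
# These would be mined from past calls and reviewed by trainers.
COACHING_CACHE = {
    "need_to_ask_spouse": "Offer to schedule a follow-up with both decision makers.",
    "cant_afford": "Mention the monthly payment option before quoting the full price.",
}

def live_assist(category, llm_fallback):
    """Return a cached tip instantly when we have one (sub-millisecond);
    only call the slow LLM for objection categories we haven't seen.
    A human agent delivers the tip, smoothing over any bad suggestion."""
    tip = COACHING_CACHE.get(category)
    return tip if tip is not None else llm_fallback(category)
```

Because the agent stays in the loop, an occasional stale or wrong cached tip degrades gracefully instead of reaching the customer directly.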


Additional Resources

Resources & how to learn more (28:00)

Where do we learn more about you? Yeah, you can check us out. Our website is Ready, R-E-D-Y dot IO (redy.io). And you can follow Jake on Twitter; he's way more active there than I am, but he's a great follow, with some of the better engineering posts out there. We hope to keep you in the loop on those updates and come back on. Thank you so much for what you do. And it's part of my work to get around to the companies like yours who are building things for the enterprise. Most of the media don't pay attention to enterprise startups because they're aimed at a smaller audience, right? TechCrunch doesn't like writing about them. I appreciate it a lot, because I'm learning a lot by talking to people like you who are trying to build these systems and sell them to real customers. So thank you. And thanks for being out there and just leading the community forward constantly, with thought leadership and all the fun ways you get to apply this technology. It's the best tool that's come around in Jake's or my lifetime to help people, and that's what we want to do.

