
Ultimate AI Agents Masterclass for Founders & Marketers | Rethinking Marketing with AI Agents


The discussion demystifies AI agents, differentiating them from traditional automation by highlighting their problem-solving capabilities and adaptability. It emphasizes that effective AI agents require structured reasoning, memory, and action, built upon layers of research (secondary, historical, and primary data) and clear objectives, rather than a single, overloaded agent.

Full Transcript

https://www.youtube.com/watch?v=GtwVwqPiouE

[00:02] Anthropic CEO saying that more than half of the entry-level jobs will disappear.
[00:07] We've never had a technology that's this disruptive.
[00:10] It's bigger and it's broader and it's moving faster than anything has before.
[00:15] None of us should be scared of AI.
[00:17] One just has to figure out how to work with it.
[00:18] Then you're riding the tide.
[00:20] People today, the entire world is buzzing with chatter about AI agents.
[00:22] But very few people are actually teaching you how to build them and how to use them for large enterprises.
[00:28] And most of the information that you have is coming from reels, where somehow people present magical outputs using simple prompts.
[00:34] But that is actually very very shallow.
[00:36] So today we have Kedar, who has had an amazing run at Amazon India and now heads marketing for Kotak Mahindra Bank, and he's here to teach us what exactly a multi-agent system is, how to build a multi-agent AI system, and how to take it step by step towards execution without any flaws.
[00:55] So if you've been watching reels on AI, that's not the right way to go about it.
[00:58] You have to go deep and actually understand how to get the best output at an organizational level.
[01:05] And that requires some deep, deep insights.
[01:10] And on top of this podcast, we are going to build a playlist which will then help you understand how to use AI agents for sales and how to build an AI agent to run your entire organization with minimal people.
[01:17] So consider this to be the first-principles lecture on how to build a multi-agent system.
[01:21] What is the difference between automation and AI agents?
[01:26] A vending machine is a traditional automation.
[01:30] Fixed input, fixed output.
[01:32] AI agent, it's like a coach.
[01:34] You're letting your team players perform the way they want, but you still want them playing in certain contours.
[01:39] How should I build an effective AI agent all the way to deployment?
[01:43] An AI system is built on three vectors.
[01:45] Let's call it the trifecta.
[01:47] Cost is what is most pertinent.
[01:49] Time is important.
[01:52] And the third is quality.
[01:54] I think the mistake is building.
[01:57] Can you show me a demo of how an AI agent helps you achieve a certain outcome with minimal instructions?
[02:02] Solitaire credit card benefits.
[02:02] So that's it. That's your brief, like a four-word brief.
[02:04] Frequent traveler, unlock a complimentary flight after spending 1.5 lakhs.
[02:10] Apply now.
[02:10] This agent, packed with all the capability, should be able to do the job of this also.
[02:35] Hi Kedar, welcome to the Indian Business Podcast.
[02:37] Kedar, something absolutely crazy is happening in the agentic AI space, and it is both exciting and scary at the same time.
[02:43] You know, the Anthropic CEO saying that more than half of the entry-level jobs will disappear and unemployment could hit 10 to 20%.
[02:51] A 5% unemployment rate is considered to be scary for an economy, and he's saying that unemployment rates will touch 10 to 20%.
[03:02] One of my friends actually built a 100 cr company using AI agents, and that is pretty exciting.
[03:09] And every time a news comes out usually it's a scary news.
[03:12] There's a lot of pessimism in the society today and you know there's a lot of anxiety also including in my own team.
[03:18] But I think every threat can be turned into an opportunity with proper knowledge and I want this podcast to turn this fear into excitement.
[03:30] So today I want to understand a few important things.
[03:35] Firstly I want to understand what exactly are AI agents.
[03:37] This is meant for those people who are right now running a traditional business or people who don't know anything about AI agents because usually people get confused between automation and AI agents.
[03:45] Secondly, I want you to throw light on the first principles of building an AI agent.
[03:48] This is again for two reasons.
[03:51] If somebody wants to build an AI agent, they should know step by step on how to approach it.
[03:54] Secondly, if they outsource it to a company, I want them to not be fooled by so-called agentic AI companies, because I see a lot of that happening, including with myself.
[04:10] I want you to use real world examples, including examples from your own company to help me understand the gravity of the situation.
[04:18] And hopefully I and several other entrepreneurs who are watching this episode will then go back home excited thinking about the possibilities that we can achieve because of agentic AI.
[04:29] That is the agenda for today's episode.
[04:30] Firstly, let's start with basics.
[04:32] Help me understand the difference between automation and agents, because until last week, before I met you, both of them were the same to me.
[04:40] So what is the difference between automation and AI agents?
[04:45] Look at it very simply, right? Have you ever tried getting something out of a vending machine?
[04:51] Yes.
[04:53] All right. So a vending machine has very set things. One for milk, two for biscuit, three for juice, right?
[04:59] And then you choose one of the three and press it. After that you're supposed to scan the QR code and pay. Once it is paid, either a biscuit or milk or juice will come out.
[05:13] Now in that process, if let's say you've pressed one and you've made the payment, but the payment has gone through and the machine hasn't received it, then the vending machine doesn't know what to do.
[05:23] It'll basically not give you the biscuit that you ordered.
[05:30] This is automation, where there are predefined steps, one for milk, two for biscuit, three for X, and there's only a predefined flow.
[05:34] Anything outside of that is not what the system is designed to deliver.
[05:41] That is traditional automation.
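The vending-machine flow described here can be sketched as a fixed input-to-output mapping. This is a hypothetical illustration (the menu and function names are my own, not anything from the podcast):

```python
# Traditional automation as a fixed lookup: predefined inputs,
# predefined outputs, and no ability to handle anything outside the flow.
MENU = {1: "milk", 2: "biscuit", 3: "juice"}

def vend(choice: int, payment_confirmed: bool) -> str:
    if choice not in MENU:
        # Outside the predefined flow: the machine simply cannot respond.
        raise ValueError("not a predefined option")
    if not payment_confirmed:
        # A stuck payment is also outside the flow, so nothing is dispensed.
        raise RuntimeError("payment not confirmed; nothing dispensed")
    return MENU[choice]
```

An agent, by contrast, would notice the stuck payment and work out a recovery step rather than simply halting.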
[05:43] I want you to think of a smart employee.
[05:46] Someone who would have joined your team and is a great performer.
[05:50] All you tell your team member is, hey, the business needs to grow to X from where it is today.
[05:55] And for it to grow, what would be our plan?
[05:57] How would you go about it?
[05:59] A great employee would go back and do a deep dive on understanding why the numbers are not moving in a certain direction.
[06:05] They would go back, put a plan together, execute it for a week, a month, understand if it's not working, rework it, figure out why it is not working and junk the plan if it is not working, improvise if it's working, and then come to you, perhaps once the number is achieved, or do a mid check with you saying that, okay, this is moving in the direction that we probably want it to be.
[06:33] This smart employee is an AI agent.
[06:36] A vending machine is traditional automation.
[06:39] A smart employee is an AI agent.
[06:43] A problem solver.
[06:45] Someone who would pick a goal that is being given to it and go relentlessly
[06:48] after the goal and not stop till it achieves the goal.
[06:52] That's an agent.
[06:54] Got it.
[06:55] If you really want to know how an AI agent really works, I would just break it into five steps.
[06:57] Something which first perceives. It'll read the brief, the data, the context.
[07:02] Having read that, it'll reason out, decide the best plan of action.
[07:09] It'll then go and act, execute the tasks, multiple in parallel, and then evaluate, check: did they meet the goal? And then from there, this is the beauty, it'll adapt.
[07:21] So that's how an agent relentlessly operates, and it will stop only once the goal is achieved.
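The five steps above can be sketched as a loop. This is a generic toy sketch of the perceive-reason-act-evaluate-adapt cycle (all function names here are hypothetical), not the actual system demoed later in the episode:

```python
# Perceive -> reason -> act -> evaluate -> adapt, looping until the goal is met.
def run_agent(brief, plan, execute, goal_met, adapt, max_rounds=10):
    context = {"brief": brief, "history": []}       # perceive: read the brief/context
    for _ in range(max_rounds):                     # real systems also need a budget cap
        actions = plan(context)                     # reason: decide the plan of action
        results = [execute(a) for a in actions]     # act: execute the tasks
        context["history"].append(results)
        if goal_met(results):                       # evaluate: did we meet the goal?
            return results
        context = adapt(context, results)           # adapt: rework and go again
    raise RuntimeError("goal not reached within the round budget")

# Toy goal: keep revising the brief until the output carries two exclamation marks.
out = run_agent(
    brief="solitaire credit card benefits",
    plan=lambda ctx: [ctx["brief"]],
    execute=lambda a: a,
    goal_met=lambda rs: "!!" in rs[0],
    adapt=lambda ctx, rs: {**ctx, "brief": ctx["brief"] + "!"},
)
# out == ["solitaire credit card benefits!!"] after two adapt rounds
```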
[07:31] For example, if I have an AI agent to make motion graphics assets for this podcast, traditional automation will give me outputs that I may consider usable.
[07:45] But when it comes to an AI agent, the same process will be followed, but it will automatically understand the context of this podcast, design assets for it, and then analyze those assets to check if they are good enough.
[07:57] If they're not good enough, it will get back into the loop and do the task all over again. Is that correct?
[08:03] That's right. Add to it: Ganesh has a certain bar for a motion graphic.
[08:06] Okay.
[08:08] It'll go back and check whether it meets that bar. And if it gets an answer that it doesn't meet it, of course, within the constraints that you would have put into your prompt,
[08:15] it'll go back and check, and if it doesn't, it'll go back. It'll keep at it.
[08:19] So whenever an agent is working at the back end, if you have put in the right checks and the right bar for it to operate, it'll go back and keep checking. You wouldn't even know how many times it would have checked unless you get into the actual working of it. But yeah, it'll be relentless in what you ask it to do for you, just like a smart employee.
[08:35] Kedar, this seems absolutely magical, but I want to understand this using a real-world example. Now, you worked at Amazon, and right now you're working at Kotak. Now, as a large company, you have deployed AI agents. Can you show me a demo of how an AI agent helps you achieve a certain outcome with minimal instructions?
[08:55] >> Let's say there's a static creative that you want, right? Like a static digital creative that you want, a banner or an image or whatever it is, for all your digital distribution.
[08:59] Let's say Solitaire.
[09:17] Solitaire credit card benefits.
[09:21] That's as vague a brief as one can send, right?
[09:22] That's it. This is all.
[09:25] But who are you sending to?
[09:27] What benefits are you going to highlight?
[09:29] And who's it being distributed to?
[09:30] That's a great employee working at the back end.
[09:33] The agent that we spoke about.
[09:34] So that's it.
[09:36] That's your brief like a four-word brief.
[09:38] Okay.
[09:40] All that you do is just type it out as vague as this.
[09:42] There's a dedicated email address which is the agent's email address and you just send.
[09:45] Now, in less than 5 minutes, you will probably see a complete static campaign generated for Solitaire credit cards. And one can go about it like this; this is not just it, right?
[10:01] Let me show you another one while it generates the static campaign.
[10:09] Let's try and see what it does for a video.
[10:14] Now, video, as we know, is a very, very long process, right?
[10:17] I'm preaching to the choir, but it takes so much, you know, from understanding what you want to shoot, to finally doing a production, and then editing it multiple times for it to be perfect, and then finally putting it out.
[10:28] We used to take easily about a week or more, even if it's a small, let's just say 7-8 second pre-roll, XYZ. Right now, here is the email address. Let's just say "video", and I need another credit card that we have, Air Plus, free flight offer, bottom funnel creative. That's it. So you press the send button. It's a one-line brief again.
[10:58] That's it. Now, what's happening in the background is very interesting.
[11:01] So, we've got a creative agent.
[11:09] The creative agent has been defined to deliver creatives to KPIs.
[11:12] So, this is the process where you send in. You've just given the agent a task saying, create a campaign for me for Air Plus for the free flight offer.
[11:20] Create a campaign for free flight offer.
[11:22] Create a campaign for me, Solitaire credit card, explaining the benefits of it.
[11:27] That's it. And you have the campaign in less than 5 minutes.
[11:29] We'll perhaps show the outputs for us to also see what comes through.
[11:34] All right. So this is how the email would come.
[11:36] It's an HTML email. You know how much of a challenge it used to be to get an HTML email done.
[11:45] So now the teams get it in less than five minutes.
[11:48] Let me just see the creative.
[11:50] Okay. So what it has done, look: this is the Solitaire credit card, explain the benefits, and it has picked up "turn travel into premium experiences".
[12:01] It's now talking about the benefits on travel that it has picked up: "enjoy zero forex markup and limitless global access effortlessly."
[12:14] But how did it specifically pick zero forex markup? There could be tons of features in a credit card. Why did it specifically choose to highlight zero forex markup?
[12:24] So, two or three things. One is that what it also evaluates is what is the driver for adoption and what is the trigger for trial for a particular segment. On that basis, it has identified that for this segment, around this time, perhaps travel benefits are the most important ones. Within travel benefits, for an affluent consumer: forex charges as you go outside India, and perhaps even unlimited domestic and international lounge access, and earning accelerated air miles on travel.
[13:01] So if you look at the additional benefits that it has picked up, and what it has picked up as a lead benefit, all are stemming from the segment it is talking to.
[13:10] But you didn't specify any consumer segment right?
[13:11] Solitaire is for the affluent, and that is what the agent has been given as context, again.
[13:21] So it has mapped the brief of Solitaire at the back end and said, for my particular product this is my TG, and then for that TG it optimizes and brings through the creative that will work for that TG.
[13:32] Which is why, if you look at it, it's a very premium world that it has created.
[13:36] Yeah.
[13:36] It's also a lounge, which looks very premium.
[13:39] The characters are having... well, it probably picked up something topical, I guess.
[13:45] But that's the whole idea. When you look at these, they come to you with no effort at all.
[13:47] But Kedar, tell me something, help me understand this better.
[13:53] Okay, you just gave it a four-word brief and it gave you this output.
[13:57] Now, I'm assuming that the backend processing happened like this.
[13:59] Firstly the agent knows the customer very well.
[14:02] It knows the different segments of customers that exist for Kotak Solitaire, and this input was given by you.
[14:17] Now, amongst these segments, let's say we have HNIs, we have travel enthusiasts.
[14:24] So features that they might find absolutely relevant: number one, HNIs, remittance; or travel enthusiasts, obviously air miles or lounge access.
[14:40] So, budget travelers. When you think of cards and travel, there are budget travelers, there are luxury travelers, there are experience seekers.
[14:54] The know-your-customer step understood that there are three major segments: HNIs, travel enthusiasts and experience seekers.
[15:03] Now how did it decide to pick only travel as a lucrative proposition and not remittance or adventure?
[15:13] I think firstly, when you think of a Kotak credit card and the card that we've picked, right, Solitaire.
[15:20] So if you look at Solitaire, in the affluent segment, perhaps the biggest credit card benefits are (a) the lounge access for the family and (b) the overall forex charges that they have to incur.
[15:37] But who gave it that input?
[15:39] Now, this is research which the agent runs at the back.
[15:43] There is a lot of secondary research that happens around card usage, knowing which benefit fits best for which segment.
[15:51] There is also a lot of historical data that we have about consumers that helps us understand a consumer better.
[15:58] And thirdly, there is a lot of primary research that we had conducted which helps understand the behavior of credit card users in India.
[16:08] Guys, just to give you some context, when Kedar typed "Solitaire credit card benefits", a fully designed static came out on the other side, right?
[16:16] And everyone in the room was impressed by the output. But honestly, the output is not the real story.
[16:23] So every time an AI agent is built for a company like Kotak, it is trained on three layers of research, and each layer answers a very specific question.
[16:30] So please keep this in mind because you might try this agent out and it might not give you the same output for this exact reason.
[16:37] These three types of research are secondary research, primary research and historical data.
[16:42] Secondary research is where the agent looks outward at information that already exists in the world.
[16:48] For example, for Kotak's campaign, secondary research would be the agent scanning public sources like RBI reports, industry studies on credit card usage, competitor websites, and so on.
[17:00] This research is done to understand the overall market trends and consumer behavior.
[17:04] So it's basically answering the question: what does the outside world already know about the people who are likely to upgrade to a card like Kotak Solitaire?
[17:13] If this is clear, let's understand historical data.
[17:14] Historical data is where the agent looks inward at Kotak's own past behavior and results.
[17:24] Historical data would be things like past Solitaire campaigns, the response rates of those campaigns, spend patterns of existing Solitaire users, and which segments have upgraded and which segments have dropped off.
[17:35] Eventually, it will understand what kind of customers turned out to be risky or inactive and which customers are more likely to respond.
[17:42] In simple words, it is based on Kotak's own history of what kind of customers have actually taken and used this card before.
[17:50] So in simple words, if you have to solve for historical data, you need to have historical data recorded with you so that you can provide it to the agent by which the agent will be able to make better decisions.
[18:02] If this is clear, let's come to primary research.
[18:04] Primary research is fresh, first-hand data collected specifically for this campaign.
[18:08] For Kotak, primary research could be running a survey of 10,000 existing card holders, or a quick interview with a small set of customers, or A/B testing of two different Solitaire offers and seeing which one people click more.
[18:24] This answers the question: what do our customers tell us right now about what they want?
[18:30] And that is how the campaign will get more and more optimized over time, because it will have historical data, primary data from primary research, and secondary data from secondary research.
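The three layers can be pictured as three inputs merged into one context the agent reasons over. A minimal sketch with made-up example data, not Kotak's actual pipeline:

```python
# Each research layer answers its own question; together they ground a short brief.
def build_agent_context(secondary: dict, historical: dict, primary: dict) -> dict:
    return {
        "market_view": secondary,   # what the outside world already knows (reports, studies)
        "our_history": historical,  # what our own past campaigns and customers showed
        "live_signal": primary,     # what customers tell us right now (surveys, A/B tests)
    }

context = build_agent_context(
    secondary={"top_benefit_for_affluent": "zero forex markup"},
    historical={"best_past_creative": "lounge access"},
    primary={"ab_test_winner": "free flight offer"},
)
```

When the agent later expands a vague brief, it draws the missing detail (segment, lead benefit) from a context like this rather than from the brief itself.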
[18:42] If this is clear, let's move on with the podcast.
[18:43] This is the video which we just got, which is basically from the one-liner brief that we had: Air Plus and an offer.
[18:53] Let's see.
[18:54] Frequent traveler unlock a complimentary flight after spending 1.5 lakhs.
[18:59] Apply now.
[19:03] So I'm assuming this is a performance marketing story ad.
[19:05] The brief was bottom of the funnel.
[19:07] So it has optimized for an action which is why it gets straight into the benefit.
[19:12] Uh it doesn't talk about a backstory or anything.
[19:14] It's it's literally straight out saying that okay are you frequent traveler then a free flight is for you.
[19:19] Just click now and and get it.
[19:19] So that's
[19:21] Just click now and and get it.
[19:21] So that's how it is optimized is it has reasoned how it is optimized is it has reasoned out that this is not going to be a story.
[19:25] it has reasoned out that this is has to be a 8secer uh and that's what uh it kind of optimized it for.
[19:33] Has it picked the character on the basis of the data that you fed?
[19:36] It's based on the target group that Air Plus has been designed for.
[19:41] Again, this is again stemming from the data that you provide.
[19:43] Right. That's right.
[19:44] So it's a brief given to the agent.
[19:45] So you're telling your agent, okay, when you think of Air Plus, Air Plus has this TG segment.
[19:49] So you go and create for this TG segment.
[19:54] So hence it is picked.
[19:56] It has made him look real.
[19:58] So which is why you see the character is very real.
[20:00] His beard... he's probably a frequent traveler, which is why he's chilled out in the way he's been shown.
[20:04] So that's how the agent has reasoned it out, and that's where the casualness comes from. You see, the background picked is a hotel lobby, and it's about traveling, and that's where you get a free flight ticket.
[20:07] So yes, a lot of context will have to be given to the agent. And that context is almost like a configuration of your agent. Once you do that, then your agent will start functioning the way you want it to.
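The "context as configuration" idea can be sketched as a standing config that a short brief gets merged into at run time. The fields here are hypothetical, not the actual Kotak setup:

```python
# Standing configuration: product, target group and brand constraints are fixed
# up front, which is why a four-word brief is enough at run time.
AGENT_CONFIG = {
    "product": "Solitaire credit card",
    "target_group": "affluent frequent travellers",
    "tone": "premium, minimal",
    "output_format": "HTML email",
    "channels": ["static banner", "8-second video"],
}

def expand_brief(brief: str, config: dict) -> dict:
    # The short brief is merged with the configuration, which is where the
    # "missing" detail (segment, tone, format) in the demo comes from.
    return {"brief": brief, **config}

task = expand_brief("solitaire credit card benefits", AGENT_CONFIG)
```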
[20:33] functioning the way you want it to. Kar before we dive into the configuration
[20:34] before we dive into the configuration part so that people can understand how
[20:36] part so that people can understand how to build this agent. I want to
[20:37] to build this agent. I want to understand how much money do you
[20:38] understand how much money do you actually save by building these AI
[20:40] actually save by building these AI agents? And I'm asking this for two
[20:41] agents? And I'm asking this for two reasons. If there's an executive
[20:43] reasons. If there's an executive watching this operating at a large
[20:44] watching this operating at a large company, they should be able to go and
[20:45] company, they should be able to go and propose this to upper management. And
[20:49] propose this to upper management. And if a small entrepreneur is watching this
[20:51] if a small entrepreneur is watching this episode, they should be able to
[20:54] episode, they should be able to implement it in their company by
[20:55] implement it in their company by understanding the cost. So how much
[20:58] understanding the cost. So how much money do you save through this AI agent
[21:00] money do you save through this AI agent and how much does it cost to build?
[21:02] >> I would look at the impact of having agentic AI systems being built on three vectors. Let's call it the trifecta. Let's say cost is what is most pertinent. Time is important. And the third is quality. These three really define which system you really want to go to and where you want to put your money.
[21:30] In terms of time, we've just seen it has taken less than 5 minutes to get a video and an entire static campaign onto your email. It's safe to assume that any task where there was a handoff happening between the A team and the B team is completely wiped out. So what used to take months for campaigns can essentially take you just a few days, after doing the right quality checks and so on. So that's the way in which the entire time vector has been collapsed. It's easy to say that the processing time of the system is equal to the time of getting an output done for a campaign.
[22:12] >> But tell me something. As far as images are concerned, I'm pretty much convinced that this image will do as well as an image made by a designer. But when it comes to video, it still looks a tad bit AI. My question is: do those creatives generate the same ROI as natural creatives?
[22:32] the same ROI as natural creatives? Let's let's break uh marketing in three parts
[22:34] let's break uh marketing in three parts right like that is of course top of the
[22:36] right like that is of course top of the funnel which is creators which are for
[22:38] funnel which is creators which are for brand love and things that you're
[22:40] brand love and things that you're creating such that consumers connect
[22:42] creating such that consumers connect with it truly for example so Hla
[22:45] with it truly for example so Hla campaign that we had is a great example
[22:47] campaign that we had is a great example of brand love it was created purely for
[22:49] of brand love it was created purely for consumers to connect with the bank
[22:51] consumers to connect with the bank knowing that this is a bank for some for
[22:52] knowing that this is a bank for some for someone like me who has dreams and hence
[22:55] someone like me who has dreams and hence I want to I want to go to a bank which
[22:57] I want to I want to go to a bank which understands my dreams so the transition
[23:00] understands my dreams so the transition from eligibility
[23:01] from eligibility to possibilities is what consumers moved
[23:04] to possibilities is what consumers moved and hence they wanted a bank partner who
[23:05] and hence they wanted a bank partner who would say okay I have a I have a big
[23:07] would say okay I have a I have a big audacious dream I want a partner and a
[23:10] audacious dream I want a partner and a bank can be a great partner that's where
[23:11] bank can be a great partner that's where Kotch chimed in and said that yes
[23:15] Kotch chimed in and said that yes if you have a big bold dream we will
[23:18] if you have a big bold dream we will back you so that's example of top of the
[23:20] back you so that's example of top of the funnel where you're not expecting
[23:22] funnel where you're not expecting someone to immediately go and say give
[23:24] someone to immediately go and say give me a credit card you're just wanting
[23:27] me a credit card you're just wanting them to say oh this is a brand that
[23:29] them to say oh this is a brand that speaks like me. It's a brand that I that
[23:32] speaks like me. It's a brand that I that I would love to connect. It's a brand
[23:33] I would love to connect. It's a brand that I want to interact. That's what a
[23:35] that I want to interact. That's what a brand love would do. Now, in case of
[23:38] brand love would do. Now, in case of brand love communication, you're right.
[23:40] brand love communication, you're right. I don't think videos at large on AI,
[23:45] I don't think videos at large on AI, they still look AI. So, what happens is
[23:49] they still look AI. So, what happens is that you feel that this is little fake.
[23:51] that you feel that this is little fake. And the moment you feel this is fake
[23:54] And the moment you feel this is fake then the whole emotion aspect of what a
[23:57] then the whole emotion aspect of what a brand love communication should do goes
[23:59] brand love communication should do goes for a toss. So we don't deploy
[24:03] for a toss. So we don't deploy uh AI videos on brand love just yet. Uh
[24:09] uh AI videos on brand love just yet. Uh and and
>> "Just yet" being the most important phrase of all.
[24:12] >> Yeah. The way things are evolving, you know, no one knows. So I hope it does. I hope it becomes as real, so that it becomes easier for all of us to use it for that section of marketing. But as you move to the mid-funnel and the bottom funnel: mid-funnel communication is essentially about driving consideration. You know about the brand; you know Kotak will help me in achieving my dreams, but how? So for the middle of the funnel one should think: I know about the brand, now is this going to help me in achieving my dreams, and how? Now, the "how" will stem from helping consumers understand what the products are. These are slightly less emotional, if I may say, but a lot more informational. In cases of informational communication, AI can do a great job, because it has to come and deliver a message saying that credit cards will offer you free flight tickets, or credit cards will give you lounge access and probably discounts on shopping. Now, that's all it needs to come and say. That set of communication is best done by AI.

[25:27] Now you go a layer below. You've seen an ad say this brand will help me fulfill my dreams. You've looked at a creator say: okay, yeah, they have great home loans, great personal loans, and probably great business loans which are going to help me achieve my business dreams as well. Now comes the question of how you really make it easy for the consumer to act. That is where the bottom-funnel creative comes in. The bottom-funnel creative's objective is just to help the consumer convert, having had the intent to say: I want to bank with this bank, and now I want to avail a loan. This is where it comes in and says: home loans at x% interest rate, click now and get it. Even this set of communication is perhaps best done by AI today, because it's straight out. This is what we saw: AI plus bottom-of-the-funnel creative is very simple, straight out.
[26:28] >> Okay. Now I am sold on the idea, and I'm pretty sure... How do I build an AI agent? Because from what you've just told me, it seems to be a very complex system with a lot of data input, but I don't have a framework to understand what kind of data input I should give. And while other creators also speak about agents, they also speak about giving the agent access to multiple things within the company, so that the agent can make better decisions. Because, you know, after we had our first conversation, I actually tried making an agent through Perplexity, and bro, it was a nightmare. I spent eight hours making that agent, and it was just so complex, because on the outside it seems pretty simple: just tell Perplexity to achieve a certain outcome, and go ahead and do it. And for me it was as simple as: study the transcript of my podcast, tell me where I should show motion graphics, and help me storyboard it. It was a very, very simple task, but even then it was very difficult. And that's when I realized that making an agent is not so easy, which is again one of the reasons why we're having this conversation, because had I not tried this, I would have been like: okay, end of the podcast, great, let me just go and build agents. But I know it's very complicated. So walk me through the first principles of how I should build an effective AI agent, all the way to deployment.
[27:47] >> Agents are the ones who optimize for goals. They are relentless employees who just focus on goals and achieve them for you. But for them to achieve that goal, I would want you to think of three parameters: an agent can act, an agent should do reasoning, and an agent needs to have memory. These are the three aspects you would want in an agent. So: act, reason, and memory.
>> Act, reason, and memory.
>> That's right.
>> Is that sequence important?
[28:15] >> No, it's not a sequence. But think of these as components which are important for making an agent. For example, you want to create a creative agent. Now, a creative agent needs to have the reasoning of a creative director in an agency. Or when you're creating an agent which is basically doing the job of direction for you, then you would want the entire reasoning and ability to act like how a director would be on a set, and there you have to really go deep to be able to tell the agent to act like one. Now, if you mix that with a director slash a finance person or a producer, or let's say you mix that with a scientist, that's where the agent starts losing context, or the entire memory starts becoming overloaded. That's where you see these terms like hallucination, or let's say it is not able to reason the way you'd want; it'll start losing context. And hence the first question you should ask yourself is: is this going to be doing a specialist role? And if it is, am I giving it enough reasoning context, am I giving it enough action, and hence the goal it is supposed to be designing the way it'll operate for? Lastly, what kind of memory can I configure for it? There are broadly three kinds of memories one can think of: semantic memory, episodic memory, and working memory. You would have seen, while trying some prompts, that you entered something, went back, started a new chat, and then realized that it's not picking up from where you left off,
>> and it's like explaining everything that you had done.
>> That's perhaps where you have not given the agent episodic memory.
[30:05] >> I want to go back and ask you a question. See, if the agent is basically a superhuman, if I give all the access possible to that agent, then whether that's my balance sheet or my edits, it shouldn't matter, right? It has all the memory: the memory to be a creative director, the memory to be a CFO, the memory to be my chartered accountant. Now, if it has all the memory, and I give it all the tasks, why will it not be able to do it?
[30:35] >> At the end of the day, agents also have certain constraints within which you want them to operate. Otherwise, with the way they wire this information together, they will find an answer; they will just be desperate to give you an answer. And there, the wiring of that information may not be as accurate as one would want, when you are expecting an agent to operate like just a specialist.
>> Why? It's an agent, right? It has the same memory. For example, let's say there are two agents: agent A does marketing, and agent B does marketing plus accounting. Why will agent B not operate as well as agent A?
[31:14] >> When you're optimizing it for a particular goal. Now, if it is a conflicting goal, the agent will pick one of the two to go and do. And if that fits the kind of output it is expected to deliver, it'll pick it up and do it.
>> But why? It is a super agent, no? Unlimited capability.
[31:39] >> It's just designed to give you outputs. So for it, it is all about: has it given the output you asked it to give? And that is the difference between agents and humans: the agent lacks common sense.
>> But why? How does it lack common sense?
[31:57] >> Yeah. Look, AI agents, if you go deeper, are basically recognizing a pattern, and based on that pattern they will try to give you something which matches closest to the patterns they would have known, or to the kind of data you would have fed in for them to come to a point. That is exactly what this AI agent will also do for you. It'll try to recognize certain patterns and bring in something that is closest to that pattern. Now there are conflicting patterns: operating like an accountant versus operating like a creative director are two different things. For it to then know which is the more obvious or simple one, it may not gauge that, because it still operates within certain contours that you would have defined for the agent to work.
[32:45] >> But this is a human constraint. For example, let's say Kedar is the CMO of a company. I don't expect Kedar to know the finances of the company, because Kedar is a CMO. But that is a human constraint. If Kedar were to be turned into an agent, an actual agent with AI capabilities, Kedar has become a super agent. Now this agent, packed with all the capability, should be able to do the job of a CFO and of the CMO. It's just that the complexity with which one could put out the entire prompt for it...
[33:21] >> ...is where the limitation happens. Because, look, an agent will always give you an output, like I said.
>> Okay.
[33:28] >> Now, the question is how predictable that output is, and the predictability of the output is defined by the constraints put on the agent. The moment you increase the scope of the agent, the playing ground for the agent becomes very, very huge, and hence the predictability of the output is perhaps lesser, as opposed to the moment you constrain the agent to work like just a creative director. So when you are packing multiple agents together, the ability to debug and know how a multi-agent system works often comes from knowing who is taking which decision, and how. And if you are packing two decisions in one place, or two different kinds of deep functional expertise in one, it could get muddled up for you when you're building a complex multi-agent system. If you operate with only one, it's still okay. But for bigger enterprise agents, or when you're trying to bring in a lot more competitive advantage, that advantage is going to come when you put a lot more agents together. So it's like a team of five working together; and if the five have ten different kinds of capabilities, then for you to decode and say where the output is coming from, and what I need to fix in my system, I think that's where it gets complex. It's less a limitation of the agent and more about the predictability of the output, and about humans finally going back to debug and fix which part of the agent needs a fix. I think that's the crux of it: it again stems from the human capability to control what the agents will end up doing. So define what specialist role you would want your agent to do. That is an important choice to make whenever you think of building it: do you want multiple agents, or do you want one agent? I think that's a very, very important decision to make.
[35:31] >> Okay. So coming back to act, memory, and reasoning: what should be my first step? I've understood that an agent is supposed to act, access memory, and reason. So now, what should be my first step?
[35:43] >> So when you say act, you have to basically tell the agent, define for the agent, who you want it to operate like. Let's take an example from what we've seen: the creatives that were generated. In that entire system there is one agent who's acting like a creative director. Those are the constraints, or the contours, within which that agent is expected to operate. So you've asked the agent to operate like a creative director: a seasoned creative director who understands consumers, who has probably done a lot of campaigns in the BFSI sector, and who has been able to drive output or bring in the most relevant content. That's where what you want the agent to act like comes in. So defining what you want the agent to act like is important. It's also important to define what the KPIs are for it. So when you say a KPI: you want that agent to give you a creative which will achieve a high CTR.
>> Is this the structure of a prompt?
[36:54] >> At the end of the day, agents are just prompts, right? It's just a compilation of a well-defined prompt, which then, as necessary, will have API connections to be made, and may have memory which is given from the outside world; but the rest is all within the prompt that you expect it to work from. It's just about how sharply you're able to define the action it should take and the kind of reasoning you want it to do. As long as that is clear, you'll perhaps get a much more probable output that will work for the goal that you've put in.
[37:37] goal that you've put in. People after the board, I tried to go deep into what
[37:38] the board, I tried to go deep into what Kdar said and I found this research
[37:40] Kdar said and I found this research paper called the lost in the middle. In
[37:42] paper called the lost in the middle. In simple words, when people build agentic
[37:44] simple words, when people build agentic systems, the first instinct is to create
[37:47] systems, the first instinct is to create one super agent and dump a huge list of
[37:50] one super agent and dump a huge list of instructions on it. But that's exactly
[37:52] instructions on it. But that's exactly where things start to break. There is an
[37:53] where things start to break. There is an issue called the lost in the middle
[37:55] issue called the lost in the middle problem. So when you overload an AI with
[37:58] problem. So when you overload an AI with too much context, it tends to miss
[38:00] too much context, it tends to miss important details that sit in the middle
[38:02] important details that sit in the middle of that input. Let's say a model gets a
[38:05] of that input. Let's say a model gets a long context with four documents. One
[38:07] long context with four documents. One about sports, one about finance, one
[38:09] about sports, one about finance, one about travel that actually contains the
[38:11] about travel that actually contains the answer to a question and the last one
[38:13] answer to a question and the last one about health and the question asked to
[38:15] about health and the question asked to the model is where did this person go on
[38:17] the model is where did this person go on a vacation and only the travel document
[38:19] a vacation and only the travel document has that info. Researchers then move
[38:22] has that info. Researchers then move this answer document to the start to the
[38:24] this answer document to the start to the middle and to the end and they measure
[38:26] middle and to the end and they measure the accuracy. Now when it's at the
[38:29] the accuracy. Now when it's at the beginning or at the end the model does
[38:32] beginning or at the end the model does very well. But the researchers found
[38:34] very well. But the researchers found that when this document is in the middle
[38:36] that when this document is in the middle the performance drops. That's the lost
[38:38] the performance drops. That's the lost in the middle effect in action. This is
[38:41] in the middle effect in action. This is why multi-agent systems are not just a
[38:43] why multi-agent systems are not just a fancy architecture choice. They are a
[38:45] fancy architecture choice. They are a necessity. So instead of one generalist
[38:48] necessity. So instead of one generalist agent trying to do everything, we need
[38:50] agent trying to do everything, we need to break the problem into smaller
[38:52] to break the problem into smaller well-defined tasks. Each sub agent has a
[38:55] well-defined tasks. Each sub agent has a specific role with a short clear prompt
[38:58] specific role with a short clear prompt which dramatically improves reliability.
[39:00] which dramatically improves reliability. It's like a team. You wouldn't ask one
[39:02] It's like a team. You wouldn't ask one member to research, write, edit, fact
[39:04] member to research, write, edit, fact check, and design a report all at once
[39:06] check, and design a report all at once with a giant instruction sheet. You
[39:08] with a giant instruction sheet. You would assign specialists for each part.
[39:10] would assign specialists for each part. And AI systems work very well if
[39:12] And AI systems work very well if operated in a similar manner. So K now
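The position experiment described above can be sketched as a small harness. This is a minimal illustration only: `query_model` is a placeholder for a real LLM call, and the documents are made-up examples.

```python
# Minimal sketch of the "lost in the middle" position experiment.
# `query_model` is a placeholder; a real run would call an LLM API and
# score whether the answer ("Bali") comes back at each position.

def build_context(distractors, answer_doc, position):
    """Insert the answer-bearing document at a given index among distractors."""
    docs = list(distractors)
    docs.insert(position, answer_doc)
    return "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))

def query_model(context, question):
    # Placeholder stand-in for an LLM call.
    return "<model answer>"

distractors = [
    "Sports: the local team won 3-1 on Saturday.",
    "Finance: quarterly revenue grew 12% year over year.",
    "Health: adults need roughly 7 to 9 hours of sleep.",
]
answer_doc = "Travel: Neha spent her vacation in Bali last month."
question = "Where did this person go on vacation?"

# Move the answer document to the start, the middle, and the end,
# and (in a real run) measure accuracy at each position.
for position in (0, len(distractors) // 2, len(distractors)):
    context = build_context(distractors, answer_doc, position)
    _ = query_model(context, question)
```

In the paper's setup, accuracy is highest when the answer document sits at the very start or the very end, and dips when it sits in the middle.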
[39:14] operated in a similar manner. So K now that I've understood
[39:16] that I've understood what an agent is supposed to do. An
[39:18] what an agent is supposed to do. An agent is supposed to act, access memory
[39:20] agent is supposed to act, access memory and reason, what is step number one?
[39:23] and reason, what is step number one? >> Just define um what is it that you what
[39:27] >> Just define um what is it that you what are the goals that you want the agent to
[39:28] are the goals that you want the agent to really achieve it for you?
[39:30] really achieve it for you? >> Let's say my goal is to get 10,000
[39:32] >> Let's say my goal is to get 10,000 credit card customers. So uh look now uh
[39:36] credit card customers. So uh look now uh if if that is the goal the the first
[39:39] if if that is the goal the the first thing uh that the agent needs to figure
[39:41] thing uh that the agent needs to figure out is for the for 10,000 such consumers
[39:46] out is for the for 10,000 such consumers what is the message that I would want to
[39:48] what is the message that I would want to deliver. So the agent needs to reason
[39:51] deliver. So the agent needs to reason out and identify looking at those 10,000
[39:54] out and identify looking at those 10,000 customers dipping into the the database
[39:57] customers dipping into the the database or the access of data that you've given
[39:59] or the access of data that you've given for for the creative agent to know what
[40:01] for for the creative agent to know what it needs to to look in as information
[40:04] it needs to to look in as information for your consumer. Having identified
[40:06] for your consumer. Having identified that, it will then give a clear brief
[40:10] that, it will then give a clear brief within the within the agent to say now
[40:13] within the within the agent to say now convert this into a script and a video
[40:16] convert this into a script and a video that will help drive persuasion in these
[40:19] that will help drive persuasion in these 10,000 consumers. U we did this for our
[40:23] 10,000 consumers. U we did this for our credit card and when you think of uh uh
[40:27] credit card and when you think of uh uh a usual way uh we would have had one
[40:29] a usual way uh we would have had one output uh one film which would have just
[40:32] output uh one film which would have just uh probably shared out to everyone. Now,
[40:34] uh probably shared out to everyone. Now, interestingly, the creative agent would
[40:37] interestingly, the creative agent would figure out that for these 10,000
[40:39] figure out that for these 10,000 consumers,
[40:41] consumers, it created one-on-one scripts. It
[40:43] it created one-on-one scripts. It created a one-on-one uh videos which are
[40:47] created a one-on-one uh videos which are very very different than what they are
[40:50] very very different than what they are uh uh for you or for me. So, it took
[40:52] uh uh for you or for me. So, it took those 10,000 consumers, looked at what
[40:54] those 10,000 consumers, looked at what segmented it and said that okay, hey,
[40:56] segmented it and said that okay, hey, these 10,000 probably can be in the
[40:58] these 10,000 probably can be in the bunch of five. Each segments are
[41:00] bunch of five. Each segments are different. And someone some some of them
[41:02] different. And someone some some of them are are the ones who would probably look
[41:04] are are the ones who would probably look at budget. Some of them would probably
[41:06] at budget. Some of them would probably want luxury. The others may want to to
[41:09] want luxury. The others may want to to look at experiential travel so on so
[41:11] look at experiential travel so on so forth. And then it converted that into a
[41:12] forth. And then it converted that into a video. Hello Neha. Adventure is about
[41:15] video. Hello Neha. Adventure is about making memories not spending a fortune.
[41:18] making memories not spending a fortune. >> With your Kotak Air Plus credit card,
[41:20] >> With your Kotak Air Plus credit card, earn five air miles per 100 rupees on
[41:22] earn five air miles per 100 rupees on travel, just 2% forex, plus
[41:24] travel, just 2% forex, plus complimentary lounge access worldwide.
[41:26] complimentary lounge access worldwide. apply now and make every journey
[41:29] apply now and make every journey unforgettable.
[41:36] So that is that is probably the
[41:38] So that is that is probably the reasoning piece was to understanding the
[41:40] reasoning piece was to understanding the consumer and the benefits and hence
[41:42] consumer and the benefits and hence which lever to pick around and what to
[41:44] which lever to pick around and what to do. The act part was where you've got
[41:47] do. The act part was where you've got the agent to really make content for you
[41:50] the agent to really make content for you uh and and enacted by creating videos
[41:53] uh and and enacted by creating videos for you. And then finally memory piece
[41:55] for you. And then finally memory piece is where it said okay now if this is for
[42:00] is where it said okay now if this is for air plus then the memory is where it
[42:03] air plus then the memory is where it knew that okay what is the world for air
[42:05] knew that okay what is the world for air plus it should look of a certain color
[42:07] plus it should look of a certain color it should probably talk into a certain
[42:09] it should probably talk into a certain segment it should also have the kind of
[42:12] segment it should also have the kind of benefits that air plus has as a credit
[42:14] benefits that air plus has as a credit card so that's the memory piece you have
[42:17] card so that's the memory piece you have the memory you have the reasoning on on
[42:19] the memory you have the reasoning on on segmenting understanding and the act was
[42:22] segmenting understanding and the act was where it had to just go and create a
[42:23] where it had to just go and create a video for us.
[42:24] >> So Kedar, my question is: what if I ask AI to act like a creative guy?
>> Now, this creative guy could be an art director also; it could be a copy guy also.
>> Would it be too vague?
>> The difference is this. When you leave it open-ended, like "a creative person," it can go anywhere. It can find a reference of any creative person, any kind of creativity, and it'll still give you an output, because it's designed to give an output. Versus trying to tell it "act like only person X," which is also very restrictive for an agent. So now you have to define and understand: how do I go from "act like an actor of a specific kind or pattern" to letting it decide what kind of actor it needs to be, while still being acceptable to what the brand has to offer, to what the brand world is? I think that is a very different skill. It's like a coach: you let your team players perform the way they want, but you still want them to play within certain corridors and wait for it to pan out as the game happens. The second thing is that, because you have many agents operating within an agent, you should route all of the inter-agent actions through an orchestrator. The third thing is very important: assigning the right memory type. Think of it like this: a creative agent needs brand history; it needs to know what your brand world is; it needs the brand positioning statement, the design structure, the design archetype. That is where choosing the right memory becomes extremely important. Likewise, your copywriter needs to know what has worked in the past for me as copy when creating the output that I have. So all of this is important: you have to decide on the right kind of memory that you would want your agent to have.
[44:34] [A fraud-awareness clip plays: never share card details or CVV; cyber criminals use fraudulent payment links to steal CVV details; make payments only via the official website.]
[44:54] >> Okay, now because you've taught me about the multi-agent model, I will not deploy one agent for writing. I will deploy multiple sub-agents and then connect them to the CEO, the lead writer. These sub-agents will be allocated the following tasks. Agent number one will be the topic scouter.
>> Mhm.
>> I will tell the topic scouter: Think School is a business school on the internet; we make geopolitical, economic, political, and business case studies. Then I will give the topic scouter access to Twitter, access to my YouTube comments data, access to YouTube trends, maybe Economic Times, Hindustan Times, and several other newspapers, and maybe some international news channels like BBC, Ground News, and so on and so forth. Once that is done, I will ask the topic scouter to find the most relevant subject according to the Twitter trend or the Google trend today, so that I can get maximum viewership. And I will also give it a constraint that it must only pick topics that might possibly have a one-week media cycle, so that I have time for production. Am I thinking right over here?
>> The way you've divided the scope and looked at specialized agents within the system, you've reduced the scope for error.
>> Perfect. Then I will place an orchestrator in between. This orchestrator will understand all the titles suggested by the topic scouter and then evaluate whether each title is relevant for Think School or not. Only if it is relevant will it pass the title on to a research agent. Is this correct? So, an orchestrator between two agents. Once this data is received by the research agent, the research agent will again have access to the news channels and all the research documents, like WHO research papers, market research papers, or World Bank research papers, using which it will collate data sets and stories that can then be written into a YouTube content piece. After this process is done, it will go to the copywriter.
>> Yeah.
>> Who will then write the piece of content in the most lucrative way possible, with the right hooks, the right storytelling, the right analogies. And here it will have access to all my scripts and maybe my communication masterclass where I've taught all these frameworks. After it writes everything properly, it will go to a compliance agent, maybe, to check if everything has been written properly. After this is done, the output is prepared and given to the lead writer, who does all the quality checks, and after the quality check is done, it goes to the editor.
[47:44] done, it then goes to the editor. >> Why do you need to do it?
[47:45] >> Why do you need to do it? >> But then what's the purpose of the lead
[47:47] >> But then what's the purpose of the lead writer? What's the purpose of the CEO?
[47:49] writer? What's the purpose of the CEO? >> Do you need one is a question that you
[47:51] >> Do you need one is a question that you need to ask. The moment you are trying
[47:52] need to ask. The moment you are trying to create a system,
[47:55] to create a system, what we have learned is that it has
[47:57] what we have learned is that it has changed the process. it has changed the
[47:59] changed the process. it has changed the flow because it doesn't operate like how
[48:02] flow because it doesn't operate like how you usually think as humans uh do. So
[48:06] you usually think as humans uh do. So exactly the question of do I need this
[48:08] exactly the question of do I need this now because if it is already compliant
[48:10] now because if it is already compliant if it has been asked to write something
[48:12] if it has been asked to write something which is always going to be compliant
[48:14] which is always going to be compliant then I don't need a compliant check
[48:16] then I don't need a compliant check after that maybe then it will do a
[48:18] after that maybe then it will do a viewership check whether this content
[48:20] viewership check whether this content piece will generate enough viewership or
[48:22] piece will generate enough viewership or not and if it doesn't cross a certain
[48:23] not and if it doesn't cross a certain threshold then maybe the loop can start
[48:25] threshold then maybe the loop can start all over again where it will then ask
[48:27] all over again where it will then ask the topic scouter to find another title
[48:30] the topic scouter to find another title and then carry out the entire process.
[48:32] and then carry out the entire process. >> Yes. Now this is exactly the way it
[48:36] >> Yes. Now this is exactly the way it behaves for us when we are making
[48:38] behaves for us when we are making creatives. It comes with a score to tell
[48:41] creatives. It comes with a score to tell us whether this creative is going to
[48:44] us whether this creative is going to give you the right the CTRs the
[48:46] give you the right the CTRs the historical CTRs is going to be better or
[48:48] historical CTRs is going to be better or not. So it's a it's a it's more of a
[48:50] not. So it's a it's a it's more of a probabilistic
[48:52] probabilistic output that comes along with the
[48:55] output that comes along with the creative output that it has. So it will
[48:58] creative output that it has. So it will reduce the errors that one would have
[49:00] reduce the errors that one would have otherwise had. So to your point, yes,
[49:03] otherwise had. So to your point, yes, you could always have a prediction come
[49:06] you could always have a prediction come to you on whether this episode is going
[49:08] to you on whether this episode is going to be viewed or not, but it's always
[49:11] to be viewed or not, but it's always going to be built on the past. Guys,
[49:14] going to be built on the past. Guys, it's simplify. You see, when you build
[49:17] it's simplify. You see, when you build an agent, you have to keep three
[49:19] an agent, you have to keep three important things in mind. Let's say you
[49:21] important things in mind. Let's say you are building a customer support agent
[49:23] are building a customer support agent with perplexity skills. Now, you can't
[49:25] with perplexity skills. Now, you can't just say, "Hello, just handle my support
[49:28] just say, "Hello, just handle my support emails perplexity." You have to feed the
[49:30] emails perplexity." You have to feed the agent with as much context as you would
[49:32] agent with as much context as you would provide to an intern. So for a customer
[49:35] provide to an intern. So for a customer support agent, you have to design it
[49:37] support agent, you have to design it with three very specific layers which
[49:40] with three very specific layers which you would anyways explain to a customer
[49:42] you would anyways explain to a customer support agent as in a human customer
[49:44] support agent as in a human customer support agent. So firstly, you need to
[49:47] support agent. So firstly, you need to define very clearly as to what counts as
[49:49] define very clearly as to what counts as a bug and what counts as a feature
[49:52] a bug and what counts as a feature request. So if you're using a CRM or if
[49:55] request. So if you're using a CRM or if you're automating customer support, you
[49:57] you're automating customer support, you have to do for the agent exactly what
[49:59] have to do for the agent exactly what you would do for a human. So you would
[50:01] you would do for a human. So you would spell out what is P1, P2, P3 and P4
[50:05] spell out what is P1, P2, P3 and P4 query. In other words, you will tell the
[50:07] query. In other words, you will tell the agent what is truly urgent, what can
[50:10] agent what is truly urgent, what can wait, what is nice to have and what is
[50:12] wait, what is nice to have and what is absolutely irrelevant. The agent cannot
[50:15] absolutely irrelevant. The agent cannot invent this logic by itself. You have to
[50:18] invent this logic by itself. You have to tell it to the agent. Secondly, you need
[50:20] tell it to the agent. Secondly, you need decision rules for confusion and
[50:22] decision rules for confusion and prioritization. So the agent must know
[50:25] prioritization. So the agent must know if two emails talk about the same issue,
[50:27] if two emails talk about the same issue, how do I avoid double logging it? If one
[50:30] how do I avoid double logging it? If one email is a critical outage and another
[50:32] email is a critical outage and another email is a UI suggestion, which one do I
[50:35] email is a UI suggestion, which one do I escalate first and which one do I send
[50:37] escalate first and which one do I send to a human in the loop? These conditions
[50:40] to a human in the loop? These conditions are what prevent the AI from mixing up
[50:42] are what prevent the AI from mixing up threads, spamming tickets or escalating
[50:44] threads, spamming tickets or escalating the wrong things. Thirdly, you should
[50:47] the wrong things. Thirdly, you should wire it into your existing tools using
[50:50] wire it into your existing tools using connectors. For example, you can connect
[50:53] connectors. For example, you can connect Perplexity with systems like Zohoesk,
[50:56] Perplexity with systems like Zohoesk, Gmail or ODU CRM so that the agent
[50:59] Gmail or ODU CRM so that the agent doesn't just answer questions in theory.
[51:01] doesn't just answer questions in theory. It actually reads real tickets,
[51:03] It actually reads real tickets, classifies them, updates your support
[51:05] classifies them, updates your support tool and hands edge cases only to
[51:08] tool and hands edge cases only to humans. That is how you move from a toy
[51:10] humans. That is how you move from a toy demo to a real workflow. So, please try
[51:13] demo to a real workflow. So, please try this out on Proplexity. And look at
[51:14] this out on Proplexity. And look at this. If you want to build an AI agent
[51:16] this. If you want to build an AI agent for customer support with perplexity
[51:17] for customer support with perplexity skills, the recipe is very simple but it
[51:19] skills, the recipe is very simple but it is very strict. Number one, you have to
[51:21] is very strict. Number one, you have to use connectors to give it access to CRM
[51:23] use connectors to give it access to CRM like Zohoesk, ODO CRM, Gmail or whatever
[51:27] like Zohoesk, ODO CRM, Gmail or whatever support stack you have. Secondly, you
[51:29] support stack you have. Secondly, you need to be extremely specific with
[51:30] need to be extremely specific with instructions like I mentioned before.
[51:32] instructions like I mentioned before. And thirdly, you need to start from a
[51:34] And thirdly, you need to start from a solid base instruction set and then you
[51:37] solid base instruction set and then you have to keep on refining and layering
[51:39] have to keep on refining and layering more and more rules as you see real
[51:41] more and more rules as you see real world edge cases. And the more precise
[51:43] world edge cases. And the more precise you are with these instructions, the
[51:45] you are with these instructions, the more reliable the agent becomes. And
[51:47] more reliable the agent becomes. And over time, you can move from an AI that
[51:49] over time, you can move from an AI that sometimes helps to an AI that quietly
[51:52] sometimes helps to an AI that quietly runs most of your support in the
[51:53] runs most of your support in the background. That is how you give the AI
[51:56] background. That is how you give the AI all the instructions, all the access and
[51:58] all the instructions, all the access and then keep improving on the basis of its
[52:00] then keep improving on the basis of its output and use cases. Now it brings you
[52:02] output and use cases. Now it brings you bring us to a very interesting point and
[52:04] bring us to a very interesting point and I was I think having a conversation with
[52:06] I was I think having a conversation with Mahesh on this and we realized that
[52:10] Mahesh on this and we realized that one of the biggest thing that's going to
[52:12] one of the biggest thing that's going to be important is subject matter
[52:14] be important is subject matter expertise. There was a era where general
[52:17] expertise. There was a era where general managers generalists were really the
[52:21] managers generalists were really the ones in demand and subject matter
[52:24] ones in demand and subject matter expertise could only grow to a certain
[52:26] expertise could only grow to a certain level in organizations. The there's
[52:29] level in organizations. The there's going to be huge trend reversal that's
[52:30] going to be huge trend reversal that's going to happen. the subject matter
[52:32] going to happen. the subject matter expertise is going to be more important
[52:34] expertise is going to be more important in the next 5 to 10 years because the
[52:37] in the next 5 to 10 years because the subject matter experts are the ones who
[52:39] subject matter experts are the ones who would know whether the AI agent is
[52:40] would know whether the AI agent is giving you the right output or not. Now
[52:42] giving you the right output or not. Now in this case a writer should be able to
[52:46] in this case a writer should be able to understand why the AI agent is giving me
[52:49] understand why the AI agent is giving me the output the way it is and should be
[52:51] the output the way it is and should be able to debug challenge the AI agent and
[52:54] able to debug challenge the AI agent and get the right output out of an AI agent.
[52:56] get the right output out of an AI agent. where most organizations think we can
[52:58] where most organizations think we can just fire all the experts, get the AI
[53:00] just fire all the experts, get the AI agent to do the job and maybe have an
[53:02] agent to do the job and maybe have an amateur handle the entire operations.
[53:05] amateur handle the entire operations. >> No, I think the the the
[53:08] >> No, I think the the the subject matter experts meets a great
[53:11] subject matter experts meets a great tech guy is where disruption is going to
[53:14] tech guy is where disruption is going to happen. Um because like I said, you will
[53:18] happen. Um because like I said, you will have an agent give you an output. the
[53:19] have an agent give you an output. the ability to know and challenge it needs
[53:22] ability to know and challenge it needs better understanding than an IQ level of
[53:25] better understanding than an IQ level of 150 160. For that you need to be good in
[53:28] 150 160. For that you need to be good in your in your own domains and they're
[53:30] your in your own domains and they're going to be back likememes are going to
[53:32] going to be back likememes are going to be in huge demand.
[53:34] >> So once this is done, can I just deploy the agent?
[53:37] >> Can you run a marathon on the very first day you decide to run a marathon?
[53:42] >> No. But this should be easier than running a marathon, right?
[53:45] >> Well, not really. Because, like I said, when you talk to a high-IQ person and ask a question, the person may give you an answer, and maybe 70% of the time it's right. But is it right in the context in which you operate? You don't know. So this is like getting the high-IQ person to operate in the world that you are in, and making sure that the probability is right.
[54:12] >> So Kedar, my last question to you: considering the advancement in agentic AI, should we be scared or excited? And who's going to lose their job?
[54:22] who's going to lose their job? >> See, in 1970s,
[54:24] >> See, in 1970s, there were these ATM machines that got
[54:26] there were these ATM machines that got launched. And the obvious thing was that
[54:30] launched. And the obvious thing was that a lot of the tellers will lose their
[54:32] a lot of the tellers will lose their job.
[54:33] job. From 1970 to now, there are more
[54:37] From 1970 to now, there are more branches, a lot more physical branches
[54:40] branches, a lot more physical branches that are there, a lot more ATMs, and
[54:44] that are there, a lot more ATMs, and there are still a lot more tellers that
[54:46] there are still a lot more tellers that are there inside the bank. The
[54:49] are there inside the bank. The difference is tellers earlier would only
[54:53] difference is tellers earlier would only do one job of just taking cash, making a
[54:56] do one job of just taking cash, making a note and to now they're able to do a lot
[55:00] note and to now they're able to do a lot more. They don't just collect cash but
[55:03] more. They don't just collect cash but they are also understanding the
[55:05] they are also understanding the consumer's problem as the consumer
[55:07] consumer's problem as the consumer enters the bank. They're also able to go
[55:10] enters the bank. They're also able to go a lot more deeper in the way they are
[55:12] a lot more deeper in the way they are building the relationship. So netnet AI
[55:15] building the relationship. So netnet AI agents will follow the same pattern.
[55:17] agents will follow the same pattern. Most roles don't disappear, they just
[55:20] Most roles don't disappear, they just transform. So it's going to be a lot
[55:24] transform. So it's going to be a lot more
[55:26] more lot more harder problems to be solved
[55:28] lot more harder problems to be solved than trying to to solve simpler things.
[55:32] There are three layers of work: execution work, coordination work, and judgment work.
[55:37] >> And when you think of these three layers: execution work is repetitive tasks, data entry, report building, scheduling, all of that. Coordination work is managing workflows, handoffs between teams, cross-functional dependencies, and so on. And the last is judgment work: the strategy, the creativity, the decisions that require human wisdom and empathy.
[55:57] wisdom and empathy. >> Now these are three broad contours. I
[56:00] >> Now these are three broad contours. I just want you to to to look at it. AI
[56:02] just want you to to to look at it. AI agents may replace the first two layers
[56:05] agents may replace the first two layers which is the execution work and the
[56:07] which is the execution work and the coordination work. Uh they're very good
[56:09] coordination work. Uh they're very good at execution and coordination. So
[56:11] at execution and coordination. So judgment work the top layer work becomes
[56:15] judgment work the top layer work becomes the place where most of us will then end
[56:18] the place where most of us will then end up spending a lot of time. I just want
[56:19] up spending a lot of time. I just want you to to look at the or chart changes
[56:22] you to to look at the or chart changes that are there right uh look at the left
[56:24] that are there right uh look at the left one that's today's organization the
[56:27] one that's today's organization the right one is where you you see the
[56:28] right one is where you you see the tomorrow's organization right so if you
[56:30] tomorrow's organization right so if you map it the senior leadership in today's
[56:32] map it the senior leadership in today's world does strategy in tomorrow's
[56:35] world does strategy in tomorrow's organization the senior leadership would
[56:37] organization the senior leadership would be doing strategy plus system design
[56:39] be doing strategy plus system design like how do you really design an EI
[56:41] like how do you really design an EI system is going to be a very very
[56:43] system is going to be a very very important job profile that that you
[56:45] important job profile that that you would start looking at from the senior
[56:46] would start looking at from the senior leadership the middle management today
[56:48] leadership the middle management today does a a lot of coordination.
[56:51] does a a lot of coordination. The way it'll evolve is you'll have to
[56:54] The way it'll evolve is you'll have to oversee and find the the right ways in
[56:57] oversee and find the the right ways in which the agentic systems are working.
[56:59] which the agentic systems are working. So that's how you'll see the
[57:01] So that's how you'll see the mid-management managers, senior managers
[57:03] mid-management managers, senior managers kind of evolving in ensuring
[57:05] kind of evolving in ensuring predictability about the output in the
[57:07] predictability about the output in the system that you built. And lastly, the
[57:09] system that you built. And lastly, the teams that do execution work today will
[57:11] teams that do execution work today will have to be looking at probably doing
[57:15] have to be looking at probably doing what the managers are doing. So they are
[57:17] what the managers are doing. So they are getting pushed to the to the mid and the
[57:19] getting pushed to the to the mid and the top level and that's where you see
[57:21] top level and that's where you see agents doing an execution and
[57:22] agents doing an execution and coordination. So that's one area where
[57:25] coordination. So that's one area where perhaps you'll see agents doing a lot
[57:27] perhaps you'll see agents doing a lot better. Uh but the rest you're
[57:30] better. Uh but the rest you're upskilling. All right guys, with Kar
[57:32] upskilling. All right guys, with Kar laying out the first principles, we've
[57:33] laying out the first principles, we've covered the ideation part of multi- aent
[57:35] covered the ideation part of multi- aent systems for marketing. And in the next
[57:36] systems for marketing. And in the next episodes, we will go deeper into text
[57:39] episodes, we will go deeper into text tax and see how to build a million
[57:41] tax and see how to build a million dollar sales engine with AI agents. So
[57:43] dollar sales engine with AI agents. So this episode might be a little heavy but
[57:45] this episode might be a little heavy but stay with me because it'll be absolutely
[57:47] stay with me because it'll be absolutely game-changing for you in the next few
[57:49] game-changing for you in the next few episodes and in the coming episodes I
[57:51] episodes and in the coming episodes I will also get an to give you a live demo
[57:54] will also get an to give you a live demo of how multi- aent systems work.
[57:57] Beautiful. This is quite exhaustive. Thank you so much, Kedar; this has been wonderful. I'll just quickly summarize all of our learnings, so that in case I miss something, you can correct me.
[58:06] Firstly, we learned the difference between automation and agents. You explained that automation is like a vending machine: you press one, you get milk; you press two, you get bread; you press three, you get an ice cream. If you press one and there is no milk, the vending machine will simply tell you that there is no milk, so you go back home disappointed. But if the same task were to be done by an AI agent, the agent would be smart enough to reason and tell you that it doesn't have milk, but that it can maybe get it delivered to your house, because the primary objective of that agent is to double the business by, say, achieving an NPS of 9, and it will do everything in its capacity to achieve that NPS of 9 and double the business. So agents are meant to achieve a certain objective, whereas automations are just meant to do a certain task.
[58:54] >> That's right.
[58:55] >> If the task is done, great. If there's a hindrance in between, the task will not be done. Then you spoke about the five things that an agent does, and you mentioned them in five steps. What does an agent do? An agent perceives, reasons, acts, executes tasks, and evaluates. If you start using an automation and an agent today, the automation will be exactly where it is today even 100 days later, whereas the agent will become smarter; as you keep using the agent more and more, it will become a better version of itself.
[59:29] >> Now, if this were to be distributed amongst 10,000 customers, the agent would also be able to evaluate the result from the campaign, adapt, and then maybe come out with a better iteration next time on the basis of the existing results. The one thing to note here is that your agent is able to do a great job because it has market insights, historical data, and your consumer behavior data, which is why it is able to make great creatives. So data is the new oil, and it's going to remain so for the next 50 years, because the more data you have, the smarter the agents you'll be able to build. Tomorrow, if a new bank comes out and it doesn't have as much data as Kotak, it won't be able to build extraordinarily smart agents as easily as Kotak.
[01:00:12] >> I would just add there: data is the currency, but you also need to pour the subject matter expertise into an agent for it to behave the way you would want it to 90% of the time.
[01:00:25] >> Then you mentioned that between two agents you need to have an orchestrator. For example, you gave the analogy of a general physician, who acts as the orchestrator before you go to a specialist like a heart surgeon. So you need an orchestration layer in between, so that while your information travels from one agent to the other, it is properly orchestrated. Then you have to assign the right memory; there are three types of memory: working memory, episodic memory, and semantic memory. Then you need to wire in governance. For example, when you asked your agent to make an affluent creative for your South Indian audience, it actually used Mahesh Babu's face. Now, that was not allowed, but it was able to do it because you had not mentioned the guidelines properly. This is where it is important to attach constraints to your agent, so that the agent doesn't do anything that might backfire. In this case, it could be the interest rate of 30%.
[01:01:24] >> Or the usage of Mahesh Babu's face.
[01:01:27] >> So the compliance and quality checks are important.
[01:01:29] >> Compliance and quality checks are important. Here's where you need to wire governance into your agent. And lastly, you need to assign the right APIs, so that the agent has access to the right tools and information to execute the task properly. For example, if it's an accounting agent, it had better have access to your CRM, your bank account, and your accounting software.
[01:01:46] >> The tools. Yes.
[01:01:48] Now, if you don't follow this process, what will happen? Either there'll be a context collapse, or there'll be quality dilution, or the agent will start hallucinating, or there'll be too many errors, or there'll be a governance blind spot which could backfire. So it is very important that you follow these steps so that your agent doesn't go rogue.
[01:02:09] >> Yes. I think you need to challenge the agent to be able to give you the right output and keep raising the bar. For that, you really need to know your own subject really deeply. While deciding between building and buying, ask yourself: could this be my moat? If that task or that process is your moat, don't give it out; build it in-house. If you think it's a commoditized process, you had better buy it. That's it. Ask four questions: Is this how we win? Do we have unique data? How fast do we need it? And can we maintain it? Regardless of whether you buy or build, you should be able to maintain it. So every single person who's watching this episode either needs to become that AI hire or needs to do some AI hiring.
[01:02:59] >> Does that summarize our conversation, Kedar?
[01:03:01] >> Yeah, it does. I think it will, if this point lands.
[01:03:05] >> Thank you so much, Kedar. This was wonderful.
