Amdocs (Generative AI)

March 5, 2024

Corporate Speakers

  • Matt Smith; Amdocs; Head of IR
  • Anthony Goonetilleke; Amdocs; Group President of Technology and Head of Strategy

Participants

  • Tal Liani; Bank of America; Analyst

PRESENTATION

Matt Smith^ Thanks, Bella. Hi, everyone. I'm Matt Smith, Head of Investor Relations for Amdocs, and welcome to today's webinar, the aim of which is to recap and go behind some of the buzz coming out of Mobile World Congress, which took place in Barcelona last week. Before we get going, just a reminder that today's discussion will include some forward-looking statements, which are subject to the risks and uncertainties outlined in our SEC filings.

We may also reference some financial measures that are non-GAAP in nature, and you can find the reconciliation of those measures to the most comparable GAAP measures on the Amdocs IR website and also in our 6-K filings with the SEC. So with that, let me introduce our host for today. I'd like to welcome Tal Liani, who's a Data Networking and Cybersecurity Analyst at Bank of America Equity Research. And Tal's going to be joined by Anthony Goonetilleke, who's the Group President of Technology and Head of Strategy for Amdocs.

Tal and Anthony are both freshly back from a long week at Mobile World Congress, and they have a lot of great insights to share. So with that, I'm very happy to hand everything over to Tal, and Tal, the floor is yours.

QUESTIONS AND ANSWERS

Tal Liani^ Hey, thanks, Matt.

Thanks, Anthony, for doing this.

Anthony Goonetilleke^ Thank you. Good to be here.

Tal Liani^ Yes, Anthony, it was a great week last week; the vibe was great. And you've been to many Mobile World Congress shows over the years. What was new and interesting this time? How would you describe the attendance, the vibe, the level of excitement? Where was the focus this year versus previous years?

Anthony Goonetilleke^ Yes, look, I think Mobile World Congress is definitely back. I think the vibe was good, as you said. It was pretty much crowded everywhere. You know, my litmus test is the restaurants in Spain, and they seemed to be pretty booked every night. But look, it was very, very hard to go past any stall, any booth, any floor without talking about AI. I call it, you know, a sprinkle of AI.

Everything, everyone, I mean, this was the hot topic.

And you know, there's a lot of excitement. There's a lot of buzz around the world. So I think rightly so. Of course, you have your main themes like connectivity, which still, you know, is pervasive at a conference like this, because that's what everyone's there to talk about at the end of the day.

Interestingly, you know, we're starting to talk about, I think I had an interview with a magazine and they wanted to know about 6G and what we're thinking about 6G. So we're thinking about connectivity, but maybe a little bit differently this time. I think we're, you know, I call it ubiquitous connectivity, meaning like, hey, like I don't care what it is, just connect me, just give me the best speed, whether it be LTE, 5G, 6G, fiber, whatever it is, just give me the best connectivity.

So I think connectivity was still pervasive. And there were some very interesting themes, maybe on the periphery, which I thought were very important as well.

Security continues to be a very core focus. So when you think about things like, you know, how do you secure your data, not just as you're transferring it, but also in memory on CPUs. There were a lot of discussions about that. I know there were some legislative discussions in the last couple of days where even the government was thinking about how you secure data in memory and things like that.

So as the communications industry, I think a lot of people are very focused on this because you're held to a much higher standard, I would say. And of course monetization: at the end of the day, all the vendors there, all of the companies, the service providers, were talking about monetization. How do you continue to monetize the services? How do you continue to monetize connectivity? There's a lot of CapEx investment that has gone into fiber rollout and into 5G. How do you monetize these?

So, you know, of course there were discussions such as network exposure and things like that.

But predominantly, if I had to say one thing, it was mainly around AI and generative AI. And that, I would say, was the big theme.

Tal Liani^ Got it. And specific to Amdocs, what are the key themes? What's the message that you had for investors and customers? And what announcements did you make that you think are worth highlighting?

Anthony Goonetilleke^ Yes, I think a few things. Clearly all of our customers, every meeting, and we had, you know, probably a thousand-plus meetings with various customers. And it's all a great opportunity for us because we get to meet with all of the C-level executives of all of our customers from around the globe. So, at least for the Amdocs management team, it's a really nice place to meet and interact with our customers all in one place, without having to put in thousands of miles or rack up frequent flyer points.

But the main theme again seemed to be, how do I use AI? How do I use generative AI to generate business results? How do I drive results out of it?

I mean, to be fair, I would say the majority of discussions that we have and the majority of thinking was around cost efficiency, right? Cost leadership, operations, how do you drive costs down? How do you do things better? But there are, I would say, side benefits to this, because a lot of the time, when you introduce AI to drive costs down, you're also introducing a better, more contextual customer experience.

So, for example, rather than being on a phone and talking to someone for 15 minutes, you know, you start to kind of focus in on resolving a problem at the first attempt, understanding why the customer's calling, using all of that information to provide a very direct answer and resolve it very quickly.

So a lot of the discussions we had were around generative AI, driving costs down, and running POCs. And I have to tell you, I don't think there was one meeting I went into where a customer said, "Hey, we are not interested." So every customer, first of all, was interested. This is number one. Second, every customer is experimenting, doing something in some shape or form.

Thirdly, I would say that we are fast approaching the crossroads of having to show business results, right? So let's say 2023 and the early part of 2024 were about experimentation and pilots and things like that. In the second half of 2024 and going into 2025, we are really going to have to prove the technology out, show real results, show how concepts like trusted AI are going to work, how security is going to work in AI, because these were all very, very big topics.

And talking about key topics, the cost efficiency of AI was a big one that came out. Clearly, as Amdocs, we have a lot to say on this, because in everything we do and look at, we're trying to work on a model that allows our customers the optionality to use different large language models, to use proprietary models, to use open-source models, because we know that in the long term, you're going to have to balance these out rather than double or triple down on one and be locked into it.

And some of the announcements we made reflected this: an enhanced partnership, extending our partnership with Microsoft on not just the customer experience side but also around generative AI, and expanding our partnership with NVIDIA so we can bring what we call a large language model garden to our customers. That gives them optionality on these things and allows them to architect solutions that give them the best of both worlds, because this space is moving so fast.

You know, I have to tell you, Tal, I kind of pride myself on trying to keep up with technology and what's happening. This is probably the fastest-moving space I've seen in anything in the last 20 years. Like, you wake up, and everything has moved. A couple of weeks ago, I'm sure you saw the results around Sora and all the excitement around the world. Tomorrow, it's something else. So I think our customers are trying to get a grip, trying to get a handle on where all of this is moving. But at the same time, they also realize that they need to start showing some results and delivering some results on this stuff.

Tal Liani^ So I want to take a step back maybe and just ask why Gen AI is important and not regular AI like the traditional AI we had before. How do you define Gen AI and what's the difference really?

Anthony Goonetilleke^ Yes, at the end of the day, we could probably chat for an hour about it. But if I had to summarize the benefits of generative AI versus traditional machine learning or AI, look, one of the best examples I can use to articulate it is that I am surprised at some of the emergent behavior once you feed in all of this data, right?

So for example, we have models where we take large language models, we have our telco taxonomy, and we feed in real-time data of customer history, customer bills, customer transactions. We put all of this information together, and then you essentially give it a problem to solve. You ask it a question. And how articulate the information is, how precise it is, some of the emergent behavior that you didn't expect to come out of it. I have to say, everything we showed at Mobile World Congress, for example, was live. These were live large language models that we were showing. And I always hold my breath, right? When you're doing these pilots with live models in front of customers, you're typing questions, it's returning responses, and you're like, oh my gosh, I hope it works the way we expect it to work.

And sometimes, I would say six out of 10 times, I was even positively surprised, going, wow, this is a really good answer, I didn't think of that. So I think this emergent behavior, this one plus one equals three, is some of the wow effect of generative AI, right?

And the leapfrog, like even when you take a step and look at some of the stuff that Sora is doing in terms of text to video, for example, right? This is a leapfrog. This is not, you know, you're going into a video editing program and it can assist you and help you do things better.

You're now jumping a couple of categories here and you're going from text all the way to video. And think of these capabilities, like one of the things I think was also pervasive throughout the conference was the angle of generative AI coupled with physical devices. So I think in the next six to 18 months, we're going to see a lot of physical devices coming into the space built in with generative AI, right? So you can think of, you know, you've heard of Humane coming out with their little lapel device that you can put on, that basically if you think about it, it's taking videos of the world, coupling it with large language models or multimodal large language models, and giving you analysis and results on the fly.

Like this is groundbreaking in any shape or form.

And then, you know, I was playing around with the Apple Vision Pro, for example, and someone the other day got a large language model and ran it on an Apple Vision Pro. So this is the ultimate edge architecture, and you're running all of this technology on it. So I think the innovation that's coming out of it, the speed and the results, are very different to traditional AI. With traditional AI, you needed a bunch of scientists, you needed a bunch of technical people. Today, a 14-year-old kid sitting in Vietnam can create something amazing using generative AI. And the same technology, the productive emergent behavior that we're seeing, I think is groundbreaking.

Tal Liani^ Yes. And what's your business model around Gen AI?

Anthony Goonetilleke^ Yes, so if we can jump to slide five here, Matt, we'll illustrate it a little bit better. So we come at it with two approaches. Our first approach is, clearly, we have a product suite, our CES suite, and every one of our modular components is infused with generative AI. So that's the left-hand side of the slide. If you take something like our catalog, it comes out of the box with copilot capabilities, with guiding capabilities.

So yesterday, if you had to go for a two-week training course on how to use our amazing product catalog that we felt so good about, tomorrow you could maybe have one day training and you could come and say, "Hey, create me a 5G plan and look, I want to target people in Dallas or I want to target millennials in Dallas," and click return, it'll give you five sample plans and you can pick it and you can edit.

So all of our products now are rolling out with AI capabilities, same with our CPQ product, our billing products. Starting earlier this year, these began to roll out.

On the right-hand side, what we traditionally, or internally, I would say, call type B is our use case factory, or app factory, of capabilities that we think you can put on top of any system. So we're system agnostic; it doesn't matter if you have an Amdocs system or a non-Amdocs system. Basically, using our amAIz GenAI platform, you can leverage the abilities of generative AI and bring them directly to your business operations, to your call centers, or even expose them to your customers.

So if you look at our model, yes, we're going to double down on our products, and they're going to come rolled out with this. So if you have our products, fantastic, you're going to benefit from it. But on the right-hand side, even if you don't have our products, or if you have some of our older products, you can still benefit from the generative AI capabilities.

Tal Liani^ Got it. And about three quarters ago, you launched a platform; you call it amAIz, I think. Can you go over it? What is this platform? How does it help you? How does it help your customers?

Anthony Goonetilleke^ Yes, and if we can just jump onto the next slide there. So this platform is very telecommunications focused. So we took our decades of history of logical data models, of how entities relate to each other, and we kind of infused it into training a model to be able to understand the world of telco or the telco taxonomy.

So for example, take something like proration or early termination fees, which telecommunications companies are very familiar with. Each of these simple business ideas has so many different correlations that relate to it. And our model understands this.

So for example, when you ask it a question, when you feed it information, it understands the impacts on all of these entities. The second thing is we come with a set of use case kits. This allows our customers to build these use cases very fast, because it would be fantastic for me to say, hey, Tal, I have 40 use cases, which I do today, by the way. But two weeks from now, maybe there is something else that you want to work out. Maybe there's a combination.

So our use case kits are very modularized. So it allows you to pull in components together, stitch them together, create your own use case components.

The third part is around trusted AI. So we provide a governance framework. So it gives you the guardrails because the last thing you want here is to break your policies, your procedures, your rules in the catalog on what discounts you can give and things like that. So we infuse all of these together, together with, you know, what your customer really wants to achieve.

And the final thing I would say is we allow you the optionality, as I said at the start, to integrate with any of the large language models. So whether it be OpenAI, whether it be, you know, Meta's LLaMA 2 or Mistral, we allow you this optionality to use different large language models. And in some cases, by the way, in our use cases and in our apps, we use multiple large language models, even within the same transaction. And the reason for that is, again, cost, right?

So it's one thing to say, "Hey, I just want to go and ask a question," which is fantastic. But imagine now the repetition of doing this thousands and tens of thousands of times over a whole year. Your costs can go up very quickly.
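To make the cost argument behind using multiple models concrete, here is a minimal, hypothetical sketch of routing requests between a cheap and an expensive model. The model names, per-token prices, and the complexity heuristic are all invented for illustration; this is not Amdocs' actual routing logic.

```python
# Illustrative only: route each question to the cheapest model that can
# plausibly handle it, so high-volume simple traffic avoids the expensive model.
MODELS = {
    "small": {"cost_per_1k_tokens": 0.0005},  # cheap model for routine questions
    "large": {"cost_per_1k_tokens": 0.0300},  # expensive model for hard questions
}

def route(question: str) -> str:
    """Pick a model tier with a toy complexity heuristic (invented for this sketch)."""
    hard = len(question.split()) > 12 or "compare" in question.lower()
    return "large" if hard else "small"

def cost_of(question: str, tokens: int) -> float:
    """Estimated cost of answering one question with the routed model."""
    return tokens / 1000 * MODELS[route(question)]["cost_per_1k_tokens"]

# A routine billing question stays on the small model; a multi-bill analysis
# goes to the large one. Repeated tens of thousands of times a year, the gap
# between the two per-call costs dominates the aggregate bill.
routine = cost_of("Why is my bill higher this month?", 500)
complex_ask = cost_of(
    "Compare my last six bills and explain every change in my roaming charges", 500
)
```

The point of the sketch is only the shape of the decision: per-transaction model selection is what lets the same workflow mix several large language models, as described above.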

So this is something that we're very focused on. And, you know, something that we also bring to the table is the ingestion of all your data. So, quote unquote, you know, you can think of it as the plumbing of all of your real-time data, how you access it. All of this needs to be done and be orchestrated in the correct way.

And if we have time here for a second, I'd love to, like, jump to a little demo here to show you very quickly all of what I've spoken about in action.

So if you see here -- actually, let's just go back to slide seven here just before the video starts for a second. Can we just stop the video? Yes.

So if we go to slide seven, here we have what is, you know, traditionally what looks like a customer bill. And, you know, you can imagine there's all sorts of charges on the bill, you know, going from, you know, what you pay for, bandwidth increase. I would say, you know, as an industry, we probably still need to do a better job to make this better.

But this is what customers usually see, right? And what we do is, you know, we did an analysis, for example, and we saw that about 30% of calls coming in are very simple. Hey, there is something wrong with my bill, or why is my bill so high, or why was my bill higher this month than last month?

So if you look at these two things, even if you are an agent, you're sitting there comparing bills, trying to figure out what's going on, trying to figure out what changed. But let's jump to the video for a second now. This process, by the way, even for the best agent, would take approximately 15 minutes to answer the question. But here we have a simple question where you come in: hey, why is my bill higher than the last one, right?

So we plug this information into a large language model, we go to the Amdocs amAIz platform, we look at real-time data sources, including the last six months of your bill history, we look at the current offers, and we say, hey, your bill is higher due to a few factors: you exceeded your plan's data allowance, which was an extra $15 charge. It's very simple English, it's very clear, but it also gives you an opportunity to potentially upsell here, right? Because then the question was, hey, are there any more plans with an increased data allowance?

So straight away, we look at the person, we go to the product catalog, we pull out offers from there, we combine them with the contextual information that was provided, and then we come back and go, yes, hey, we have some plans for you. There are two plans that we think are relevant for you. And here's the magic, right? The click to action. Right there, you can say, hey, buy this plan now or change to this plan, you confirm it, you're done.

So here you have within like 45 seconds, and we can just stop the video for a sec.

Within 45 seconds, you had a customer who was irate, calling about a bill he didn't understand, and you didn't take 15 minutes to compare the two bills and explain them to him. You also created an opportunity to upsell a new product, and the chances are the customer walks away fairly satisfied. Number one, he got the correct information about what happened. Number two, there was the opportunity to upsell. And number three, he just said, oh yes, this looks good, let me do this, and it's orchestrated.

So Amdocs really is probably the only one positioned in a way that can do all of those things, right? Understand the information, provide it, understand the contextual nature, but then also orchestrate it and have that click to action, where you just click it and make all of that magic happen behind the system.

So that's what kind of amAIz brings together.
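The demo flow just described, comparing bills, explaining the delta, and pulling relevant catalog offers, can be sketched as a few steps of plain logic. This is a deliberately simplified, hypothetical illustration: the function names, data shapes, and catalog fields are invented here and are not the amAIz API, and the model call that would produce the natural-language answer is omitted.

```python
# Illustrative sketch of the bill-explanation flow: find what changed between
# two bills, then select catalog offers to attach as an upsell suggestion.

def explain_bill_increase(current: dict, previous: dict) -> list:
    """Return plain-English reasons the current bill exceeds the previous one."""
    reasons = []
    for item, charge in current.items():
        delta = charge - previous.get(item, 0.0)
        if delta > 0:
            reasons.append(f"{item}: extra ${delta:.2f}")
    return reasons

def suggest_plans(catalog: list, needs_more_data: bool) -> list:
    """Pull up to two relevant offers from a (toy) product catalog."""
    return [p for p in catalog if p["more_data"] == needs_more_data][:2]

previous_bill = {"plan": 60.00, "data_overage": 0.00}
current_bill = {"plan": 60.00, "data_overage": 15.00}
catalog = [
    {"name": "Unlimited Plus", "more_data": True},
    {"name": "Basic", "more_data": False},
    {"name": "Unlimited Max", "more_data": True},
]

reasons = explain_bill_increase(current_bill, previous_bill)
offers = suggest_plans(catalog, needs_more_data=bool(reasons))
```

In the real system, the computed delta and the retrieved offers would be fed to a large language model as grounded context, which is what keeps the generated explanation specific to this customer's bill rather than generic.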

Tal Liani^ So you said in the past that your Gen AI platform is geared towards the telco market. Why can't telcos develop the same thing? Why do they need your help?

Anthony Goonetilleke^ Yes, a few different things. First of all, of course they can, but it's going to take them time and it's going to take them money. We took decades of information from our logical data models, our telco taxonomy, our experience. And once we ingested the data, we also trained it. We saw some behavior that we felt needed to be fine-tuned, so we did all of this fine-tuning of the model. We did all the inferencing, right?

So when we come to the table, we've done all of that stuff for you. We've invested in R&D and we bring all of this to the table for you. And the only thing you need to do is to make it contextual to your customer base.

So you're now leapfrogging easily 12 to 18 months' worth of work, tens of millions of dollars, and decades of experience from across the globe, which, even if you had it, I'm not sure how you would stitch together. And I feel like this is the value proposition.

But then there are two other little things. One is the ingestion and understanding of real-time data. There's a technique called RAG, retrieval-augmented generation, which is really the augmentation of a generative AI model with real-time data. We bring this to the table, along with the know-how, and we even have some patterns around it, for example. So when we're ingesting information from a bill, we don't just take the bill as a PDF and shove it into a large language model. We take the source files, we understand the detailed XML files, we take the proprietary information, and we know what changes to look for and how to ingest them.

Same with the pricing components, the discount components. You add all of this together and you improve the level of accuracy, right? We did some side-by-side tests against standard GPT models available in the market, and even the best of them provide 60% to 65% accuracy. Currently, we're achieving levels of 93%, 94% accuracy, and we believe this will only go higher. And I think this is the differentiator of using Amdocs, number one, and then just the acceleration.
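The structured-ingestion idea described above, parsing a bill's source files rather than a rendered PDF, can be sketched briefly. The XML layout, element names, and helper functions below are invented for illustration; a real billing source format would be far richer.

```python
# Illustrative sketch: parse a (made-up) bill XML source and keep only the
# charge lines that changed, so the model receives a compact, grounded context
# instead of an opaque PDF.
import xml.etree.ElementTree as ET

BILL_XML = """
<bill month="2024-02">
  <charge code="plan" amount="60.00"/>
  <charge code="data_overage" amount="15.00"/>
</bill>
"""

def extract_charges(xml_text: str) -> dict:
    """Read structured charge lines straight from the bill's source XML."""
    root = ET.fromstring(xml_text)
    return {c.get("code"): float(c.get("amount")) for c in root.iter("charge")}

def changed_charges(current: dict, previous: dict) -> dict:
    """Keep only the lines that differ from last month's bill."""
    return {k: v for k, v in current.items() if previous.get(k, 0.0) != v}

current = extract_charges(BILL_XML)
context = changed_charges(current, {"plan": 60.00})
```

The design point is that working from the source data preserves exact codes and amounts, which is one plausible reason a retrieval-grounded approach can reach much higher answer accuracy than prompting a general model with an unstructured document.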

I mean, we are doing pilots, and I tell my team, "I don't want to do a pilot that's going to take more than 90 days," right? We can do pilots that show you actual business results, including integration, ingestion of data, orchestration, and tuning or fine-tuning to your customer base, in 60 to 90 days.

And I don't think there's really anyone else that can achieve that level of productionization of generative AI in the telco space today, at least not without a lot of time and money.

Tal Liani^ Yes. Are you worried about competition in the space?

Anthony Goonetilleke^ Yes, look, first of all, competition is healthy; it pushes all of us. For the last 12 to 18 months, we have been waking up focused on nothing but this, in every aspect of our business, from front to back. And one thing that really was led by Shuky, our CEO, all the way down, was this mantra of embracing the technology in all aspects of the business, right?

And this is not like a little side homework project that someone needs to do. This is our CEO standing up and saying, "Hey, this is something that's pervasive. This is something that's going to change the way we do things. And I want our entire business to change and make a paradigm shift to align to this."

And this is what we did very, very early on, and I'm very grateful for that, because now we're starting to see the results of it. So are we worried about competition? Always, because we look at our competition, and I'm sure they're looking at us. But on the other hand, we're very focused on ourselves, meaning we are focused on delivering customer results, on increasing our accuracy levels, on giving optionality for large language models. There are a lot of things we need to be focused on without looking at our competition. And maybe, you know, they can be focused on us for once.

Tal Liani^ Yes. Now, as I said before, when you presented it, you said it's focused on telcos. What are the opportunities to expand it to cable, cloud, maybe even enterprise?

Anthony Goonetilleke^ Yes, look, when I say telcos, I'm talking about telecommunications in the broader aspect, right? Cable, wireless, we're agnostic to that. Definitely, we have many of the world's best cable customers. We have dual-play, triple-play, quad-play all over the world. So when I talk about service providers, I'm talking about the entire gamut here.

Definitely around enterprise as well. We're seeing some early use cases around private networks, doing some spectrum adjustments for that, and some other use cases. And of course, our framework could potentially be used in other places.

I mean, that's not what we're focused on right now, meaning other verticals outside telecommunications, but it could be very easily adjusted to that if we so decide to go down that space.

But right now, it's around communication service providers in the broader context, MVNOs, cable providers, wireless providers, multiplay providers, and enterprises that provide connectivity.

Tal Liani^ Got it. So Gen AI could be both an opportunity and a threat. How do you view it? Like, you know, does it provide tailwind for your growth or is it a potential disruption to your business model?

Anthony Goonetilleke^ Yes, look, first of all, think about the disruption to the world, right? Let's start with that. There are going to be people that embrace it, and there are going to be people sitting on the sidelines, just talking about it and doing nothing about it.

So first of all, I think the key thing is it's here, it's real, it's something that people need to embrace. So absolutely, it'll be a disruption. And disruption is not a bad thing, right, if you're on board with grabbing it, you understand it, and that's where you're going.

I think disruption is a bad thing if you're trying to push against it and say it's not going to happen. So from this perspective, I'm not necessarily worried about the disruption. In everything that we build, we have a very, very solid generative AI focus. All of our product managers, for example, in my group, and we have a big product management organization, every one of them knows that if I have a review with them, if I have a meeting with them, the first thing I'm going to ask is, okay, tell me how you're infusing generative AI into this product, right? Tell me how you're capitalizing on it.

Tell me how the business is going to benefit from having this capability, right?

If you take one of our products like CPQ Pro, one of the cool things we bring to the table is a revenue acceleration opportunity for our customers. So it's not just about Amdocs benefiting. First and foremost, the question I ask is how our customers benefit from it.


Disclaimer

Amdocs Ltd. published this content on 14 March 2024 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 15 March 2024 14:38:08 UTC.