The Future of Cloud Infrastructure and AI
Notes
In this episode of The Hitchhiker’s Guide to IT, host Michelle Dawn Mooney welcomes Sid Rao, CEO and Co-Founder of Positron Networks, to delve into the future of cloud infrastructure and the transformative role of AI in IT operations. With over 25 years of experience, including a decade managing cloud infrastructure at AWS, Sid shares expert insights on how organizations are adopting AI and cloud technologies to enhance scalability, tackle security challenges, and drive innovation. As hybrid and multi-cloud strategies gain traction, Sid highlights the complexities and opportunities these approaches bring to modern IT environments.
From streamlining IT operations to automating workflows and optimizing resource management, cloud and AI are reshaping the technology landscape. Sid discusses real-world examples of how AI is boosting developer productivity, addressing data sovereignty issues, and enhancing security. This episode is a must-listen for IT leaders seeking to future-proof their infrastructure while navigating the challenges of adopting emerging technologies.
Key Topics Discussed:
- How hybrid and multi-cloud strategies are transforming IT scalability and cost management.
- AI’s role in streamlining IT operations, automating workflows, and enhancing productivity.
- Addressing cloud security and data sovereignty challenges while maintaining compliance.
Tune in to hear how cloud and AI are paving the way for the next era of IT transformation, and gain actionable strategies for navigating the rapidly evolving landscape.
Transcript
Welcome to the Hitchhiker’s Guide to IT, brought to you by Device42. On this show, we explore the ins and outs of modern IT management and the infinite expanse of its universe. So buckle up and get ready to explore the ever-changing landscape of modern IT management.
Michelle Dawn Mooney
Hello and welcome to The Hitchhiker’s Guide to IT, where we explore the most transformative trends and best practices in information technology. I’m your host, Michelle Dawn Mooney, and today we are diving into the future of cloud infrastructure and the integration of AI in IT operations. From enhancing scalability to driving innovation, cloud and AI are reshaping the technology landscape and enabling agile, data-driven business models. We will discuss how organizations are adopting these powerful tools to solve real-world challenges, optimize operations, and build the foundation for the future of digital transformation. I have a great guest to bring on today for the discussion. Sid Rao is CEO and Co-Founder of Positron Networks. Thank you so much for being with us today.
Sid Rao
Thank you, Michelle, for having us. I appreciate it. And, uh, look forward to the questions and the conversation today.
Michelle Dawn Mooney
Yeah, looking forward to it as well. Before we kind of dive in there, can I ask you to give us a brief bio so we can learn a little bit more about you before we get into the conversation?
Sid Rao
Absolutely. So, um, I’ve been in the software industry for about 25 years. Um, I built software for, uh, a broad range of small and large companies, from Microsoft to AWS. I spent ten years managing cloud infrastructure at AWS in a leadership role. Um, and then recently I’ve started a scientific computing startup focused around the infrastructure challenges that scientists face, especially as they leverage AI to accelerate and drive scientific outcomes for their research. So it’s been quite a broad garden variety of large, small, and tiny software companies along the way, but always involved in cloud services for a long while.
Michelle Dawn Mooney
Yeah. So clearly we have the right person for this conversation today. So we have a lot to cover, a lot of things to go over today and a limited amount of time. So let’s start off here with trends. What trends do you see shaping cloud infrastructure’s future?
Sid Rao
So, um, look, I think the impact of AI on cloud services is profound. Uh, first of all, you know, one thing I noticed, and I follow a broad variety of leaders in the cloud computing space, is there tends to be an overemphasis around generative AI and the impact generative AI will have on cloud infrastructure and cloud services. But where the impact is really showing up is in the types of computing services that are being purchased, whether it’s graphical processing unit-based services for inference use cases and inference applications, to less grid computing-type use cases and grid applications, to different storage technologies and storage architectures that we’re starting to see get deployed in the cloud. Um, so ultimately, a lot of conversation goes into bots and agents and, you know, RAG and other generative AI concepts, but AI is actually having a profound impact on the economics, the infrastructure, and the types of services I see customers deploying in the cloud. Um, and I think it’s also modifying the security model for cloud as well. You know, usually cloud services have been deployed in more of a single-tenant type of service, where each customer gets their own database or gets their own, you know, EC2 instances in the case of AWS. Um, and we’re starting to see a need for more multi-tenancy just due to the sheer cost of AI services and GPU services that are involved in powering these AI applications.
Michelle Dawn Mooney
So clearly, a lot of changes that we’re seeing. As we just started to touch on, how has cloud adoption affected IT operations and scalability?
Sid Rao
So, cloud operations, you know, look, I see actually a change that’s happening in the industry. Um, and a transformation that’s underway. Uh, historically, cloud was the craze. Uh, when I talked to large enterprise customers, they were all migrating large workloads, large applications to the cloud. Um, interestingly enough, as I’ve seen AI become more and more prominent, um, one of the things I’m noticing is that this trend is now moving a little bit in the opposite direction. Um, a number of the large enterprise IT decision makers I talk to are now starting to talk about on-prem deployments. Um, I think a little bit of that has to do with the fact that GPUs and the infrastructure required to power AI are very expensive. And, you know, the margins and the margin structure that go with, um, some of these cloud services just don’t work for leadership in these IT organizations. I think a second driver of this need for on-prem-type deployments, and this hybrid cloud type of approach to things, is storage. Um, and the broad amount of storage that’s required to power these AI applications, and where is that storage located? And the cost of storage over time in a cloud environment becomes challenging.
Sid Rao
So I’m seeing a lot more hybrid IT, you know, hybrid cloud deployments. Um, you know, a mix of private and public cloud. Uh, some very interesting use cases recently where they’re combining GPU-based hyperscaler deployments with public cloud deployments, uh, control planes that are sitting in public cloud and data planes that are sitting in private clouds. So I would say five years ago, uh, the world was in love with public cloud, uh, you know, everything was going to public cloud and there were no questions. Um, and, you know, I would see 100% of applications moved to public cloud. Now I’m starting to see this trend, which, you know, look, coming from AWS, it’s a little worrisome for me to see, uh, this desire for hybrid deployments between private and public cloud. I think cost is a driver, but I also think it’s data locality and security. Um, and I think some of the economic drivers of AI are having some impact on these decisions that CIOs are making.
Michelle Dawn Mooney
So how would you say AI is optimizing cloud performance and innovation?
Sid Rao
Well, so, I mean, look, I think the cost of developing a software application is going to go down by at least, you know, three x, maybe four x over the next 2 to 3 years. Um, why do I say that? Well, look, assistants like Claude, uh, you know, sorry, assistants like AI agents and things of that nature, uh, will have a dramatic impact on everything from DevOps costs to how much it costs to build an application, to how much it costs to deploy an application. I can go to ChatGPT and say, hey, please deploy this React TypeScript application into my AWS account, and it will, you know, efficiently generate the instructions required to do that. And as a result of that, developer productivity, developer operations productivity will go through the roof. So that’s just one direct impact that I think AI will have on migrating applications to the cloud and leveraging cloud. I think the second challenge, though, that’s going to come about, is the entire security and data model for AI, and especially in a cloud, multi-tenant environment, will be a point of hot debate. Um, you know, you go to a large bank today. I was working with a very, very large financial services institution, probably the largest in the world, um, recently. And, you know, they just have a belief that the weights that are in a, you know, GPT-style model or a large language model, those weights are nothing more than a proxy to the customer data.
Sid Rao
So in their world, all models must be isolated and deployed in a very single-tenant fashion, with their own GPU cluster powering them. But at the same time, they realize that doesn’t economically scale. Um, it actually becomes just as expensive as some of the human aspects they’re trying to replace. Um, and so then you get into the flip side of that argument, which is, we’ll create these multi-tenant GPU clusters. But then you have a data locality problem. You have a data sharding and distinction problem where, you know, two customers might accidentally share or mix data. Um, and so I think there’s going to be a huge debate in terms of the tenancy models that are used for cloud application deployments. Um, I would say the final impact that AI is going to have, apart from kind of the productivity and efficiency aspects of it, is around security. Um, you know, look, the ability to use machine learning models to determine, you know, a broad variety of threats and threat actors and threat modeling, um, it’s going to have a dramatic impact on how security is looked at, both from an attack perspective as well as a defense perspective.
Sid Rao
So, um, you know, I’m really excited about the impact that AI is going to have on all three of these areas. Uh, but ultimately, look, you know, I think the final thing is AI is now a household term. Um, you know, ten years ago, using a neural network was only left to those who were at MIT or at Harvard trying to do some of the most advanced work in artificial intelligence. Now, a neural network is something that my niece is talking about as a med student. And, uh, as a result of that, everybody is going to want some level of AI impact on their application. And as a result of that, you know, I think the real impact that we’re going to see is we’re going to have to figure out how to effectively make AI easy for the everyday user to use, and create models, for that matter. Um, and also make the DevOps and, you know, software development engineers able to leverage AI in a cost-effective way within their application without breaking the bank. And these are some pretty significant challenges coming up for the industry in terms of how AI will impact the cloud services industry.
Michelle Dawn Mooney
So, you know, as you said, AI is a household name now. Everybody is using it. If you’re not, what rock are you living under? Right. So there are so many applications when it comes to AI. What are some that come to your mind first and foremost that are enhancing cloud functionality and security?
Sid Rao
Oh. So, um, you know, Continue is a great example of this. Uh, Continue is a tool that accelerates development, accelerates the ability for developers to produce code and to produce solutions. Um, and, you know, and of course, we have everything from the GitHub agents that we now see from Microsoft, right? You know, Copilot for GitHub, for example, is another use case. Uh, Amazon Q from AWS is another common application that we see developers using for hosting cloud infrastructure, and, you know, basically deploying CloudFront, sorry, CloudFormation stacks within AWS. So yeah, we absolutely are seeing, um, just a large number of agents being developed, both in the security environments as well as in the cloud infrastructure space, which makes it easy for developers and DevOps engineers to deploy infrastructure and applications at scale to support their user base and their customer base.
Michelle Dawn Mooney
We’re always looking for ways to make things easier and more efficient. How can cloud and AI together streamline IT operations?
Sid Rao
I think, you know, look, at this stage, where you’ve got hallucinations, you still have an error rate within the models that are being used. Um, you know, I think the first place that most CIOs should look at leveraging AI is as a productivity-enhancing agent for their software developers and DevOps engineers. Um, and by that, what do I mean? I mean, it’s an agent that provides assistance in performing, you know, common tasks, or automating common tasks within the IT department. Um, whether that’s a patching task, or an upgrade task, or a pipeline deployment task in a CI/CD deployment. Um, you know, look, what I will say, though, is, with caution, some of these agents that automate those tasks can be brought into play. And why do I say with caution? Well, look, you know, it’s all about what your service level agreement is with your user base. If your SLA is, you know, four nines, AI hallucinations at a, you know, 0.5% rate will potentially break your service level. Um, so you have to be very thoughtful about hallucination and error rates with an AI, realizing you will never eliminate error rates with an AI.
Sid Rao
That is just a function of neural networks and a function of the mathematics involved behind a neural network. And as a result of that, um, you know, if you’re okay with a 1% failure rate, you can use it in an automated fashion, but as a productivity-enhancing service where you’ve got a human being making a decision, uh, it’s absolutely awesome. And I think, you know, we’re going to find, and there are examples of this, like AWS just recently published a, um, a study where I believe they accelerated Java upgrades across an IT department by ten x, saving that Fortune 50 company, um, you know, north of literally tens of millions in activity. So, you know, look, I think AI is going to be about productivity enhancement for now, until the error rates can be more in line and we start to use multiple bots together for redundancy purposes. And that also means the costs have to come down for AI as well. But, you know, initially, in this productivity story, I think it’s going to have a great impact on IT operations.
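Sid’s back-of-the-envelope point about four nines versus a 0.5% hallucination rate can be checked in a few lines. This is just an illustrative sketch in plain Python, using the figures from the conversation; the helper name is our own:

```python
# Compare an AI agent's hallucination rate against an SLA's error budget.

def sla_error_budget(nines: int) -> float:
    # An SLA of N nines (e.g. 99.99% for four nines) allows a
    # failure fraction of 10^-N (0.0001 for four nines).
    return 10 ** -nines

hallucination_rate = 0.005        # the 0.5% rate cited in the conversation
budget = sla_error_budget(4)      # four-nines SLA -> 0.0001

# How far an unsupervised agent at that rate overshoots the budget.
overshoot = hallucination_rate / budget

print(f"error budget:       {budget:.4%}")              # 0.0100%
print(f"hallucination rate: {hallucination_rate:.2%}")  # 0.50%
print(f"times over budget:  {overshoot:.0f}x")          # 50x
```

The same arithmetic is why he frames AI as a productivity assistant with a human in the loop rather than a drop-in automation layer: a 0.5% error rate consumes the whole four-nines budget fifty times over.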
Michelle Dawn Mooney
And you touched on productivity, and I want to dive a little deeper. How does this combination impact both productivity and resource management?
Sid Rao
So look, on the resource management side, it’s a double-edged sword. It’s really a double-edged sword. Uh, today, if you want to go buy the infrastructure that’s required to power AI applications, uh, it’s not easy. Um, if you’re buying it through services like OpenAI, or buying it through Anthropic, and it’s a managed service and they’re responsible for the underlying compute clusters, look, it’s really straightforward to consume infrastructure there. But the challenge there is, it’s almost like a software-as-a-service application hosting your most sensitive data. Um, and you have to think twice before you give one of those organizations access to your entire data repositories that are required to power these AI applications. The second approach is to try to host it on your own and use Llama, use many of these open-source AI models to do it. And the flip side challenge you’re going to run into now is you have to host the infrastructure. And GPUs are not cheap. Um, you know, they are very expensive pieces of hardware. Um, there are intense amounts of demand. You’re still talking about month-long waiting lists to get access to them. Cloud providers often require you to purchase reserved instances and reserve GPU capacity to gain access to it.
Sid Rao
Um, so I think, you know, ultimately, it’s kind of funny. I mean, AI is going to definitely help drive efficient use of IT infrastructure and IT resources and cloud resources for applications that don’t require AI, but requiring AI in its own right will actually drive up infrastructure utilization with very expensive resources, these GPU-powered resources. Um, and then the final thing, you know, I think, Michelle, that folks in the IT environments are kind of forgetting about is the sheer amount of data that’s required to make these models be very successful. Um, you know, look, every cloud provider out there, Anthropic, OpenAI from a model provider perspective, all say, oh, just give us a couple fine-tuning examples and we’re able to do a great job. Well, no, that’s not exactly true. Um, you know, if you really want to train the model on a broad variety of nuance and various different perspectives on the data, you have to have a large data set. And storing that large data set in a cloud environment is not a trivial problem. And worse yet, you know, you can’t just store it in S3 in AWS, for example, or in blob storage in Oracle.
Sid Rao
You actually have to store it in something that’s fast enough to feed the GPU. And, um, the cost of storing it on a high-speed access mechanism with the latency that’s necessary to power that GPU is intensely problematic for CIOs and IT folks to handle today. So look, I think AI is a double-edged sword. Um, and I think the market’s starting to get past the hype and is starting to realize that, look, it has the intense power of taking applications that don’t require AI and making them efficient and successful, but it also has the intense probability of making IT applications have runaway costs as well. So, um, you know, I was reading about a venture capitalist the other day who basically said, look, usually when we’ve had these disruptions, the disruption has reduced costs for customers by ten x. AI has yet to show that it can reduce costs by ten x. Um, and that is going to have a profound impact on how we get past this hype cycle as an industry and how much impact AI has on the industry overall.
Michelle Dawn Mooney
So I want to continue down that path, talking about a few pain points, because this isn’t just butterflies and roses what we’re talking about. I mean, a lot of solutions, a lot of good coming from AI and combining it with cloud infrastructure. But let’s talk about some challenges that organizations are facing when it comes to cloud security and data management. What are your thoughts there?
Sid Rao
Oh, do we have enough time? Yeah. That is, uh, that is a very, um, very, very challenging story. Um, so, first of all, a lot of security professionals in these IT organizations don’t really yet understand what data is stored in a model. Um, and this is a very simple problem for the people who build models all day. But for your traditional IT security person, it’s really concerning, because, you know, you go to ChatGPT and you ask it a question, and magically it’s able to retrieve data and render basically what could be potentially sensitive data back to the user. Well, hold on. That data is actually in a vector database that is actually, you know, containing the real sensitive data. And that vector database absolutely needs to be, you know, securely managed and handled, because it maps between basically the way that a model like the GPT model models the world and the real data that’s being used for rendering customer responses. Um, and I think, you know, look, the thing to keep in mind about the whole data sovereignty issue and how much data is in the model is, education is super important. And these model providers need to start being a lot more transparent about: where does the data that’s being used to train the model come from? What happens during fine-tuning? Where is the vector database that’s used stored? How is that data managed, and what’s the tenancy model of that data structure? They really need to come forward with a very transparent threat model. Unfortunately, in the process of doing that, they are worried about giving up their intellectual property.
Sid Rao
They’re worried about exposing how the model was developed and what data was used to train the model. So we’ve got a lot of tension between the desire for innovation and creating the best models for customers. We’ve got another piece of tension out there, which is, you know, how users want to use these models. And then you have your security managers and security and compliance folks who really don’t quite understand what’s going on within a model. And I don’t blame them, because, you know, you’ll often hear an ML engineer say, well, all that a model is, is just a big array of floating point numbers. Uh, what’s the harm in that? There’s no customer data in it. Well, that’s not exactly true, because those floating point numbers result in a token which maps in a vector database to actually extremely sensitive information. So, um, you know, there’s definitely a need, I think, for more transparency and governance, um, and threat modeling here. Um, and ironically, society hasn’t keyed in on this yet. They’re more focused, I believe, on bias, and bias is very important. And bias is very concerning in the world of AI. But actually, security is where the real threat lies, because these models are no longer just historical data predicting a future result. These models are continuously learning and continuously evolving. Um, and we’ve reached that point, and that’s going to change the entire security and threat model that needs to be applied for AI applications in IT environments.
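Sid’s “floating point numbers are a proxy to the data” argument is easy to see in a toy retrieval loop. The sketch below is purely illustrative (hand-made three-dimensional vectors and fake records stand in for a real embedding model and vector database): the embeddings themselves look like harmless floats, but the store maps them straight back to the raw records.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical vector store: (embedding, raw record) pairs.
# The second element is exactly the kind of sensitive data Sid describes.
store = [
    ([0.9, 0.1, 0.0], "account 000123, balance $1.2M"),
    ([0.1, 0.8, 0.3], "routine branch-hours FAQ"),
]

def retrieve(query_embedding):
    # Return the raw record whose embedding is nearest the query.
    return max(store, key=lambda rec: cosine(query_embedding, rec[0]))[1]

# A query vector close to the first embedding recovers the raw record:
print(retrieve([0.85, 0.15, 0.05]))  # account 000123, balance $1.2M
```

This is why the vector database, and not just the model weights, belongs inside the threat model: anyone who can query it with the right vectors gets the underlying sensitive data back.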
Michelle Dawn Mooney
There are a lot of players in this field. Why is it so essential for organizations to choose the right technology partner? And what impact can it have on their long term success and adaptability?
Sid Rao
So I actually am of the belief that, um, in the world of AI, it is way too early to pick a winner and a loser. Um, I actually think that’s a fundamental mistake. Um, you know, it is a sea change, and the natural desire is, you know, you can’t go wrong with Big Blue, which is what we used to do with IBM, right? And there are many cloud providers out there. Um, I used to work at one where the concept would be, you can’t go wrong by picking AWS. Um, you know, they’re the number one. They’re the first. They have the largest market share. Um, in terms of actual growth rate, they have the fastest-growing cloud market share. Uh, I’m not talking about percentage growth rate, which is not really a good way to measure growth. Um, and so it’s very natural for a CIO to say, look, uh, you know, my CEO wants me to do some stuff with AI. I’m just going to go find the people from AWS, run a hackathon, and then announce a new collaboration with AWS for solving all my AI challenges. I have actually seen that happen now a few times. And, um, I will tell you, that’s a mistake. Um, because with startups, with the small companies, there’s a breadth of innovation going on. Um, and, you know, I think there are two ways to look at it. If you’re picking a foundation model, that’s a very different story. Yes, you’re going to have to pick between ChatGPT and, sorry, OpenAI and Anthropic and, you know, DeepMind or, you know, Perplexity or you name it, right? Like, there are 100 different foundation models out there, um, open-source Llama, you know. And so if you’re picking a foundation model, yeah, you’re probably going to have to pick something, because you’re building a custom model.
Sid Rao
Yes, you’re definitely going to have to pick a provider. But if you’re just looking to integrate AI into your existing applications, no. Let many seeds sprinkle, is what I would say. Um, create a, you know, a clear policy mechanism for deciding when a model can and can’t be used, and with what type of data. Enforce that policy. Then let your developers pick the models they think are going to be most successful for the applications they’re building. And then, you know, measure user experience, measure those net promoter scores for those applications, and see which, you know, models are doing well in which environments. Um, this is a learn-and-be-curious moment for IT. Um, this is not a select-your-partner moment at all. I think if we do that, we’re going to make a mistake. Um, and, uh, you know, I think that we’re not even done yet. Um, there is no clear winner yet in this space, either. Um, and I don’t think there will be for probably another decade or so, um, at the rate that the investments are going right now. So anyway, that’s how I feel about the whole partner selection thing. Don’t select a partner is actually what I would tell you right now. Um, you’re going to pay a price for doing that. Um, the cost of supporting multiple different models and partners is a lot cheaper than picking a single partner and losing innovation and losing a capability that your competitors will be able to offer.
Michelle Dawn Mooney
You mentioned before that AI is a double edged sword, but we don’t want to just talk about the bad stuff, right? We’re talking about the good stuff, too, because there’s a lot of good that really can come from this. So this is my favorite part. Can you share examples of successful AI integration in cloud operations?
Sid Rao
Oh, yeah. Um, you know, a great example, uh, in fact, I was just reading about it today, is how, um, you know, they’re starting to use AI models specifically in detection of distributed denial of service attacks in security environments. Distributed denial of service is an extremely complicated problem to solve, because you’ve got traffic originating from all over the world, all kinds of IP addresses, all kinds of originating networks, and you’re trying to basically block it and allow your actual legitimate customer traffic to come through while blocking all the illegitimate traffic that’s coming from an attacker. Well, um, I know Cloudflare and AWS and Azure have all now got basically a generative model running against their web traffic logs that allows them to detect when traffic is illegitimate and coming from a DDoS attack source and automatically block it. Um, and that model has become really good, because, um, one of the things that DDoS attackers try to do is they try to simulate human behavior with their illegitimate traffic. So it becomes really difficult to determine, is this really DDoS traffic, or is this actually traffic that is legitimate? And generative AI models are able to discern when the traffic truly is automated. Um, and so, you know, look, I think in the world of security and log discovery, log scanning, um, threat detection, uh, it’s going to have a massive impact.
Sid Rao
And I think that’ll move on to availability. Um, so being able to automatically scan logs and determine when there’s a failure. And I think the third application where we’re going to see, um, you know, AI really shine is in cloud deployments, pipeline creation, CI/CD, um, because these are very repetitive tasks. So, like, making sure operating system patches roll through your entire infrastructure fleet in a consistent way. We’re definitely going to see AI have an impact there on those repetitive tasks. So, um, yeah. So I think those are kind of the three areas, but they tend to be in the bowels of IT in terms of impact, right? Um, they’re not, um, you know, at the front door. Uh, you know, I’ll also give you an example of where I’ve seen AI have a reverse, um, impact, which is, you know, in the contact center space. Um, we’ve seen AI, you know, make claims of replacing human agents, things of that nature. Yeah, no, uh, those have been colossal failures. Um, so, again, I think where it’s an assistant, it does great; where it tries to be a drop-in replacement, it’s still lacking. And, um, I think it will stay that way for a while. Um, and especially when the error rate requirement is less than 0.1% or 0.0001%, I think that rule is going to stay true for a while.
Michelle Dawn Mooney
Following up to that, staying true for a while and looking to the future. I know this is really hard to pinpoint because it’s just ever changing. And you know, the future is so unknown. But what are your predictions for cloud and AI evolution over the next, say, decade?
Sid Rao
Um, you know, look, I think one prediction that I will make is every prediction that any expert is going to give will definitely be wrong. Um, I will tell you that. Um, no, I will say there are some obvious things that we can point out. Uh, one is that this entire business of GPUs and how, um, you know, how AI models execute, this won’t be a problem in five years. And I would say even within ten years, the cost of doing these massively parallel matrix operations required to power generative models will be very cheap and very cost-effective. We will build the silicon to make this a commodity operation, where it’s pennies per hour to run AI models. So I think that’s one thing. I think the second prediction I can easily make is that edge computing and the edge application of models, um, will eventually become not really that relevant anymore, because, um, ultimately, the cost of bandwidth has fallen through the floor. Um, and the availability of fiber is so common and ubiquitous now, and 5G. I think what that drives is, better models are cloud-driven versus driven at the edge. So I think edge models, unless it’s truly, like, a super latency-sensitive, you know, story, like, I don’t know, something in the physical security space, for example, comes to mind.
Sid Rao
Um, you’re going to see edge models, you know, becoming less and less relevant over time. I would say the third kind of prediction is, I hate to break everybody’s heart, but I’m sorry, I don’t buy the fact that we will see models replace human beings, um, anytime soon. Um, there’s a liability issue, which is, um, who takes the liability? Who can you blame for a model’s failure? And people ultimately want accountability. Um, and accountability is part of the benefits of an application, and of a service, for that matter. Um, and, you know, when it’s a bot, you really don’t know who is to be blamed. Is it the bot developer? Is it the ISV who used the bot? Is it the enterprise who’s hosting the application that has a bot in it? Who holds the bag is really a problem. And so I don’t think we’re going to see replacement, because ultimately you need accountability. Um, you need ownership. Um, somebody needs to own the problem. And bots, while they’re good at responding right now, they’re not good at owning things. And, um, that’s a major drawback of AI. And I don’t think you’re going to solve that, um, through better language models. So anyway, um, those are kind of three things I would think about when I’m looking at the future of AI over the next decade.
Michelle Dawn Mooney
How do you see AI playing a role in advancements in multi-cloud and hybrid strategies?
Sid Rao
On the multi-cloud and hybrid story: for now, GPU availability issues and the extreme expense of running these models have forced a lot of enterprises into hybrid cloud and, as I already said, into multi-cloud. The number of enterprises saying, "Look, I'm going to do a combination of on-prem, Microsoft, and AWS" is going through the roof. Every cloud provider used to prioritize all-in deployments, where the enterprise uses a single cloud for an entire application, and I don't think that's going to work anymore. I think that's also going to drive a need for standardization of some of the core APIs that power the cloud. You can't have a different API for starting a virtual machine in AWS, Azure, and GCP forever. We're going to need some level of standardization, almost like POSIX was back in the day for the Unix world. I think we'll have to see something very similar for cloud, because cloud is turning into an operating system, right? It is an operating system for distributed applications, and you can't have three different APIs for how to open a file. There needs to be a standard library. Terraform is trying to do that, but you've got four different versions of Terraform in play for deploying applications. So I think we're going to see some standardization, because GPUs are necessary across all of these environments, and people will always be fighting for the largest, most computationally intense model. That creates hardware shortages, which then lead to multi-cloud and private-cloud strategies.
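To make the POSIX analogy concrete, here is a minimal sketch of the kind of provider-neutral "standard library" Sid describes. The adapter classes and the `start_vm` interface are hypothetical illustrations, not real SDK calls; a production adapter would delegate to each cloud's actual compute API.

```python
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """One standard interface in front of each cloud's 'start a VM' API."""

    @abstractmethod
    def start_vm(self, image: str, size: str) -> str:
        """Start a virtual machine and return its identifier."""


class AwsProvider(CloudProvider):
    def start_vm(self, image: str, size: str) -> str:
        # A real adapter would call AWS's EC2 instance-launch API here.
        return f"aws:{image}:{size}"


class AzureProvider(CloudProvider):
    def start_vm(self, image: str, size: str) -> str:
        # A real adapter would call Azure's compute API here.
        return f"azure:{image}:{size}"


def launch_everywhere(providers, image, size):
    """The same call works against every cloud behind the interface."""
    return [p.start_vm(image, size) for p in providers]


vm_ids = launch_everywhere([AwsProvider(), AzureProvider()],
                           "ubuntu-22.04", "small")
print(vm_ids)
```

This is essentially what Terraform providers do today, except each provider still exposes cloud-specific resource schemas; the standardization Sid anticipates would push the shared interface down into the APIs themselves.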
Michelle Dawn Mooney
What advice do you have for IT leaders on cloud and AI adoption?
Sid Rao
Experiment, but don't commit. Experiment, but don't commit. Like I was saying before, this is not an environment to bet the farm on AI or to be bold about any one particular AI strategy. It's one thing to say AI is going to have an impact on how we look at a problem, or that it's going to be a strategic driver or influence. It's a whole different ballgame to say we're going to bet the farm on moving an entire workload to being AI-powered, replace entire operations teams with AI bots, and take radical action. In every case so far, we've seen that backfire. Like any other new technology, there's a hype cycle right now. That hype cycle will die, and what will be left are the durable applications that actually generate business value. The final thing I'll say is that it's easy to fall in love with AI because it looks cool. I'll just be honest: when I use OpenAI or one of these services, I'm like, wow, this is amazing. It's easy to fall in love with it. But you have to ask: what are the economics of what I just did? Was that economic value really worth it, and am I truly saving money as an organization? Return on investment is still king; cash is king. Senior leaders need to be very careful and realize that in some places AI could still be a fad. So be cautious is what I would say about AI.
Michelle Dawn Mooney
And I want to ask you one last question, because the bottom line is the bottom line, right? It comes down to the dollars. What strategies help align cloud and AI with business goals?
Sid Rao
The crucial thing is the mechanism. Automation is the number one place AI is going right now: taking a manual process and automating it. The crucial thing there is to test, and test, and test every single assumption of the financial model that's being used to justify that application. Say you think you're going to use AI to read through hundreds of thousands of bills and automatically determine where to save money, to give a really simplistic example. The first thing to test is whether the model can actually read a bill properly and come up with an understanding of what's in that data structure. Only then do you move on to testing the other assumptions in your model. So come up with a financial model, try it out, and test every assumption in it before you go off and invest in the big-bang application. I know it seems obvious, but we often assume software is consistent, and AI is not consistent software. So test your assumptions, and test them at scale, before you bank on the money you think you're going to save or make from using AI in your business operations.
Michelle Dawn Mooney
Any final thoughts? If people have questions, they’re looking for more resources based on what we’re talking about today. Where can they go? Where can you send them?
Sid Rao
Sure. I'm on LinkedIn; please feel free to reach out to me through my profile. Second, visit my startup's website. If you contact us through the website or through LinkedIn, I'll be more than happy to help you with your cloud infrastructure problems and to direct you to resources that can also help. So thank you for your time; I appreciate it.
Michelle Dawn Mooney
And thank you for being here, Sid. That is going to do it for this episode of The Hitchhiker's Guide to IT. A big thank you to Sid Rao, CEO and Co-Founder of Positron Networks, for sharing his valuable insights into the future of cloud infrastructure and, of course, the transformative role of AI in IT operations. Appreciate you being here, and thank you for your time, Sid.
Sid Rao
Thank you, I appreciate it. Thank you for your time, Michelle, and thank you for having me.
Michelle Dawn Mooney
A great conversation, and thank you again for being here. I also want to thank all of you for tuning in and listening to the podcast. If you enjoyed this episode and want to catch more insightful discussions like today's, don't forget to subscribe. And for more information on how Device42 can support your IT infrastructure needs, you can visit their website. I'm your host, Michelle Dawn Mooney. Thanks again for joining us; we hope to connect with you on another podcast soon.