Sponsored by Reblaze, creators of Curiefense
Justin Dorfman | Richard Littauer | Tzury Bar Yochay
Group Product Manager, Edge Cloud for Enterprise and Telecom
Hello and welcome to Committing to Cloud Native Podcast! It’s the podcast by Reblaze where we talk about open source maintainers, contributors, sustainers, and their experiences in the Cloud Native space. Today we have as our guest, Prajakta Joshi, who is a Group Product Manager at Google driving Edge Cloud for Enterprise and Telecom. She joined Google in 2015, has been working with Tzury since 2016, and really knows all about the history of Cloud Native and how it really started. Prajakta tells us about her background and how she ended up in the Edge Cloud space. We also find out where open source fits in, revenue streams going towards the open source ecosystem to make it more sustainable and more useful to Enterprise and Telco users, and more on the evolution of serverless, gRPC, and service mesh. We also hear Tzury’s really interesting idea of the future Cloud, and Prajakta shares her perspective on how she handles the good and the bad and where the focus should be. Download this episode now to learn much more!
[00:02:10] Prajakta tells us about her background and how she keyed into Telcos.
[00:04:27] We learn more about the Edge Cloud product, how Prajakta ended up there, and how she manages consistency.
[00:10:37] Prajakta mentions Kubernetes and Richard asks where open source fits in.
[00:15:12] Richard asks Prajakta if she has any thoughts about revenue streams going towards the open source ecosystem and how that would work.
[00:20:33] Prajakta explains more about how we have revenue streams going from Enterprise and Telcos back into the open source projects to make the entire system more sustainable and ultimately more useful to Enterprise and Telco users.
[00:25:27] Tzury wonders whether, instead of having so much manual labor, the real future Cloud would be a fully automated cloud, bottom-up from the infrastructure level itself, and Prajakta tells us what she thinks about this.
[00:28:31] Prajakta elaborates more on the evolution of serverless, gRPC, and service mesh.
[00:34:51] Tzury wonders what the oldest service in Google is that Prajakta knows of and can share with us, one that is still running the same way it was running at the beginning and was never migrated to any fancy-schmancy new tech.
[00:38:06] Find out about the two parts of service mesh and what Traffic Director does.
[00:39:08] Tzury asks Prajakta how it feels being the greatest on one end and still the underdog on another, and how she deals with this frustration or excitement and how it affects her day to day.
[00:44:27] Find out where you can follow Prajakta on the web.
- Executive Produced by Tzury Bar Yochay
- Produced by Justin Dorfman
- Edited by Paul M. Bahr at Peachtree Sound
- Show notes by DeAnn Bahr at Peachtree Sound
- Transcript by Layten Pryce
[00:00] Justin: Hey, it's Justin, co-host of this podcast. We just released Curiefense version 1.4, which includes support for NGINX. It has UI improvements, security improvements, and much, much more. So just go to curiefense.io/blog to see what else we improved. Now enjoy the show.
[00:21] Prajakta: When I started off as well. Like a lot of the focus was on, okay, what's the next cool tech we built. And slowly as you start building it, you start realizing like for example, service mesh, it doesn't solve all problems for customers. You really need to bring other bits and pieces and you need to keep evolving these technologies. And I think the thing, the pivot that I made was okay, let's not start from the tech. Let's start from the problem we are trying to solve, or the new experience we are trying to deliver or whatever business value you're trying to build and then build backwards, or integrate the stuff that exists. I think that is sort of the thing nobody really talks about.
[00:58] Richard: I love that perspective. That's brilliant. Everyone, that is Prajakta Joshi. She has joined us today from Google and she is so on top of her game that she immediately launched into one of the most insightful things we've had on this podcast. So welcome to the Committing to Cloud Native Podcast. This is the podcast where we talk about the confluence of cloud native and open source, how we get them together, how we build great things. Today, I'm joined, as usual, by Justin Dorfman and Tzury Bar Yochay. Thank you so much for joining us, both of you other hosts. I'm Richard Littauer, and our guest today is Prajakta Joshi. Prajakta Joshi is a Group Product Manager at Google, driving edge cloud for enterprise and telecom. She joined Google in 2015, has been working with Tzury since around 2016, and really knows all about the history of cloud native and how it really started. So Prajakta, you were just saying that what's happened already is that we just start building stuff, but we're not stepping back and saying, okay, who are we building this for? How can we solve the problem? And what is the problem in the first place? Can you expound a bit more about your background? Because I know that you don't just think in terms of business needs and in terms of, like, the community, but you are also actually really keyed into telcos. Can you explain how that happened?
[02:24] Prajakta: So a little bit about my background, I started off as an engineer. At the time, you know, this was in the early two-thousands, the problems were point problems, if you will, where somebody has an application and they need to scale it up because they have this massive growth of users. And at the time it was just load balancing technology. So I started working on that, and then along the way, as different customer problems started cropping up, there was technology like CDN that I built out, and perimeter-based security, and so on and so forth. And then slowly what happened was customer workloads themselves stopped being in a data center or in a single place. They started proliferating and appearing wherever they were needed. For example, somebody could not migrate all at once to cloud and they were sitting on premises, and slowly the kind of technology that you needed to solve these problems got more and more complex.
[03:18] You needed multiple pieces to go solve a customer problem. And then, you know, often, like you brought up the case of telcos. For cloud providers, the first set of stakeholders that we solved for were enterprises, like your retailers and your financial institutions and healthcare and so on and so forth. In the last few years, you've seen a lot more interest from telcos in adopting those same cloud native paradigms, which is where the telcos came into the mix. And I think one of the best things about cloud native is that it is breaking down very traditional silos that have existed, say between enterprise and cloud, or how we solve for enterprises versus how we solve for telcos. And a lot of what I'm building or doing right now is about building common technology that can solve for each of these stakeholders in a consistent way. At the same time, we have to meet them where they are. So not all traffic of telcos is going to be HTTP/HTTPS, and we have to build for that, just as an example. It has been an evolution. Even on my part, I went from building products to actually starting with the solution and building products backwards. And I'm still on the journey of learning how to do that as well.
[04:27] Richard: So I'm new to the cloud native space. And so I'm not entirely sure what edge cloud is in particular. Can you describe what that product is and how you ended up there?
[04:38] Prajakta: Let's talk a little bit about edge. When you say edge, different people will probably have a different version of what edge is, and a really simple way to think of edge is, it's everywhere. It's like we have this massive, globally distributed edge. And so, if you're on premises in your data center, that's an edge, if you really think of it that way. Or if you're actually sitting in a retail store, or even when you have a cell phone, the device actually could be an edge. And then obviously you've got other edges. Like, we have [05:08 our PoPs], the telcos have their network edge, we have third-party providers who have their edges. So I think, very simply, when we say edge cloud in Google Cloud, what we really mean is, you know, you bring your workloads to the cloud, you need to have similar constructs and similar technologies and similar paradigms.
[05:28] Even if you don't deliver your workload in cloud. If you need it on premises, that's where we bring cloud to you. Or if you need it in an oil and gas location, that's where we bring it to you. Or if you need it in your data center, that's where we bring it to you. I think of it more like distributed cloud. This is a term that [05:43 Gartner] introduced. It's an insightful term, because it is really about bringing cloud to wherever your workload needs to be, as opposed to forcing the workload to come to cloud. And so that's basically edge cloud. You bring Google Cloud and you bring the related technologies and paradigms. And then, at the end of the day, you map these to the use cases and business value that you want to deliver to the customers.
[06:06] Richard: So you're describing sort of hybrid uses as well, not just that in data centers, but also to end-users. How do you manage consistency while doing this?
[06:16] Prajakta: I think that is one of the most interesting and challenging problems to solve. Now let's talk about all of the things in the mix. So one is, you've got the customer premises. The second is, you've got Google Cloud. The third is, you have our edge locations. The fourth is, you have also other cloud providers in the mix, because a lot of our customers are also multicloud. Managing consistency across this is why you see a lot of these cloud native technologies spring up in the first place. So for example, when you say manage consistency, as an end user, or as an enterprise, or even as a telco, what are you trying to do? You want to go and spin up your compute, whether it's containers, whether it's VMs, whatever you need, in the same way. You want to apply policies in the same way, as much as possible.
[07:04] And this is still an evolving space, more at the service level than the infrastructure level. So instead of saying, apply my firewall here to this VM and that container, and then to these other hundred services, you should be able to say service A can talk to service B. It's a much easier thing for all of us and the admins to grok. The third part of it is observability. So now you've deployed these workloads in all of these different locations, whether it's the edge or whether it's your data center, you need to be able to see what's happening to the traffic end to end. So I think observability, which includes monitoring, automation, and so on and so forth, I think that is [07:41 inaudible] we talk about consistency.
[07:44] And I think the last bit is, I sort of alluded to that in observability, but it is automation, the move to everything as code. So like configuration as code, security policies as code; that lends itself to being automated well. And the more you can get humans out of the loop, the fewer the errors and so on and so forth. So a lot of the consistency also is related to almost ways to make this thing automated. And now you ask how to do it? That is basically what all of us are trying to solve for, which is why, again, I talk about the solution itself. Like, what does consistency mean? If you look at somebody who's trying to go, say, migrate their workloads out of their data center, they may not be able to do it in one shot. And so they take some of their key workloads, or the ones that they can move fast, and they move them to cloud. Some workloads may never move to cloud due to compliance and other reasons.
[08:38] So now you've got this workload running in two places. And not everything in the world is containerized or is going to be containerized. Like, there are different things that benefit from containerization, and some workloads will never containerize. So for consistency, which is why, you know, Tzury and I often talk about service mesh. The good thing about a paradigm like service mesh is it is agnostic to the type of compute, which means I can talk about services that are based on VMs and services that are based on containers exactly the same way. And that is an example of consistency. When I apply my policies, whether the workload is in my data center or in my cloud, if I can say enable mTLS between service A and service B, that is a level of consistency. If I can easily orchestrate traffic routing, so for example, I introduce a new version of my service, send 99% of traffic to the old version and send 1% of traffic to the new version so I can test it.
If I can do it similarly, both in cloud and anywhere else the workloads are deployed, that is consistency. So there is not one thing that's going to deliver us consistency, but I think one overarching thing, or like just a tenet, is the more you start upleveling where you apply policies and where you automate, the easier it is to make things consistent, which is why you see a lot more conversation around service mesh, or even previously Kubernetes and service mesh, and then to abstract out everything that's on top of serverless and so on and so forth. But it is one of the most interesting and challenging problems to solve for.
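The 99%/1% canary split Prajakta describes can be sketched as a tiny weighted router. This is a toy illustration in plain Python, not any particular mesh's API; the service and version names are made up:

```python
import random

# Toy weighted canary router, loosely illustrating the 99%/1% split
# described above. The service and version names are hypothetical.
ROUTES = {
    "checkout": [("checkout-v1", 0.99), ("checkout-v2-canary", 0.01)],
}

def pick_backend(service: str) -> str:
    """Pick a backend version for `service` according to its weights."""
    backends = ROUTES[service]
    r = random.random()
    cumulative = 0.0
    for name, weight in backends:
        cumulative += weight
        if r < cumulative:
            return name
    return backends[-1][0]  # guard against floating-point rounding

# Over a large sample, roughly 1% of requests should hit the canary.
sample = [pick_backend("checkout") for _ in range(100_000)]
canary_share = sample.count("checkout-v2-canary") / len(sample)
```

In a real service mesh this routing lives in the data plane proxies and is driven by declarative configuration, which is exactly what makes the same split applicable to VM-based and container-based services alike.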
[10:15] Richard: Thank you. That's an excellent explanation. I love how you go into service meshes and how they're really important for consistency. One of the questions I have is that you're presenting this rosy-eyed vision of, like, corporate projects, large projects, enterprise, telcos, doing all of this work and providing use cases to all their customers. This is Committing to Cloud Native. It's about open source too. You mentioned Kubernetes briefly. Where does open source fit in?
[10:39] Prajakta: You know, the way I have started looking at open source and the way I have started looking at even all of these technologies is what does it enable? If I'm an enterprise why should I care about open source? Like, what is the benefit to me, or if I bring in Kubernetes or service mesh, what is the benefit to me? So I'll give you an example of some of the interesting things that open source brings, and it's not open source for the sake of doing open source. So for example, like Istio, I'll just take the example of an open source service mesh or Kubernetes, which is an open source technology. First, the sheer number of different stakeholders who are building the technology generally leads to diverse inputs into building the technology. And so you will land up with a better product because you don't have a one-sided view of what the solution or this product should look like.
[11:29] I think that's one of the biggest benefits of open source. And we have seen that in the case of Kubernetes, which has now become the de facto standard for container orchestration. I think the second big reason to really look at open source seriously is, and you know, there is open source and then there are open interfaces. So for example, you had a guest, Anna, who spoke about this product we call Traffic Director. Now, when we built Traffic Director, which is a service mesh control plane, it communicates with a proxy called Envoy in the data plane, and for the interface between those two, we adhere to the open interfaces, which means tomorrow, if you don't like Traffic Director, you can swap it out for any other implementation. So one of the good things about open source and open interfaces is it gives you a choice.
If I, as a customer, test something out, as long as I can swap it out for something else that preserves the same interface, it makes my life easy, and I can change my mind after I've deployed a technology. And I think the third thing is, a lot of the open source technology, especially in the cloud native bucket, they are built for applications which need to be automated at scale. They are built for applications to scale inherently. Security is a first-class citizen. It's not an afterthought. Like, I was involved in building some of the early load balancers in the two-thousands, and security was not at all a consideration to start with. It was like, okay, I have this hardware box, how many queries per second can it take, and how much can it scale out to the backend servers? Then, when the services started getting DDoSed, is when we were like, oh, we need to go add DDoS defense, and so on and so forth.
But one of the newer paradigms, like, you look at mesh or Kubernetes, they treat security as a first-class citizen, and there is more and more awareness around embedding security, traffic management for the developers to automate, write code, a separation of boundaries. Like, a lot of these cloud native technologies, they separate out, and service mesh is a very good example, applications from, say, the networking logic. So I think that cleanliness and hygiene of interfaces is something that is seen more and more in the cloud native products. So I think, overall, that is the reason to adopt open source versus, you know, if an organization decided to build something in-house. If you look at Kubernetes, it came out of Borg. Like, Borg was a technology in Google, but taking it to the community enriched that whole product. Like, it became 10X of what it was at Google.
[14:03] And I think you get the power of the community with open source. You're not going to be able to build such a big R&D team for every component you need. So I think the more you get plugged in, and one of the things, and this is specifically now from personal experience. The more you contribute back to open source, the more you can get out of it because you can then influence the direction. If you simply consume it, in some ways you benefit from what others are doing, but you never get to influence the roadmap. So it's just a two-way street, in some sense, like as you leverage open source as an enterprise or through our products, we, as well as the enterprises are contributing back in a bigger way, like, you'll see a lot of enterprises starting to contribute back to open source.
[14:45] Richard: I love that answer. Brilliant. Couldn't agree more. I mean, open source is what I live and breathe, and that's just spot on. At the beginning of this podcast and this conversation, we mentioned telcos, and you've written in our notes doc here, which is an awesome document. It's basically good enough in itself to publish at this point about telcos trying to create new revenue streams, using cloud native technologies and opportunities. I'm curious if you have any thoughts about revenue streams going towards the open source ecosystem and how that would work?
[15:16] Prajakta: So I have this very interesting and strange background where I keep switching between working with enterprises and telcos. And so I have also seen the thought process evolve over the last 10, 20 years. If you really look at what telcos are trying to do, if you look at maybe more recently, they essentially have three things to solve for. One is, they have their own IT workloads. These workloads are very similar to, you know, enterprise IT workloads. For example, they're big users of our data and analytics products, so they need compute and they need storage, and they have applications that serve their own IT. The second big chunk of almost use cases or products that the telcos have is their core network. That's sort of their, you know, that's their lifeline. That's what actually drives their business. And these networks have existed way before any of us did.
I mean, if you look at how far back Indiana Bell goes, like, they are from a hundred-plus years ago. So I think the second part is the network. And then the third part is, well, think of your cell phone bill, right? Even if I gave you 5G on a cell phone, you're only going to pay so much more for your bill. You're not going to pay 10X of what you pay monthly. So you need to deliver new experiences, or you need to go and solve for new use cases, to generate new revenue streams. And these are sort of the three things that are top of mind for telcos. Now, if you look at telco IT, the journey to becoming cloud native is very similar to enterprise, because the IT workloads look like enterprise workloads.
[16:54] With the telco network, it's a different ballgame, in that a lot of these telco networks, for example, are VM-based, especially if you look at the 4G networks. A lot of these networks have existed for so long that making them cloud native is hard. We cannot just tell telcos, go cloud native. It is not as easy as that. We have to start, maybe, by deploying a greenfield side by side, by converting some of these brownfields to greenfields, and so on and so forth. And then, I would say, the most interesting bit, or at least the most interesting happening in recent times, is what we have done with telcos. So three, four years back, if you told people AT&T and Google Cloud, or AT&T and some of the other announcements, or Google and some of the other announcements we've done, are happening, people wouldn't believe it, because cloud providers and telcos did not partner so closely. What we have realized is there is a win-win to be created, with the beneficiaries being the enterprise customers.
[17:51] So for example, take the case of a retail store. With COVID, everybody's budgets are slashed. The retailer wants to go deliver these amazing experiences where they need to bring you back to the stores. So now you enter the store and you point your app at a mannequin, and it shops the look for you and it gives you recommendations and so on and so forth. To deliver this, you need to run AI models, compute, and so on and so forth. The retailer says, I have no budget to run it inside the store. So what we did is we partnered with a leading telco and we essentially used their network edge. We built out the edge compute and placed it there, and we run all of these models close to the retailer there. So this became a new revenue stream for the telco and us, and at the same time, it solved a very critical problem for the retailers.
[18:39] So when you talk about these new revenue streams, it's generally about three things. You're either trying to make money for the end customer, or you're trying to save money for the end customer, or you're trying to deliver a new experience for the end customer, which helps them increase customer engagement, as well as obviously make money eventually. So this is a very new and evolving partnership, if you will, between telcos and cloud providers. And it spans various industries, like retail, finance, healthcare. You will see that these solutions are fairly vertical-focused, because, like I said before, what is the problem to solve, right? A retailer may be trying to go and deliver these experiences without a CapEx-heavy investment. Or take somebody in healthcare. Or, if you take oil and gas as an example, there are these remote oil and gas fields.
Like, there is no edge computing or cloud nearby. There is really no 5G connectivity. Then how can we bring it there? So that is another set of problems we are solving with them. And a lot of this is great, because the underlying technologies used here are, for example, Kubernetes or service mesh or 5G, and edge just happens to be wherever you need to run the workload. But the end result is, it solves something important for the enterprise. And this is where the telcos can make money, because they're delivering a service, not just 5G, but an actual end solution that solves a business problem. So that is why I say, like, if you start from there and go back, that is where the value is.
[20:13] Richard: That's very valuable for the end customers, being the telcos, and very valuable for enterprise. Let me try to ask a different question; maybe I didn't ask clearly enough. How do we have that money go back to open source itself? You mentioned we use Kubernetes to do this work, and that's great. It's wonderful to use the open source ecosystem to leverage enterprise needs. What I'm curious about is, how do we have revenue streams going from enterprise and telcos back into the open source projects, to make the entire system more sustainable and ultimately more useful to enterprise and telco users?
[20:44] Prajakta: I think, first of all, developers. So instead of only leveraging open source, it is important that we put in developers, because those are actually the most expensive resource, if you think of it, and high-quality developers who will keep the code base clean, who will at the same time be inclusive and accept different trains of thought, and so on and so forth. I think that is the first, biggest investment anybody can make. Google has one of the largest contingents of developers working on Kubernetes. That's how we contribute back to that community. You will also see telcos increasingly doing that. Some of it is also because, if you really want to influence something, you have to put your skin in the game. Otherwise you're going to be a passive consumer of it, which works to a certain extent. But if you really want to move the needle, you have to put skin in the game.
[21:33] And Google in general, I mean, it's nothing new. We've been committed to open source for a really long time. So I would say, first is developers. Like, you really need to put in the people who build the stuff. I think the second thing is the communities themselves. And you can see Kubernetes, for example, has a [21:48 telco SIG], just as an example. Or they have these different SIGs, where essentially they are looking at it from a use case problem, or a vertical problem, or, like, a who-are-we-solving-for lens. And that is where the money will come back. So as an example, let's say you want to go use Kubernetes to run the telco network. It's not perfected for telco networks. It's good enough, but it could be much better. And this is where telcos and us, we are investing time to go participate in these SIGs and then bring in some of the use case ideas and the enhancements that are needed.
[22:21] And then, obviously, put developers and money where we are asking for features. Like, we need to make sure we are also contributing back developers, so we'll build them out. It's true of telcos as well. They are newer to the game, but if you look at enterprises, telcos, us, there's a lot of participation. Like, if you just look at the curve, it's like a hockey stick in terms of participation in open source. I think those are the two biggest ways. And then, one of the biggest things that helps our technology evolve is real-world validation. I think the third thing is sort of feeding back lessons learned into the community. It's not so much about monetization or money. It is about evolving the product. And so saying, I used Kubernetes and here are the three features I'd like to see, whether it's for enterprise or telco. I think all of us need to do more of that, but that really helps the product itself, or the solution.
[23:13] Richard: You know, I have to agree with the developers aspect, because we have a guest coming on next week, Dan Lorenc. He's a Googler as well, and he's working on the supply chain security side. And I think, when you brought up earlier keeping the code clean and all these other things along those lines, it's very important. Because, I mean, let's just say a hundred percent of the Fortune 500 use open source in some kind of capacity. All of those would be vulnerable if we didn't have supply chain security and very trained eyes looking at the code. And that is what Google and Microsoft and all the other FAANG companies are hiring people to look at. But it doesn't end there, and it just needs to continually be invested in, because each day millions of lines of code are being added to various highly used open source projects.
[24:11] Tzury: Prajakta, you mentioned the everything-as-code principle. But I'm thinking, if you work at Google Cloud, the actual cloud, you don't use a cloud. You are actually working in a data center. It's a building. There's hardware, servers, cables, routers, electricity, generators, backup, and all of that. So, first of all, we always need to remember that the fascinating tech that we call public cloud that's available for us, it's thanks to lots of people who are doing really hard work to make it so smooth, automated, and simply, you know, manipulated and managed by a few lines of Terraform code that we just put together, and all of a sudden you have superpower computing that can calculate anything you want.
[25:11] But that led me to think, and if I'm by mistake uncovering, like, a Google X project or whatever secrets, cut me off. So imagine code that will actually run an army of robots that will simply keep building and expanding the cloud, manufacturing more storage and plugging in more compute and taking care of everything, instead of having so much manual labor. I think that the real future cloud would be a fully automated cloud, bottom-up from the infrastructure level itself. What do you think about this?
[25:57] Prajakta: You should patent it before anybody gets to this. So that is an extremely interesting thought. Like, I was just reading this article on Elon Musk, where he's staying in this small little house built by a company called Boxabl. It's not even out as a product yet. And these are these modular units, like 300 to 400 square feet, with a kitchen and bed and everything built in there. And you could actually put any of those together, like, with some modifications, to build a whole house. In general, if you really think of how we've built data centers, that is the way we have scaled as well. Like, there is a recipe of using basically just generic commodity hardware, where the value is actually in the software. And as much as you can simplify and make your hardware consistent and start moving the value into software, the easier it is to build what you are saying.
[26:51] We have a pretty massive footprint of our regions and our edges and so on and so forth, and that is how we have scaled. Like, it is really infrastructure in a box. It's just not visible to people, but that is how we scale. So when we bring up a new data center, there is essentially that same hardware that goes in, and there are these massive software systems that actually bring the value. So to your point, it is true that some of this could be packaged as a product. Like, you could literally have a data center in a box, which you can make more consistent with the use of, essentially, you know, maybe commodity hardware or more generic hardware with quite open interfaces, and put the value in the software, and also modular software where you can plug and play things, depending on, you know, what it is that you're trying to accomplish.
[27:38] I still think, though, Tzury, like, you know, imagine you launch this amazing product. The world is what it is. Like, people will still have their data centers, and people will have these new, amazing data centers, and people will still have their edge locations and their devices. So some of the problem statements, you and I will still need to sit and solve, which is: the world is just heterogeneous by design. And I think we still have to solve for that world. But to your point, what you're talking about is basically consistency at a solution level. Like, instead of just making, you know, some generic design for, say, servers and so on and so forth, you're saying, can we actually package a whole data center or a region in a box? And I think that's a great idea. We do it internally for our own data centers, but Tzury, I think that's your next product right there.
[28:28] Tzury: Thank you. Since open source was mentioned, we should probably talk a little bit more about the massive contribution Google has made, and is making, to almost everything we call cloud native today, almost everything that we use within cloud native. And one of the, I would say, enigmatic sets of terms that was mentioned by Anna and also popped up in your notes is serverless, gRPC, service mesh. So I'm thinking: service mesh, serverless, gRPC. Can you elaborate about this a bit?
[29:09] Prajakta: Let's maybe talk about why they came about. It was sort of an evolution if you think of it. gRPC came from a Google internal technology called Stubby; that was the basis of gRPC. Then we've got Kubernetes, which came out of Borg. Service mesh, we've been doing; we didn't call it service mesh, but that's been the construct used internally. And then obviously serverless. We are very strongly involved in the [29:34 inaudible] of other communities. So let's maybe trace the evolution, because it didn't all happen at once. The first problem Google really solved was building load balancers at scale. When I worked on load balancers, a load balancer was a box, and if you needed more horsepower, you bought a bigger box. That's how it worked. Google said, that's not the way to do load balancing, and tried to do it differently. And it scaled for Search, for Gmail, and so on.
[30:01] So they essentially put these commodity servers at humongous scale across the globe, and then they built software that could balance using those servers. And so they built this massive, globally distributed load balancing. That was the first innovation, if you will. Then there was the problem of how you efficiently do message-to-message communication, and that led to Stubby, which eventually evolved into the more standardized gRPC. Then Kubernetes came out of being able to schedule jobs at scale; that's where it started, and this predated containers and so on. And that evolved into orchestrating compute, which is containers at scale.
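The core shift Prajakta describes, replacing one big hardware box with software that spreads traffic across many commodity servers, can be illustrated with a toy sketch. This is not Google's load balancer, just a minimal, hypothetical round-robin balancer to make the idea concrete:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy software load balancer: spreads requests across a pool of
    commodity servers instead of relying on one big hardware box."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._rotation = cycle(self.servers)  # endless round-robin iterator

    def route(self, request):
        # Pick the next server in rotation; real systems also weigh
        # health, current load, and geographic proximity.
        server = next(self._rotation)
        return server, request

balancer = RoundRobinBalancer(["server-a", "server-b", "server-c"])
assigned = [balancer.route(f"req-{i}")[0] for i in range(6)]
print(assigned)  # each server handles every third request
```

Scaling up then means adding servers to the pool and letting the software rebalance, rather than buying a bigger box.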
[30:42] So Kubernetes was about containers, and containers and Kubernetes definitely brought about the notion of a service, but they didn't necessarily have everything, which is why you now see a very strong correlation between Kubernetes and service mesh. But service mesh itself is just a paradigm, if you think of it. What it's really saying is: let me bring together some compute. It doesn't matter what it is, whether it's VMs, whether it's containers, and so on. Let me bring a set of these together and call it a service. And when I call this a service, all the things that I need to do for networking, whether it is figuring out how to route traffic or how to enforce mTLS, let me not put them in the application. Let me put a separate proxy there and delegate that responsibility to the proxy. The goal is to separate the application from the networking.
[31:36] And then this just gives you a service fabric, and the brains of it is the service mesh control plane. The service mesh came about because you, as an end user, should be able to apply policies at the service level, in a very simple manner, without having to touch your code. So I should be able to say: when service A goes to service B, like I said before, maybe send 99% to version A and 1% to version B; or enforce mTLS between all services; or for every packet that goes out, put it in a dashboard and let me see the end-to-end flow. That is why service mesh was built. It's sort of an overlay on top of Kubernetes.
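The 99%/1% split Prajakta mentions is exactly the kind of policy a mesh proxy applies on the application's behalf. Here is a minimal, hypothetical sketch (not a real mesh or Istio config) of a sidecar-style proxy enforcing a weighted split without the application knowing:

```python
class TrafficSplitProxy:
    """Toy sidecar-style proxy: applies a weighted traffic split between
    service versions without the application being aware of it."""

    def __init__(self, weights):
        # weights: e.g. {"v1": 99, "v2": 1} -- percentages summing to 100.
        # Expand into a 100-slot schedule for deterministic rotation.
        self.schedule = [v for version, w in weights.items()
                         for v in [version] * w]
        self.counter = 0

    def route(self):
        # Deterministic weighted rotation; real meshes typically randomize
        # per request, but the long-run proportions are the same.
        version = self.schedule[self.counter % len(self.schedule)]
        self.counter += 1
        return version

proxy = TrafficSplitProxy({"v1": 99, "v2": 1})
hits = [proxy.route() for _ in range(100)]
print(hits.count("v1"), hits.count("v2"))  # 99 1
```

The point of the paradigm is that changing the split is a one-line policy change at the proxy, with no application code touched.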
[32:15] Now, there is one misconception people have about service mesh: that it's only for containerized workloads. That is actually not true. It includes VMs. It should be able to work for serverless. It should be able to work for bare metal. That is one very key misconception that we need to clear up, because the world is all of this, a big chunk of the world is VM based, and some of it will always stay in VMs. So I think that is how service mesh came about. And then people said, not all people, but a subset of end users: do I really need to see the guts of how this product or this solution works? That's where serverless came about: to abstract all of this stuff away from the end user. As a developer, I want to run my application; why do I need to care about these hundred things? So that is where serverless came about.
[33:06] In a way, people will use one or more of these; it's not like one thing will solve everything. Some people may just use one of these technologies, but that was sort of our evolution path. Now, eventually the hope is that in all of this, serverless as a paradigm is probably the [33:25 mental model] we need for most, which is: if I can just state policies in terms of services, and if I have to deal as little as possible with networking, that is when cloud, and I don't just mean Google Cloud but anybody's cloud, becomes easy for me to use. There is ample opportunity to actually simplify it; we have not reached the simplified cloud yet. We still need to make that happen. So that is where technologies like serverless are extremely important.
[33:54] And also being able to code, like I always spoke about use cases. In terms of serverless, there's a variety of use cases, like processing at the edge. Let's say, Tzury, in your world, the world of security, I want to do something to the traffic and it is very custom. I have multiple ways to do it: I could spin up a serverless function, I could actually leverage the [34:18 inaudible] of a service mesh, or maybe I build out a side process which the load balancer points to. So depending on the level of sophistication the solution needs, or the level of sophistication the developers of the enterprise have, we have to give choice to customers. But again, focusing back on what is the problem we are trying to solve and what is the best tool to go solve it, it means that we will require all of these technologies. It's not like one makes the others obsolete.
[34:49] Tzury: Talking about VMs, et cetera: what is the oldest service in Google that you know of that's still running the same way it was running at the beginning, and was never migrated to any fancy-schmancy new tech, that you can share with us?
[35:06] Prajakta: You know, it is very interesting, because this is how Google runs. Google does run in containers. Google does have a service mesh. Google does do serverless. It's just that we didn't call those things that; we called it Borg instead of Kubernetes. So in a way, the services are running like they used to, because that is where we started from. It was not that we set out to invent Kubernetes or anything like that. There was a genuine problem of scale: how do you schedule something, our own jobs, at scale? So in a way, Tzury, you can think that almost all services, obviously with evolution, are running with similar paradigms from the start until now. In fact, these paradigms were invented to solve the problems we hit as we grew these services. So when you see our load balancers, that is how we run our own load balancing; that's how you scale your requests for Search. Or when you look at containers, it's Borg. So in some ways, every service is running the old way, if you can think of it that way.
[36:07] Tzury: So does [36:08 inaudible] use Kubernetes? What is more in use within Google itself?
[36:12] Prajakta: Yeah, it is basically the same set of people now who are doing both of those things, so it is basically the same tech. I'll give you another example, since we already spoke about Kubernetes. I think the key thing to realize is that we often mix up the paradigm with the implementation. Kubernetes, the implementation that is outside, came from us open sourcing Borg, so it is one and the same thing. With service mesh, you have a lot of products in the industry. We ourselves have two very interesting products: one is Istio, and its managed version, which is called ASM, Anthos Service Mesh; and then there's a slightly different variant called Traffic Director, which pulls in all of our global systems.
[37:00] So in a way, a lot of these systems, at least the tech that we adopted, came from what we do internally, so it is basically one and the same thing. Traffic Director uses the same systems we use for a bunch of internal Google services. The flip side of this I wanted to call out, though, is that we've also been careful to avoid the not-invented-here syndrome. I think you and Anna spoke extensively about the Envoy proxy. When we started off, there was discussion on whether we should go build a proxy like that, and when an evaluation was done by several key people at Google, the conclusion was that Lyft, and Matt Klein, had built out something really interesting, and there was no reason for Google to reinvent it. So we're also being mindful that it's not just about our tech; it's about the best tech that we can use internally as well as in our cloud. That's a good example of us using an external tech. So it is whatever is the most optimal way to do it, and then going and using that.
[38:05] Tzury: So, for example, Traffic Director is built on top of Envoy. Is that something that you can share?
[38:10] Prajakta: Yes. So this mesh basically has two parts to it. There is essentially your data plane, where you run the proxy: next to your application you run the Envoy proxy, and then you have your applications. And you control the proxy through a brain, which is what Traffic Director is. The protocol between the Traffic Director control plane and the Envoy proxy is called xDS, and now there's a new variant of it. We stick to the open source variant of it; we don't make any modifications, we don't fork, which means that we are supporting exactly the same industry-standard protocol between the control plane and the data plane. And yes, our key deployments are all with the Envoy proxy. Traffic Director, which is the brain itself, uses a bunch of Google systems, but since it uses the open protocols to [38:58 inaudible] the data plane, you could swap it out or swap it back in, because the others also use the same protocol.
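The split Prajakta describes, a "brain" pushing versioned config to interchangeable proxies over a shared protocol, is the essence of the xDS pattern. Here is a minimal toy sketch of that shape; it is loosely inspired by xDS, not the real protocol, and all class and route names are hypothetical:

```python
class ControlPlane:
    """Toy 'brain' (stand-in for Traffic Director): holds versioned
    routing config and pushes snapshots to registered proxies."""

    def __init__(self):
        self.version = 0
        self.routes = {}
        self.proxies = []

    def register(self, proxy):
        self.proxies.append(proxy)
        proxy.apply(self.version, dict(self.routes))  # initial sync

    def update_routes(self, routes):
        self.version += 1
        self.routes = routes
        for proxy in self.proxies:  # push the new snapshot to every proxy
            proxy.apply(self.version, dict(routes))

class DataPlaneProxy:
    """Toy data-plane proxy (stand-in for Envoy): forwards traffic
    according to whatever config the control plane last pushed."""

    def __init__(self):
        self.version = -1
        self.routes = {}

    def apply(self, version, routes):
        if version > self.version:  # only accept config newer than ours
            self.version, self.routes = version, routes

    def route(self, service):
        return self.routes.get(service, "no-route")

cp = ControlPlane()
proxy = DataPlaneProxy()
cp.register(proxy)
cp.update_routes({"checkout": "checkout-v2"})
print(proxy.route("checkout"))  # checkout-v2
```

Because the proxy only depends on the `apply` contract, any control plane speaking the same protocol could be swapped in, which is the interoperability point made above.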
[39:05] Tzury: How does it feel, Prajakta, being the greatest on one end and still the underdog on another? Let me explain what I mean. You and I and many people in this world know that Google Cloud, from a tech perspective, [39:23 inaudible]. Google Cloud is by far the most sophisticated, intelligent, amazing, faster than any piece of technology, but still there are others more popular. I'm not even taking into account all the contributions and being the birthplace of all those technologies, such as gRPC, service mesh, Kubernetes, and so on. To know that what you do is by far on top of the game, but from, I would say, the commercial side is still the underdog. How do you go about your day-to-day with this? Is it frustration, excitement? How does this affect your day-to-day?
[40:14] Prajakta: That's an interesting question. So maybe I'll give a slightly different perspective. First, since this is focused on the end customers, I would say most of the world's workloads haven't yet moved to the cloud. That's actually the very sobering reality of things. The opportunity ahead of us is huge; I would start from there. I think the second part of it is that it is not truly about being number one or number two or number three. The question is: are your customers coming and using the cloud? What are the problems you're really solving for them? Why are you the best suited to solve them? There are two or three very simple things in there. Especially if you've been tracking since Thomas came on board, there's a very strong focus on a solutions-based approach to solving things.
[41:06] So Tzury, when you ask me about number one or number two or number three, it is almost immaterial to this discussion when you put the big picture in front of you. The question is: where are the workloads? The workloads are in cloud, the workloads are on premises, and folks are asking us to bring cloud to wherever they are, whether it's in their data centers or at their edges. A lot of our focus is on that, and it is super exciting, because this is sort of the world we've been actually advocating for. You've heard the Anthos announcements, you've heard about our work around edge cloud, you'll hear a lot more about it in the coming months, and you've seen all these announcements with telcos. I think it is that; it is very important.
[41:50] You know, when you are running a marathon, a point-in-time snapshot at the beginning is not very useful; it's who wins at the end, and the market is big enough. I think the right thing to focus on at this point in time is: are we really solving customer problems? And that is why there are some things we are focusing on. One is obviously to go and bring technology from the starting point of the customer. For example, there's a new offering called Anthos for VMs coming out. The reason for that is a lot of workloads are VM based; we cannot ask people to containerize overnight. There's a good realization of that, and we are solving for that. That's an example.
[42:29] The second thing is a very good understanding of who your customers are. Our customers are enterprises and industry verticals, and we have a new set of customers with the telcos. I think the third thing is to take the industry along, which means partnerships. We are spending a lot more time on partnerships; if you see our telco announcements, we are partnering with all kinds of ISVs, and we're partnering with telcos themselves. If you look at enterprise, there is a huge amount of emphasis on that. And I think the last bit is to take all of the cool tech we have, which you brought up as well, but it's not really about technology; it's about the problem we are solving. Which means we have blueprints for various solutions, all our products work well together, there is simplicity in how you can configure them, and there is visibility into what's happening when you deploy them.
[43:17] I think those are sort of top of mind. So some of the things you brought up are not even crossing our minds, because really the problems we have solved are just the tip of the iceberg. I think the opportunity lies ahead, and that is where we are focusing, and it is super exciting. Daily, you wake up and there's a new problem you hear from a customer, or at the same time you hear that the [43:41 inaudible] went down, or you helped them deliver an experience which doubled their customer engagement. Those are the real wins, actually. Those are the things to track, and the rest will follow: that is how we look at it.
[43:55] Tzury: It is indeed a marathon. And we barely even started.
[44:00] Richard: I wish we could talk forever about this. This is like a marathon; listening to this conversation, I'm learning so much. It's really great to hear. I wish I could talk to you about Aurora, which you founded for women and leadership in Google Cloud. I wish we could talk about your work on the DEI committee there. I wish we could talk about the 12 patents you have. Tzury, when she said get a patent, she was serious; she was speaking from her own experience. Unfortunately, we do have a time limit on this podcast. Prajakta, for people who are interested in hearing more about you and your awesome work, where can they find you on the web?
[44:31] Prajakta: I think the easiest way to find me is on LinkedIn; that's where I'm most active. For the rest, a lot of it is through the announcements that you see from Google Cloud in the areas of edge, telco, and some of the past products; those are great products to track as well, and from the cloud networking side. People can always reach out to me on LinkedIn and Twitter. That's the best place to find me.
[44:54] Richard: What's your handle on Twitter?
[44:56] Prajakta: It is Prajakta plus.
[44:59] Richard: Thank you so much, Prajakta. It has been great having you on. Thank you so much; looking forward to having you on future podcasts, if we can manage that, because you have so much to say, and it's just wonderful hearing about what you're doing there at Google. Thanks again.
[45:16] Prajakta: Thank you, Tzury, Justin and Richard. It's been a pleasure being here.
[45:18] Tzury: Thank you Prajakta.
[45:20] Justin: Thank you.