Hello there. Thank you for tuning in for a brand new episode of our SDxCentral Big Interview series. I’m Dan Loosmore, the CEO of SDxCentral and DatacenterDynamics. Delighted to be back in our virtual studio here, joined by Vishal Shukla, the CEO and co-founder at Aviz Networks.
Vishal, I was lucky enough to meet you in person at our compute event in New York City a little bit earlier in the year. Thanks ever so much for joining me in the studio today.
Thank you, Dan. It’s a pleasure to join you here in this virtual event. And it’s always a pleasure to meet you and your team.
Thank you very much. We appreciate the support, especially now that SDxCentral is part of our platform here at InfraX Media.
### Vishal’s Background and Aviz Networks’ Mission [00:00:54]

Regular SDxCentral readers will be familiar with the Aviz proposition. But for those maybe less familiar, Vishal, maybe you can give a little intro, firstly to your own career journey, because I always think that’s an interesting part of this interview series, and secondly to the proposition at Aviz Networks.
Personally, I’ve been a networking engineer from the start of my career. I started as an engineer and then moved through product management, marketing, and overlay sales, covering pretty much all the different verticals of the networking industry, at an ASIC company as well as switch companies. So I have a good understanding of the market, the players, and how it’s evolving.
During that journey, we saw a pattern in how the market is moving forward: how AI and the bandwidth requirement are creating an opportunity, and a need for networking to be done differently. The idea was pretty simple: standardization has to happen at the operations layer, at the data layer, because the infrastructure itself is changing very fast. Port speeds and feeds are changing very fast, ASIC life cycles are getting very short, GPUs and compute are changing very fast, and bandwidth requirements are increasing 10x every couple of years. So you definitely needed a new kind of network, and that’s why we formed Aviz Networks.
### Defining an AI-First Network [00:02:37]

Thanks very much. We’ve had our editorial team do an awful lot of content recently on the network being robust and sexy again in the age of AI and having almost a hockey stick-like demand curve to deal with. Maybe in your own terms, you can talk us through a little bit what an AI-first network actually means and how you’re meeting some of the pressures on the rack-level infrastructure in the age of AI.
Look, before I get into AI, let’s quickly look at what is happening in the industry. It’s not just about networking; it’s actually the things that are influencing networking. So look at it in three different dimensions.
The first is that the definition of compute itself is changing. It used to be CPUs, but now it’s GPUs and DPUs, and everything is moving at light speed. Every three months you see NVIDIA or AMD bringing out something new. So the compute for which you need networking is itself changing at the DNA level. That’s the first thing.
The second thing is that the requirement for bandwidth is also changing at high speed. Think about it this way: five years back, if you went to amazon.com or ebay.com looking at a product to buy, the product description was either text or a very low-resolution picture. Now look at today. You have videos, tons of videos; the people reviewing the product put up their own videos; and then you have AI on top of that generating more videos. So bandwidth has gone up 10x or 100x in the past few years. For that, you need a new network, and this has nothing to do with AI, by the way; it’s just more, higher-quality content.
And the third one is AI itself, post-LLM. I’m not talking about the old AIOps platforms; I’m talking about the technology that came after 2022 and 2023. Every vertical is looking at how to use the efficiencies of the post-LLM era. So these three things are pushing networking to be AI-native and AI-driven, and that’s where you see that hockey-stick curve you talked about. From the business side it runs along two dimensions: creating networks for AI workloads, and putting AI into networks. And that’s what we do at Aviz. All our products are aligned with these two dimensions: creating networks for AI, and bringing AI to existing networks.
### The Hyperscaler Approach to Networking [00:05:35]

If you really look at it, the technologies that become enterprise technologies have always been perfected first by a few key players in the industry, and in networking those are the hyperscalers. The top five to seven deployments deploy those technologies, perfect them, and then they become repeatable enough to be a recipe for the enterprise. Vendor-agnostic, multi-vendor deployment is at that stage right now: it has proven its worth, not only in how it gets deployed technically but also from the business point of view.
So what did the hyperscalers really have to get right to make that happen?
1. **Embrace technology at a very fast pace.** If, say, port speeds are going from 10 to 25 gig, to 100 gig, to 400 and 800, they need to embrace that very fast, ripping and replacing without worrying about impacting usage or bringing down the cloud or the network.
2. **Don’t spike the budget.** You cannot just say, “I’m going from 100 gig to 400, now I need 4x the CAPEX budget.” That’s a no-go.
3. **Don’t disrupt the application layer.** You cannot tell the application developer that now you have to do it differently because I changed the infrastructure underneath.
So these were the three challenges the hyperscalers took care of. Now, how do you do that? You create an operating layer which, on the southbound side, understands the variability and the pace of change of the infrastructure; it understands different vendors and different network operating systems. On the northbound side, the API always remains the same. That’s what the hyperscalers have done, and that’s where Aviz has taken its inspiration. Essentially, we have replicated what the hyperscalers have done, but in a way that is meant for the enterprise, meant to be consumed as a solution, rather than with the 100-person team that Microsoft, Google, and Facebook put on it. There’s a good reason why they do it that way, but enterprises don’t want a team that big. That’s where Aviz comes in: it gives them the same experience and the same business outcome without spending that much.
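The operating-layer idea described here — a fixed northbound API over vendor-specific southbound adapters — can be sketched in a few lines. This is a minimal illustration, not Aviz’s actual architecture; all class and method names, and the CLI strings, are hypothetical.

```python
# Sketch of the "operating layer": the northbound API stays fixed while
# southbound drivers absorb vendor/NOS differences. Names are illustrative.
from abc import ABC, abstractmethod

class SouthboundDriver(ABC):
    """Adapter for one vendor/NOS: translates a common model to device config."""
    @abstractmethod
    def set_port_speed(self, port: str, gbps: int) -> str: ...

class SonicDriver(SouthboundDriver):
    def set_port_speed(self, port: str, gbps: int) -> str:
        # SONiC-style config command (illustrative, speed in Mbps)
        return f"config interface speed {port} {gbps * 1000}"

class LegacyCliDriver(SouthboundDriver):
    def set_port_speed(self, port: str, gbps: int) -> str:
        # Hypothetical legacy vendor CLI
        return f"interface {port}\n speed {gbps}000"

class NetworkOperatingLayer:
    """Northbound API: callers never change when the hardware underneath does."""
    def __init__(self, drivers: dict[str, SouthboundDriver]):
        self.drivers = drivers

    def set_port_speed(self, device: str, port: str, gbps: int) -> str:
        return self.drivers[device].set_port_speed(port, gbps)

layer = NetworkOperatingLayer({
    "leaf1": SonicDriver(),
    "core1": LegacyCliDriver(),
})
print(layer.set_port_speed("leaf1", "Ethernet0", 400))
# → config interface speed Ethernet0 400000
```

Swapping a 100-gig box for a 400-gig box from a different vendor then means registering a different driver; nothing north of the layer changes.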
### The Role of Community SONiC [00:11:13]

I’ll talk about this in two dimensions: first, what we have learned from history, and second, what is happening in the industry right now. So let’s look at the history. We had Sun Microsystems with Solaris, we had Windows, we had IBM mainframes: vertically integrated stacks, all applications running on their own systems. That was 25, 30 years back. Then Linux came in and standardized the operating system. At that point, nobody really understood open source; Red Hat took charge and turned it into a B2B operating system. And then the application economy was born. Fifteen years back, it was all about applications that run on Linux. After the application economy, today we are in the data economy, the data and AI economy. So you see how the curve went: it standardized at the operating system layer, then the application economy was born, and then the data economy was born.
That is exactly what is happening in networking right now. We had a lot of good vertical solutions. You have Cisco solutions, Juniper solutions, Arista solutions; they work perfectly, no problem. But when you start talking about standardization, and about consuming technology at light speed while compute is changing, bandwidth is increasing, and AI is all over the place, you have to have standardization somewhere. Taking the cue from history, that standardization has to happen at the operating system level. The hyperscalers saw this eight years back and started proving it out, and for the past 8 to 10 years this standardization, exactly like Linux, has been proven by the hyperscalers in production. I’m sure traffic is hitting SONiC somewhere in some of the clouds, because these clouds are running on SONiC.
So that standardization has given birth to companies like ours, who are now making sure it is sticky enough and gives an experience exactly the same as those vertical stacks. We are filling that gap, not quite like Red Hat Enterprise Linux, but giving the same experience as Red Hat while keeping it community. Because this is not 2002, this is 2024: people understand open source far better than they used to. We want to take it to the same level, to the application economy and then the data and AI economy, and that’s where AI in networks comes into the picture. So networking is following the same path compute followed, with great support from the hyperscalers, who have proven it, and with tier-two enterprises, who are our customers, deploying it and proving that it is ready.
### Enhancing Network Observability and Management [00:15:36]

Thank you. This is a very important question in today’s age. As enterprises embrace change in the infrastructure at light speed, the next disruption is, of course, in the way they monitor it, observe it, and manage it.
And we have a well-thought-through approach here. Today, the way customers do observability, monitoring, and management is a click-based operation, so we call it “ClickOps.” You click on one dashboard, then go to a second dashboard, click again, then a third, and hopefully you get the information after flipping through a few dashboards. Sometimes you won’t get it at all. And how many clicks can you do in a second? Maybe two or three, and then you need time to actually read the dashboard. That kind of thing used to work when you had vertically integrated, very siloed operations. But in the age of AI, where multiple vendors are putting their best technology out, it is not practical.
So the way we are changing it is by putting an LLM in front of it. You ask a question, and the LLM and the AI agents behind it break it down into, say, ten “ClickOps” equivalents, get the data from the equivalent of those ten dashboards, collate it, summarize it, and you get your answer.
So observability and monitoring are becoming more conversational through AI. You don’t have to go digging through dashboards; you ask the question, the dashboard comes to you, and you can create dashboards on the fly, generating a pie chart, bar chart, line chart, or honeycomb chart. LLMs are good at generating those artifacts.
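The fan-out pattern described here — decompose a question into sub-queries, fetch from several sources, collate, summarize — can be sketched without a real LLM. This is a hypothetical illustration only: the data sources are stand-ins, and the steps an LLM would perform (decomposition and summarization) are stubbed out.

```python
# Sketch of the agentic query fan-out replacing "ClickOps": one question is
# broken into several dashboard-equivalent queries whose results are collated.

DASHBOARDS = {  # stand-ins for the per-dashboard data sources
    "interfaces": lambda: {"errors": 3, "flaps": 1},
    "bgp": lambda: {"sessions_down": 0},
    "capacity": lambda: {"util_pct": 72},
}

def decompose(question: str) -> list[str]:
    # An LLM would map the question to the relevant sub-queries;
    # stubbed here to fan out across every source.
    return list(DASHBOARDS)

def answer(question: str) -> str:
    # Fetch each "dashboard" result, then collate into one response.
    results = {name: DASHBOARDS[name]() for name in decompose(question)}
    # An LLM would summarize in natural language; here we just join.
    return "; ".join(f"{k}: {v}" for k, v in sorted(results.items()))

print(answer("why is the network slow?"))
```

The user issues one conversational query; the clicking and flipping between dashboards happens inside `decompose` and the fetch loop instead.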
### The Future: Standardization and AI in the Stack [00:18:26]

Networking is a very conservative business. It’s very sticky: once a network gets deployed, nobody wants to touch it, for good reasons. It’s not like the security industry, where every year there is something new coming in and customers adopt it.
So I would say the base infrastructure layer, the technology, will remain the same, though it will keep refreshing with better bandwidth, better AI protocols, and so on. But AI will move up the stack. Standardization at the operating system layer will happen for sure, and that’s why we are a SONiC-first company. We do support other operating systems, but we believe the future is the standardization of the operating system.
And the next layer where innovation is going to happen is the standardization of data. Irrespective of which vendor you are using, what kind of network endpoint you have, or what kind of compute endpoint you have, the network will be standardized in the form of normalized data, so that you can leverage AI to manage, monitor, and observe your network.
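The idea of normalized data across vendors can be sketched as a per-vendor mapping into one common schema. All field names and vendor labels below are hypothetical, not a real vendor format or an Aviz schema; the point is only that a single analytics or AI layer consumes one shape regardless of source.

```python
# Sketch of data normalization: each vendor reports interface counters in its
# own shape; a per-vendor mapper converts them to one common schema so a
# single AI/analytics layer can consume all of them. Field names are made up.

def normalize(vendor: str, record: dict) -> dict:
    mappers = {
        "vendor_a": lambda r: {"port": r["ifName"], "rx_bytes": r["inOctets"]},
        "vendor_b": lambda r: {"port": r["port_id"], "rx_bytes": r["rx"]},
    }
    return mappers[vendor](record)

rows = [
    normalize("vendor_a", {"ifName": "Ethernet0", "inOctets": 1200}),
    normalize("vendor_b", {"port_id": "xe-0/0/1", "rx": 900}),
]
print(rows)
# → [{'port': 'Ethernet0', 'rx_bytes': 1200}, {'port': 'xe-0/0/1', 'rx_bytes': 900}]
```

Adding a new vendor or endpoint type means adding one mapper; everything downstream of `normalize` — dashboards, agents, models — stays unchanged.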
So right now, standardization at the operating system level is underway; the future is standardization of data and of AI. The product set we have at Aviz Networks, like I mentioned, spans two dimensions: creating networks for AI, which is standardization at the operating system level, and AI for networks, which is standardization of data and AI across your existing networks. We are positioned to be ready for that disruption as it happens over the next few years: a single AI spanning diverse network endpoints and the diverse vendors who are our partners, giving customers one AI that can take care of their network. That’s where we are headed, and that’s what we are going to provide.