
ONE Data Lake & AWS S3 – Enhancing Data Management and Analytics – Part 2

In February, we introduced the ONE Data Lake as part of our ONES 2.1 release, highlighting its integration capabilities with Splunk and AWS. In this blog post, we’ll delve into how the Data Lake integrates specifically with AWS S3.

A data lake functions as a centralized repository designed to store vast amounts of structured, semi-structured, and unstructured data on a large scale. These repositories are typically constructed using scalable, distributed, cloud-based storage systems such as Amazon S3, Azure Data Lake Storage, or Google Cloud Storage. A key advantage of a data lake is its ability to manage large volumes of data from various sources, providing a unified storage solution that facilitates data exploration, analytics, and informed decision-making.

Aviz ONE-Data Lake acts as a platform that enables the migration of on-premises network data to cloud storage. It includes metrics that capture operational data across the network’s control plane, data plane, system, platform, and traffic. As an enhanced version of the Aviz Open Networking Enterprise Suite (ONES), ONE-Data Lake stores the metrics previously used in ONES in the cloud.

Why AWS S3?

Amazon S3 (Simple Storage Service) is often used as a core component of a data lake architecture, where it stores structured, semi-structured, and unstructured data. This enables comprehensive data analytics and exploration across diverse data sources. S3 is widely used for several reasons:
S3 integrates seamlessly with a wide range of AWS services and third-party tools, significantly enhancing data processing, analytics, and machine learning workflows. This seamless integration allows for efficient data ingestion, real-time analytics, advanced data processing, and robust machine learning model training and deployment, creating a powerful and cohesive ecosystem for comprehensive data management and analysis.
S3 is engineered for eleven nines (99.999999999%) of data durability, ensuring that your data is exceptionally safe and consistently accessible. This level of durability is achieved through advanced data replication across multiple geographically dispersed locations, providing robust protection against data loss and guaranteeing high availability.
S3 offers comprehensive security and compliance capabilities, providing a robust framework for safeguarding data and ensuring regulatory adherence. This includes advanced data encryption, both at rest and in transit, ensuring that sensitive information remains protected throughout its lifecycle. Additionally, S3 provides granular access management tools, such as AWS Identity and Access Management (IAM), bucket policies, and access control lists (ACLs), allowing fine-tuned control over who can access and modify data. These features, combined with compliance certifications for various industry standards (such as GDPR, HIPAA, and SOC), make S3 a secure and reliable choice for data storage in highly regulated environments.
S3’s capability to handle virtually unlimited amounts of data makes it an unparalleled choice for building and maintaining expansive data lakes that require storing massive volumes of information. This scalability empowers organizations to seamlessly scale their storage needs without upfront investments in infrastructure, accommodating growing data demands effortlessly. This capability is crucial for enterprises seeking to centralize and manage diverse data types, enabling advanced analytics, machine learning, and other data-driven initiatives with agility and reliability.
S3 provides flexible pricing models and a variety of storage classes to optimize costs based on data access patterns. Users can take advantage of storage classes like S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 One Zone-IA, and S3 Glacier to manage expenses efficiently.
S3 offers robust data management capabilities, including versioning, lifecycle policies, and replication, which streamline data governance and archival processes. These features ensure data integrity, compliance, and resilience across various use cases, from regulatory compliance to disaster recovery planning. This capability empowers businesses to unlock the full potential of their data assets, supporting diverse applications such as predictive analytics, business intelligence, and real-time reporting with ease and efficiency.
S3’s robust features, including cross-region replication and lifecycle policies, establish it as an exceptional solution for disaster recovery strategies, ensuring data redundancy and resilience. Furthermore, S3’s lifecycle policies enable automated management of data throughout its lifecycle, facilitating seamless transitions between storage tiers and automated deletion of outdated or unnecessary data. Together, these features make S3 a reliable backup solution that enhances data durability and availability, providing organizations with peace of mind knowing their critical data is securely stored and accessible even in unforeseen circumstances.
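To make the lifecycle and storage-class points above concrete, here is a minimal Boto3 sketch of an S3 lifecycle policy; the bucket name, prefix, and tiering schedule are hypothetical and would be tuned to your own retention needs:

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Hypothetical bucket and prefix, for illustration only
s3.put_bucket_lifecycle_configuration(
    Bucket="ones-datalake-example",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-and-expire-metrics",
            "Filter": {"Prefix": "metrics/"},
            "Status": "Enabled",
            # Move aging metrics to cheaper storage classes automatically
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # Expire metrics after a year
            "Expiration": {"Days": 365},
        }]
    },
)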

Integrating S3 with ONES:

To integrate the S3 service with ONES, follow these steps:
By accurately providing these details, you can effectively configure and integrate the S3 service with ONES, facilitating smooth metric collection and analysis.
Figure 1: Cloud Instance configuration page in ONES
Figure 2: Instance created and ready for data streaming
The cloud instance created within ONES offers several management options to enhance user experience and sustainability. Users can update the integration settings, pause and resume metric uploads to the cloud, and delete the created integration when needed. These features make it easy for users to maintain and manage their cloud endpoint integrations effectively.
Figure 3: Updating the integration details
Figure 4: Option to pause and resume the metric streaming to cloud
Figure 5: Option to delete the integration created
The end user has the flexibility to select which metrics from their network monitored by ONES should be uploaded to the designated cloud service. The ONES 2.1 release supports various metrics, including Traffic Statistics, ASIC Capacity, Device Health, and Inventory. Administrators can select or deselect metrics from the available list within these categories according to their preferences.
Figure 6: Multiple options available for metric update on cloud
The metric update is not limited to any particular hardware or network operating system (NOS). ONE-Data Lake’s data collection capability extends across various network operating systems, including Cisco NX-OS, Arista EOS, SONiC, and non-SONiC. Data streaming occurs via the gNMI process on SONiC-supported devices and through SNMP on operating systems from other vendors.
Figure 7: ONES inventory showing multiple vendor devices streaming

S3 Analytical capabilities:

Analyzing data stored in an S3 bucket can be accomplished through various methods, each leveraging different AWS services and tools. Here are some key methods:

Amazon Athena:

Description: A serverless interactive query service that allows you to run SQL queries directly against data stored in S3.

Use Case: Ad-hoc querying, data exploration, and reporting.

Example: Querying log files, CSVs, JSON, or Parquet files stored in S3 without setting up a database.

Figure 8 - Data stored in S3 bucket in JSON format
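As a hedged illustration of the Athena workflow, the sketch below submits a SQL query against such JSON data from Python using Boto3; the database, table, and result bucket names are hypothetical:

import boto3

athena = boto3.client("athena")

# Database, table, and result bucket are placeholders
resp = athena.start_query_execution(
    QueryString=(
        "SELECT device, AVG(cpu_util) AS avg_cpu "
        "FROM ones_metrics GROUP BY device"
    ),
    QueryExecutionContext={"Database": "ones_datalake"},
    ResultConfiguration={"OutputLocation": "s3://ones-datalake-example/athena-results/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution() until it completes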

AWS Glue:

Description: A managed ETL (Extract, Transform, Load) service that helps prepare and transform data for analytics.

Use Case: Data preparation, cleaning, and transformation.

Example: Cleaning raw data stored in S3 and transforming it into a more structured format for analysis.

Figure 9 - Pie Chart in S3 representing the data from different NOS vendors

Amazon SageMaker:

Description: A fully managed service for building, training, and deploying machine learning models.

Use Case: Machine learning and predictive analytics.

Example: Training machine learning models using large datasets stored in S3 and deploying them for inference.

Third-Party Tools:

Description: Numerous third-party tools integrate with S3 to provide additional analytical capabilities.

Use Case: Specialized data analysis, data science, and machine learning.

Example: Using tools like Databricks, Snowflake, or Domo to analyze and visualize data stored in S3.

Custom Applications:

Description: Developing custom applications or scripts that use AWS SDKs to interact with S3.

Use Case: Tailored data processing and analysis.

Example: Writing Python scripts using the Boto3 library to process data in S3 and generate reports.
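For instance, a minimal report script along those lines might look like the following; the bucket name and key layout are hypothetical, and a real script would aggregate whatever fields your reports need:

import json
import boto3

s3 = boto3.client("s3")

records = 0
paginator = s3.get_paginator("list_objects_v2")
# Walk every object under a hypothetical metrics prefix
for page in paginator.paginate(Bucket="ones-datalake-example", Prefix="metrics/inventory/"):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="ones-datalake-example", Key=obj["Key"])["Body"].read()
        data = json.loads(body)  # each object holds one JSON metric record
        records += 1
print(f"Processed {records} inventory records")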

Conclusion:

Aviz ONE-Data Lake serves as the cloud-native iteration of ONES, facilitating the storage of network data in cloud repositories. It operates agnostically across various cloud platforms and facilitates data streaming from major network device manufacturers like Dell, Mellanox, Arista, and Cisco. Network administrators retain flexibility to define which metrics are transferred to the cloud endpoint, ensuring customized control over the data storage process.

Unlock the ONE-Data Lake experience—schedule a demo on your preferred date, and let us show you how it’s done!


Aviz Welcomes Nick Lippis to Our Advisory Board

AI is revolutionizing every sector, and at Aviz, we’re pioneering the transformation of enterprise networking with AI-driven solutions. We’re thrilled to announce that Nick Lippis, co-founder and co-chair of ONUG and producer of The AI Networking Summit, has joined our Advisory Board, bringing his vast expertise to further our mission of ‘AI for Networks. Networks for AI.’

Transforming Enterprise Networking with AI

Aviz Networks is leading the way in AI-driven networking solutions, focusing on Choices, Control, and Cost Savings. Our vendor-agnostic AI Networking stack allows customers to choose their hardware and control their network infrastructure, delivering substantial cost savings and a strong ROI.
Nick Lippis’s addition to our Advisory Board will strengthen our strategic direction and our commitment to leading the AI transformation in networking.

ONE Data Lake & Splunk: Revolutionizing Network Data Analytics – Part 1

In February, we introduced the ONE Data Lake as part of our ONES 2.1 release, highlighting its integration capabilities with Splunk and AWS. In this blog post, we’ll delve into how the Data Lake integrates specifically with Splunk.

A data lake serves as a centralized storage facility capable of accommodating large quantities of structured, semi-structured, and unstructured data on a significant scale. These are typically built using scalable, distributed, cloud-based storage systems, such as Amazon S3, Azure Data Lake Storage, or Google Cloud Storage.

A pivotal benefit of a data lake lies in its capacity to handle substantial amounts of data from diverse origins, offering a cohesive storage solution conducive to data exploration, analytics, and informed decision-making processes.

Aviz ONE-Data Lake functions as a platform facilitating the migration of on-premises network data to cloud storage. It encompasses metrics that capture operational data across the network’s control plane, data plane, system, platform, and traffic. Serving as an upgraded iteration of Aviz Open Networking Enterprise Suite (ONES), ONE-Data Lake stores the metrics previously utilized in ONES onto the cloud.

Why Splunk?

Splunk is highly significant for organizations across diverse industries for multiple reasons:
Splunk empowers organizations to obtain immediate insights from their operational data, facilitating the monitoring of system and application health and performance. This capability aids in promptly identifying and addressing issues, thereby reducing downtime and enhancing operational efficiency.
Splunk is extensively utilized for Security Information and Event Management (SIEM) objectives, aiding organizations in overseeing their IT environments for security threats and irregularities. By correlating data from diverse sources, it can efficiently identify and address security incidents, thereby bolstering the overall cybersecurity stance.
Splunk supports regulatory adherence and oversight by empowering organizations to gather, analyze, and report on data pertinent to regulatory requirements and industry standards. This capability is especially critical for sectors like finance, healthcare, and government, where stringent compliance mandates are in place.
Splunk aids in IT operations and DevOps practices by providing visibility into IT infrastructure, application performance, and deployment processes. This allows organizations to identify areas for optimization, streamline operations, and accelerate the development and delivery of software applications.
Splunk equips organizations with machine learning and predictive analytics functionalities, empowering them to uncover patterns, detect anomalies, and forecast outcomes from their data. This supports proactive resolution of issues, capacity planning, and efforts in risk management.

Splunk can be utilized to assess customer interactions and feedback from various channels, enabling organizations to delve deeper into customer requirements and preferences. This information can then be utilized to tailor offerings, elevate customer satisfaction levels, and nurture brand loyalty.

To sum up, Splunk is an essential tool for organizations to leverage data efficiently, promoting operational excellence, strengthening security measures, ensuring compliance, and achieving business objectives.

Integrating Splunk with ONES:

To integrate the Splunk service with ONES, follow these steps:
By ensuring these details are accurately provided, you can successfully configure and integrate the Splunk service with ONES, enabling seamless metric collection and analysis.
Figure 1: Cloud Instance configuration page in ONES
Figure 2: Instance created and ready for data streaming
The cloud instance created within ONES offers several management options to enhance user experience and sustainability. Users can update the integration settings, pause and resume metric uploads to the cloud, and delete the created integration when needed. These features make it easy for users to maintain and manage their cloud endpoint integrations effectively.
Figure 3: Updating the integration details
Figure 4: Option to pause and resume the metric streaming to cloud
Figure 5: Option to delete the integration created
The end user has the flexibility to select which metrics from their network monitored by ONES should be uploaded to the designated cloud service. The ONES 2.1 release supports various metrics, including Traffic Statistics, ASIC Capacity, Device Health, and Inventory. Administrators can select or deselect metrics from the available list within these categories according to their preferences.
Figure 6: Multiple options available for metric update on cloud
The metric update is not limited to any particular hardware or network operating system (NOS). ONE-Data Lake’s data collection capability extends across various network operating systems, including Cisco NX-OS, Arista EOS, SONiC, and non-SONiC. Data streaming occurs via the gNMI process on SONiC-supported devices and through SNMP on operating systems from other vendors.
Figure 7: ONES inventory showing multiple vendor devices streaming

Splunk Analytical capabilities:

Events within Splunk generally contain timestamped data alongside related metadata and content. Each event undergoes parsing and indexing separately, facilitating users to efficiently search, analyze, and visualize data. Splunk automatically extracts fields from events during indexing, streamlining filtering and correlation based on specific criteria.
Figure 8 - Inventory details from NX-OS are captured as events in Splunk
Splunk also visually depicts data using charts or graphs, helping users comprehend patterns, trends, and relationships within the data more readily than analyzing raw data alone. These graphical representations encompass diverse types such as bar charts, line charts, pie charts, scatter plots, and others, each tailored to specific data types and analytical objectives.
Figure 9 - Pie Chart in Splunk representing the data from different NOS vendors
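ONES manages the streaming itself, but for readers curious how an event typically lands in Splunk, here is a hedged sketch using Splunk’s HTTP Event Collector (HEC); the host, token, and event shape are assumptions for illustration, not a description of ONES internals:

import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

event = {
    "sourcetype": "ones:metrics",
    "event": {"device": "leaf-01", "metric": "cpu_util", "value": 37.5},
}
resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=event,
    timeout=10,
)
resp.raise_for_status()  # a non-2xx status means Splunk rejected the event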

Conclusion:

Aviz ONE-Data Lake functions as the cloud-based version of ONES, enabling the storage of network data in cloud repositories. It operates independently of any particular cloud platform and supports data streaming from leading network device manufacturers such as Dell, Mellanox, Arista, and Cisco. Network administrators have the freedom to specify the metrics they want to transfer to the cloud endpoint, granting customized control over the data storage procedure.

Schedule your demo today because with ONE Data Lake integrated with Splunk, you’re not just managing data — you’re revolutionizing network analytics for unparalleled insights and efficiency.

From Hype to Reality: Navigating the Challenges of AI in Network Telemetry

AI is riding the crest of a technological wave, crowned the “Peak of Inflated Expectations” by Gartner’s 2023 Hype Cycle. Platforms like ChatGPT have become more than just buzzwords; they’re blazing a trail into a new era of technological possibilities. This isn’t a fleeting fad; it’s a fuel injection for innovation, poised to transform the landscape across industries including the Networking domain.

Think beyond chatbots and clever tweets. AI’s true potential lies in its ability to learn, adapt, and create. It can craft personalized experiences, generate realistic synthetic data, and even write code, all while pushing the boundaries of what we thought possible. This isn’t just about hype; it’s about harnessing the power of creativity to revolutionize the way we live, work, and play. So buckle up, because the AI revolution is just getting started. In this blog, let me share some insights into how AI can transform network telemetry and enhance the experience.

Gartner Hype Cycle - AI

Understanding Network Telemetry and applying AI

What is Network Telemetry?

Network telemetry is the process of collecting, inspecting, normalizing, and interpreting data to generate information that helps the end user visualize the network state and make decisions.

Beyond simply collecting data, network telemetry transforms it into actionable intelligence. Through meticulous analysis and normalization, it illuminates the network’s current state, enabling informed decisions and proactive interventions. Think of it as the network’s nervous system, providing a constant pulse of information for precise navigation.

Harnessing the Power of AI for Network Telemetry

The convergence of AI and network telemetry represents a significant evolutionary leap in network management. By integrating AI’s analytical prowess with established telemetry infrastructure, we can unlock transformative benefits that enhance network security, optimize resource allocation, and streamline troubleshooting.

Elevating Network Intelligence:

Beyond Hype, Embracing a Paradigm Shift:

The integration of AI into network telemetry isn’t just a technological trend; it’s a strategic imperative. By embracing this transformative technology, organizations can build a future-proof network infrastructure characterized by enhanced security, proactive efficiency, and informed decision-making. This is not a revolution, but an evolution, a seamless integration of AI’s capabilities to empower existing systems and propel network management to new heights.

Reframing the Challenges: Building Robust AI for Network Telemetry

While the promises of AI in network telemetry are vast, navigating its implementation requires careful consideration of several key challenges:

Data-Driven Foundations:

Trust and Transparency:

AI TRISM: Transforming Network Telemetry with Trust, Reliability, and Safety

Applying the AI TRISM framework to network telemetry unlocks a new era of trust, reliability, and safety in our connected world. Trust is bolstered by transparent models that explain how anomalies are detected and prioritized, allowing network administrators to understand and make informed decisions. Reliability soars through AI-powered anomaly detection, automatically pinpointing issues before they snowball into outages, while synthetic data generation ensures robust training even with limited real-world telemetry. Safety takes center stage as AI models learn to differentiate between harmless fluctuations and genuine threats, protecting critical infrastructure from cyberattacks and malicious actors.

Imagine a network humming with the silent symphony of AI. Anomalous blips in traffic flow are instantly flagged, not by rigid thresholds, but by AI models continuously learning the network’s healthy rhythm. Security threats are swiftly identified and neutralized, not through brute force, but by AI’s uncanny ability to discern friend from foe. This is the future of network telemetry, powered by AI TRISM – a future where trust, reliability, and safety weave a protective web around our increasingly interconnected lives.

We, at Aviz, are harnessing the power of AI to make significant improvements in the networking landscape. Expect even more advancements to come from us soon.

Contact us today because with our cutting-edge AI solutions, you’re not just navigating the hype — you’re transforming your network telemetry into a powerhouse of innovation, efficiency, and security.

Pioneering Networking in the AI Era: Collaboration with Aviz and Marvell

At Aviz Networks, we’re dedicated to revolutionizing AI networking solutions by introducing AI for Networks and Networks for AI. Our strategic collaboration with Marvell, a global leader in semiconductor innovation, marks a significant step towards driving the adoption of SONiC. This partnership leverages Marvell’s cutting-edge switch technology and our Open Network Enterprise Suite (ONES) to deliver advanced AI Fabric capabilities and SONiC solutions for edge deployments. By integrating these technologies, we address the increasing demands of AI-driven applications, catering to both core and edge network environments with tailored solutions.

SONiC for AI Fabric and Edge: Distinct Use Cases

Our collaboration with Marvell focuses on two distinct use cases: SONiC for AI fabric and for edge deployments. Each serves a unique purpose in the evolving landscape of AI-driven networking.

SONiC for AI Fabrics

The combination of Marvell’s switch technology and Aviz Networks’ ONES platform provides a robust solution for core network environments. This integrated approach delivers:

SONiC for Edge Deployments

The PENS (PoE Edge Networks with SONiC) workgroup, formed in collaboration with Aviz Networks, the Linux Foundation, Marvell, and others, adapts SONiC for edge LANs. This initiative integrates specialized protocols essential for enhancing connectivity and network efficiency at the enterprise edge. Key benefits include:

Aviz Networks and Marvell: A Best-of-Breed Disaggregated Offering

The collaboration between Marvell and Aviz Networks provides cloud and enterprise data centers with a SONiC-based solution using Marvell switch silicon. Marvell delivers switch silicon with SAI (Switch Abstraction Interface) to integrate seamlessly with open-source SONiC, while we at Aviz Networks offer disaggregated support and services alongside cloud-native applications. Aviz Networks’ ONES (Open Networking Enterprise Suite) stack provides orchestration, telemetry, assurance, and support, ensuring that key network metrics meet SLAs and offering insights into real-time network health.

Successful Deployment and Future Prospects

Together, Marvell and Aviz Networks have successfully deployed hundreds of 400G fabric switches with SONiC running on Marvell silicon in enterprise data centers. These leaf and spine switches, equipped with 100G and 400G ports, deliver industry-leading innovation, a choice of optics, and broad skill-set availability, resulting in significant OPEX and CAPEX savings. We anticipate extending this successful collaboration to many more enterprise and cloud data center operators as the benefits of disaggregated networking become more widely understood and embraced.

Voices from the Partnership

“We are excited to collaborate with Aviz Networks to enhance AI-driven networking solutions. Our switch technology, combined with Aviz Networks' expertise in AI fabric management and SONiC for edge deployments, will provide customers with a powerful, scalable, and reliable network infrastructure.”

“Our partnership with Marvell is a testament to our dedication to advancing AI-driven networking solutions. By combining our innovative ONES platform with Marvell's leading switch technology, we are poised to deliver unparalleled performance and reliability for our customers, both at the core and the edge.”


Evaluating Aviz Service Node capabilities using Spirent Landslide

5G PlugFest Summary

Introduction:

Aviz Service Nodes (ASN) enhance network visibility; they operate on general-purpose hardware and offer significant cost savings and optimal performance. ASN provides essential subscriber intelligence through metadata extraction and correlation for 4G-LTE, 5G-NSA, and 5G-SA networks to achieve comprehensive network analytics.

To validate the accuracy and performance, along with the KPIs, of the ASN product, we utilized Spirent Landslide to emulate the 5G and 5G-NSA components.

Landslide is a versatile platform designed to test and emulate 5G and O-RAN mobile networks on both traditional and cloud-native infrastructures. It simulates real-world traffic from millions of mobile subscribers to extensively test the 5G core in standalone and non-standalone configurations. 

Master Topology

Features validated by Spirent

Metadata Extraction and Validation:

ASN works on different 5G-SA, 5G-NSA, and 4G-LTE interfaces for metadata extraction and achieves correlation between the user and control planes. It involves identifying, retrieving, and verifying relevant metadata from various interfaces of telecom networks to ensure its accuracy and consistency. Effective metadata extraction and correlation by ASN are crucial for ensuring better data management, analysis, and decision-making.

"The comprehensive metadata extraction and correlation capabilities of ASN are designed to provide unparalleled network insights, ensuring our clients can make informed decisions quickly and accurately."

Handling Handovers:

In 5G networks, handover entails shifting an active call or data session from one cell or base station to another. The ASN manages the smooth extraction and correlation of control and user packets throughout this handover process. The ASN possesses the capability to manage all potential handovers in both 5G and 5G-NSA scenarios.

Application identification:

The primary purposes of identifying applications in telco traffic are traffic management, quality of service (QoS), security, billing, and analytics. ASN’s deep packet inspection identifies applications using the Server Name Indication (SNI), application IP address, destination port and application behavior.
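As a toy sketch of the idea (real DPI engines weigh SNI, IP addresses, ports, and behavioral signatures together; the mappings below are hypothetical):

# Hypothetical lookup tables for illustration
SNI_APPS = {"netflix.com": "Netflix", "youtube.com": "YouTube"}
PORT_APPS = {53: "DNS", 443: "HTTPS", 5060: "SIP"}

def identify_app(sni, dst_port):
    # Prefer the SNI when present; fall back to the destination port
    if sni:
        for suffix, app in SNI_APPS.items():
            if sni.endswith(suffix):
                return app
    return PORT_APPS.get(dst_port, "Unknown")

print(identify_app("www.youtube.com", 443))  # -> YouTube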

Performance and scalability:

The ASN possesses the capability to manage a maximum of three million subscribers in both 5G and 5G-NSA networks. Additionally, it extracts and correlates control packets for user plane traffic of up to 150 Gbps. The ASN’s validation includes handling mixed packet sizes equivalent to real-world traffic scenarios.

Landslide's ability to emulate complex 5G network environments has been instrumental in validating the high performance and scalability of Aviz Service Nodes, ensuring the solutions meet the rigorous demands of modern telecom networks.

For the detailed test report and the performance benchmarking, click here.


Unveiling the Backbone: Exploring ONES Network Latency Measurement Backend

Introduction

In today’s datacenter landscape, network latency holds significant importance due to its impact on the overall performance, efficiency, and reliability of datacenter operations. Low network latency is crucial for ensuring optimal performance of applications hosted in data centers: users expect fast response times when accessing services and applications, and latency directly influences the perceived responsiveness of these systems. Organizations face various challenges in measuring and optimizing network latency, as this task involves complex considerations related to infrastructure, applications, and user experience. Common challenges include the complexity of the network infrastructure, dynamic workloads, and the continuous monitoring that feeds the analysis.
This blog introduces you to the backend of the ONES network latency measurement component, the core engine responsible for collecting data related to network latency. This component plays a crucial role in providing insights into the performance of a network, helping organizations monitor and optimize their infrastructure. It supports various network protocols, including ICMP (Internet Control Message Protocol) and TCP (Transmission Control Protocol), depending on the need.

Let’s explore the key aspects and functionalities of the backend.

Core Features

The NWSLA measurement component provides an agent that runs on the SONiC switches as well as servers. This agent exposes an API to the ONES collector ecosystem and allows for triggering the latency measurements. Latency to a destination IP can be measured using either ICMP or TCP. The calculation involves the following parameters:
The above diagram explains the same. ONES Collectors control and facilitate the probes, allowing latency measurements to be performed by the whole ecosystem of endpoints. One of the important features of this agent is its ability to perform the calculations periodically. For instance, an operator may wish to calculate the latency between point A and point B every 5 minutes. This aids the operator in the following cases:

To cater to such requirements, ONES allows the operator to schedule the calculation of the latency periodically over specified time intervals. This allows the operators to understand the performance of the networks proactively.
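As a rough illustration of a periodic TCP probe (not the ONES implementation; the host, port, and interval are placeholders), a few lines of Python convey the idea:

import socket
import time

def tcp_latency_ns(host, port=443):
    # Use TCP connect time as a simple latency proxy
    start = time.perf_counter_ns()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter_ns() - start

for _ in range(3):
    print(f"latency: {tcp_latency_ns('192.0.2.10')} ns")  # documentation address
    time.sleep(300)  # probe every 5 minutes, as in the example above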

NanoSecond Level Precision

One of the unique features of the ONES infrastructure that calculates this latency is its ability to calculate the latency in terms of nanoseconds. Calculating latency in nanoseconds offers some unique advantages in scenarios where extremely precise and rapid measurements are essential, making it a perfect fit for datacenter networks. It allows:
In summary, calculating latency in nanoseconds offers advantages in situations where speed, precision, and real-time responsiveness are paramount.

Decoding Nanosecond Latency Calculation

Precision in nanosecond latency calculation is a sophisticated endeavor. ONES adopts an innovative approach by modeling the network analogous to an optical channel, ensuring a high degree of precision in latency calculation. The methodology involves sending bursts of packets and deriving latency measurements seamlessly, deviating from the conventional approach of correlating request and response times. This eliminates the need to calculate latency per request before initiating subsequent probes. The flexibility of ONES allows for the configuration of parameters to align with specific network requirements.

Scalability & Robustness

The ONES ecosystem excels in delivering exceptional scalability and robustness. The philosophy of ONES is seamlessly reflected in the design of its latency measurement, ensuring scalability and robustness are prioritized. This commitment is affirmed through various validations, including measuring latency under load, the seamless addition of new nodes for calculations, the capacity to handle and sustain a significant number of probes by the agent, built-in fault tolerance features, optimized resource utilization, and consistent operational and longevity behavior.

Use-Cases: Ping-pong Mesh

One of the simple use-case scenarios is to trigger the measurement across the endpoints of the network. This initiates the latency test from the endpoints attached to the network, ensuring that the packets used to measure latency traverse the network to reach the other endpoint. The measurement is performed proactively at a defined interval, allowing operators to check latency periodically. Identifying latency bottlenecks helps optimize resource allocation and maintain high-quality services.
Under such use-cases, the measurement of latency plays a vital role in optimizing overall network performance. Low-latency communication directly enhances user experience, aids in capacity planning, facilitates proactive issue resolution, and furnishes valuable data for making informed decisions about network infrastructure. These scenarios can be expanded, including the exploration of latency between the most distant leaf nodes, and so on.

ONES Network SLA - Future Looking

Subsequent releases of ONES provide robust support with advanced features built upon this foundational base. Initially, integration with the ONES UI will be seamless, offering comprehensive cloud integration support. Additionally, support for path tracing and availability metrics will be extended across the system.

In conclusion, the backbone of a network latency measurement tool functions as the core engine responsible for gathering, processing, and evaluating data to gauge the vitality and efficiency of a network. It stands as a pivotal element for organizations aiming to sustain ideal network latency, guaranteeing a smooth and responsive user experience.

Have comments or feedback? Please feel free to get in touch with us.
To experience SONiC, please try ONE Center: https://aviznetworks.com/one-center
For a detailed case study of SONiC, please refer here.


Network Copilot – Exploring the Capabilities and Utilities of a Gen AI Assistant

AI utilized in network management involves the deployment of artificial intelligence methodologies like machine learning, deep learning, and natural language processing. Its objective is to enhance the effectiveness, dependability, and security of any network.
Aviz Network Copilot 1.0 serves as an AI-driven network analysis tool designed to assist network operators in identifying performance bottlenecks and resource utilization challenges within their networks. By leveraging natural language prompts, operators can easily interact with the tool to gain insights into network performance metrics and effectively address any issues that may arise. This intuitive approach enhances the efficiency of network monitoring and troubleshooting, enabling operators to maintain optimal network performance and reliability.

Capabilities:

Data Ingestion and Storage:

This entails the procedures associated with gathering and storing data from SNMP and the ONES Collector, including metric data collection from EOS, SONiC, Cumulus, and NX-OS. This encompasses acquiring, transforming, and preserving data in an organized format to facilitate subsequent analysis or utilization. Standardized data collection makes Network Copilot ready for multi-vendor on-prem deployments.
Figure-1 : Device inventory

Chat Prompt:

Users will receive a sample question to start a conversation in a chat-based interaction. This question prompts participants to respond, steering the conversation. Additionally, the chat history can be exported using this prompt.

Responses are streamed, so the end user sees a continuous flow of data or information as the answer to the question is generated.

Figure-2 : Network Copilot homepage
Figure-3 : Network Copilot Chat window

Import Context:

In the model, network compliance is defined based on user preferences. Threshold values are adjustable to meet individual customer needs. The context can be imported and controlled through retrieval-augmented generation (RAG) backed by ChromaDB.
Figure-4 : Network Compliance modification
The model offers a content template for upload. Users can download it, make edits as needed, save the changes, and then upload the modified file.
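For a flavor of what RAG-backed context import can look like, here is a minimal ChromaDB sketch; the collection name and compliance text are hypothetical, and the actual Network Copilot pipeline may differ:

import chromadb

client = chromadb.Client()  # in-memory instance for illustration
collection = client.get_or_create_collection("network_compliance")

# Store a compliance rule the model can retrieve at answer time
collection.add(
    ids=["cpu-threshold"],
    documents=["CPU utilization above 80% for 5 minutes is non-compliant."],
)

hits = collection.query(query_texts=["What is the CPU limit?"], n_results=1)
print(hits["documents"])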

Multi-Language Support:

Aviz Network Copilot 1.0 initially facilitates user interaction with the model in both English and Japanese. Support for additional languages can be added based on specific requirements.
Figure-5 : Conversation in English
Figure-6 : Conversation in Japanese

Analytics

Network Copilot offers graphical representation of data using pie charts, bar charts, and timeline graphs to convey insights, trends, and patterns more effectively. The model also facilitates summarizing, describing, and analyzing the gathered data, while performing computations to determine averages, counts, percentages, and more. This aids in achieving greater clarity and comprehension of your network.

Figure-7 : Pie Chart representation of inventory
Figure-8 : Line Chart representing network utilization over time
Figure-9 : Example shows average CPU utilization for the past 3 months

Security:

Network Copilot - Use case:

Inventory and Accounting:

Network Copilot captures comprehensive network device information, including hostname, HWSKU, operating system version, interface details, overall capacity, and device uptime. This helps administrators maintain inventory records and account for network assets effectively.
Figure-10: Snapshot of model responding the device details from inventory

Capacity Planning:

This simplifies the administrator’s task by forecasting network capacity details through a comparison of available bandwidth with utilized bandwidth. It also assists network operators in designing the infrastructure required to support current and future network demands effectively.
Figure-11 : Model projecting the overall capacity and utilization based on past data
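A simple way to picture such a projection is a linear trend over past utilization; this is only a sketch with made-up numbers, not the model’s actual forecasting method:

import numpy as np

months = np.arange(6)  # six months of history
utilization = np.array([42.0, 45.5, 49.0, 53.5, 57.0, 61.0])  # hypothetical % of capacity

slope, intercept = np.polyfit(months, utilization, 1)  # fit a linear trend
forecast = slope * 12 + intercept  # project six months ahead
print(f"Projected utilization at month 12: {forecast:.0f}%")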

Anomaly Detection:

Network Copilot 1.0 is additionally trained to identify network failures resulting from sudden increases in traffic. Such spikes in CPU and memory utilization can lead to failures in various components, including control planes or links within the network infrastructure. By recognizing these patterns, Network Copilot 1.0 can help mitigate potential disruptions and proactively address issues before they escalate, thereby enhancing network stability and performance.
Figure-12 : Network Copilot detecting the peak usage of traffic
Figure-13 : Model responding with the HWSKU details that had peak network usage over the last 3 months
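One classic way to flag such spikes is a standard-deviation test against recent history; the sketch below is illustrative only and not Network Copilot’s internal detector:

import statistics

def is_anomalous(history, value, threshold=3.0):
    # Flag values more than `threshold` standard deviations from the mean
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(value - mean) / stdev > threshold

cpu_history = [31, 33, 30, 34, 32, 29, 33]  # hypothetical CPU samples (%)
print(is_anomalous(cpu_history, 95))  # -> True, a spike worth investigating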

Network Compliance:

The model comes pre-configured with default compliance thresholds that establish limits for all relevant metrics captured. Network Copilot is equipped to assess whether CPU, memory, bandwidth utilization, and network packet drops comply with these thresholds by comparing observed values with the predefined limits. These thresholds are customizable by users, allowing them to be adjusted as needed.
Figure-14 : Compliance supported on Network Copilot
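Conceptually, the compliance check reduces to comparing observed metrics against configured limits; here is a minimal sketch with hypothetical thresholds standing in for the model’s defaults:

# Hypothetical default thresholds; users can adjust these, as described above
THRESHOLDS = {"cpu_util": 80.0, "mem_util": 75.0, "bandwidth_util": 90.0, "pkt_drops": 100}

def check_compliance(observed):
    # True means the metric is within its configured limit
    return {m: observed[m] <= limit for m, limit in THRESHOLDS.items() if m in observed}

print(check_compliance({"cpu_util": 65.2, "mem_util": 81.9}))
# -> {'cpu_util': True, 'mem_util': False}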

Conclusion:

Aviz Network Copilot 1.0 leverages cutting-edge AI capabilities facilitated by large language models. Rather than relying on conventional methods for network access, users can engage with Network Copilot to assess various aspects of their network. This includes planning network capacity, identifying instances of network failure, and verifying compliance with predefined configurations and standards. Network Copilot offers an intuitive and efficient alternative to traditional network management approaches, empowering users to gain insights and make informed decisions regarding their network infrastructure.

Unlock the Network Copilot 1.0 experience—schedule a demo on your preferred date, and let us show you how it’s done!


Debunking 5 Common Myths About SONiC

In the ever-evolving world of networking, SONiC NOS (Software for Open Networking in the Cloud) has emerged as a game-changer for data center and edge networks. However, like any technology, SONiC has not been immune to myths and misconceptions. When we talk to users evaluating SONiC for their network, we often hear… “SONiC is not ready for us, at least just yet”. In this blog, we’ll debunk five common myths about SONiC.

1. SONiC is not ready for the Enterprise

Myth:  One of the most common things we hear is that SONiC does not have features such as EVPN or MC-LAG, and users are not comfortable with its quality standards. It is also said to be difficult to integrate with existing NetOps tools.

Reality: The table below gives a good high-level overview of what is already available for use cases across various network architectures. This list is evolving very fast, and if you follow the SONiC community, then within the next few months this list will grow in use cases, as well as in vendors who support those use cases.

Use Case        | Essential Protocols     | Supported By
IP CLOS         | BGP Unnumbered          | Most vendors
AI Fabric       | RoCE                    | Several vendors
IP CLOS w/EVPN  | EVPN T1 to T6           | Several vendors
Edge & Campus   | MLAG, 802.1x, STP, PoE  | Several vendors

To see for yourself that SONiC supports the above capabilities, and to try them out on hardware sourced from ANY of the white-box vendors, you can schedule a demo in our ONE Center. We will not only walk you through each of the available options, but also show how SONiC can be easily integrated within your existing NetOps environment.

At Aviz Networks we help accelerate SONiC Community feature development, and we work with customers to develop the features they need and give them back to the community. Essentially, Aviz is bringing a paradigm-shifting new concept to market called “Roadmap By-pass”, where customers get the features they need, when they need them, in a more agile fashion instead of a single vendor prioritizing their own desires or those of the “highest bidder”.

2. Deploying Community SONiC requires an army of Network Engineers

Myth:  One of the most persistent myths about SONiC is that it is exclusively designed for hyperscalers who have access to a large pool of network engineering expertise in-house. While the NOS offers powerful networking capabilities, it’s not a simple-to-deploy solution for all environments.

Reality: Yes, SONiC does require a good understanding of networking principles, SDN concepts, and knowledge of hardware compatibility, but the fact is a vast majority of large to medium enterprises have already deployed SONiC using their existing teams, with the added bonus of a little external help. Companies like eBay have benefited immensely, both in terms of cost savings and the flexibility of innovation, by deploying SONiC on white-box switches. The market leadership brief by Futuriom lists how networking users can accomplish a similar feat. According to publicly accessible information on the SONiC community page, numerous enterprises including Comcast, eBay, Target, and Bloomberg have either deployed SONiC or are currently in the process of doing so.

At Aviz, we have been helping enterprises of all sizes with purpose-built automation tools and data normalization solutions that make the SONiC deployment and operations quick and smooth. 

You can discover the magic behind eBay and some of our other rapidly expanding SONiC deployments by checking out our ‘Open Networking Enterprise Suite (ONES)’ and requesting a demo. 

ONES is not only helping large enterprises with their SONiC deployments, but dozens of small to medium enterprises in their own migration to SONiC. So, in reality, you don’t need an army of in-house engineers with PhDs, so long as you leverage the right resources and partners like Aviz, who are dedicated to your success and SONiC’s future.

Recently we went one step further to put AI to use for easier migration towards SONiC. Our latest solution Network Copilot™ is designed to generate templates for easy transition.

3. Community SONiC POCs are challenging, and the procurement process is a mess

Myth:  A lot of people assume that conducting Proof of Concepts (POCs) with SONiC can be a challenging endeavor, often compounded by a complex and cumbersome procurement process. The versatility SONiC offers across a diverse array of supported hardware and configurations makes it challenging to select the right components for your use cases. Additionally, navigating the procurement process, which involves evaluating multiple vendors and negotiating contracts, can be overwhelming and time-consuming.

Reality: SONiC’s open-source nature allows organizations to break free from vendor lock-in and choose best-of-breed components from various hardware vendors, tailoring their network to specific needs. Conducting a multi-vendor POC with SONiC can appear to be a complex undertaking, especially when you first start to think about what it takes for a successful deployment:

    • Selecting hardware vendors

    • Testing for performance and feature requirements

    • Scalability per your specific use cases

    • Ensuring interoperability with your existing ecosystem of infrastructure and tools

Thankfully, you don’t need to reinvent the wheel or spend time and money doing it. Aviz has done all the heavy lifting for you by partnering with nearly all major hardware vendors, to create an environment where any organization can conduct POCs across a wide range of options without having to invest a single dollar in purchasing hardware for POCs. 

The Aviz ONE Center is a perfect solution for low-effort, low-cost SONiC POCs with online and in-person access to try out SONiC capabilities on any hardware of your choice. Also, Aviz has created tools (also available in ONE Center) that can perform hundreds of tests for performance and scalability pertinent to your specific use cases. 

Additionally, today, many System Integrators (SIs) provide end-to-end solutions for SONiC, which means, you do not need to buy hardware, software, and support separately from multiple vendors.

In fact, many of our own SI partners have simplified the procurement process so much that they can generate a single BOM (bill of materials) across 10+ vendors and ensure that your procurement teams don’t have to interface with them individually.

4. Community SONiC is not secure because it’s open-source

Myth:  Many claim that SONiC is inherently insecure simply because it is open-source and exposes the source code to potential attackers. 

Reality: Exactly the opposite. Practically every Security Professional Aviz has encountered lobbies for Open Source because of the inherent review the software is given, and thus Open Source itself does not equate to insecurity. The security of open-source software depends on various factors, just like any other software implementation. While Open Source software by nature exposes the source code to everyone, which includes potential attackers, it also invites a large pool of experts to discover and address security vulnerabilities. The transparency of open-source projects often means vulnerabilities are identified and fixed faster than in proprietary software. 

The security of any software deployment or implementation, including SONiC, primarily depends on how well it is configured, maintained, and monitored. A vast community of SONiC developers actively scrutinizes and contributes to the codebase, ensuring that any security concerns are swiftly addressed. This makes SONiC more resilient to vulnerabilities and substantially reduces the time to review and approve its use for a wide range of applications and services.

5. Open Source SONiC is just a short-term trend

Myth:  Some suggest that SONiC is merely a short-term trend and will fade away in the near future. Several open-source and commercial Open NOS solutions existed before SONiC, gained prominence, but eventually got consumed into proprietary networking ecosystems.

Reality: The consistent growth in the adoption of SONiC across large, medium, and small enterprises, combined with support and stewardship by major players like The Linux Foundation (LF), Open Compute Project (OCP), Microsoft, and the vast majority of networking hardware vendors, is a testament to its staying power. 

SONiC’s modular and open-source approach to networking addresses many of the pain points organizations have been facing for decades, making it not just a trend, but a significant shift in the networking paradigm. As organizations seek more control over their network infrastructure and adapt to evolving cloud-based architectures, SONiC offers a robust solution to both challenges.

Additionally, SONiC has already garnered unparalleled support from the vast majority of networking vendors. This is the first time in history that an open-source NOS has been whole-heartedly embraced by networking giants like Arista, Broadcom, Cisco, Juniper, Marvell, and NVIDIA, along with numerous white-box vendors such as Celestica, Edgecore, Quanta, and Super Micro, and the list is growing every day.

In their recent 2023 report, Gartner explicitly stated: “Open networking has been replaced on the Hype Cycle with SONiC, which garners the most client interest of any open networking technology.”

A 5 Point Analysis on why Gartner Preferred SONiC Over Open Networking in their 2023 Hype Cycle is a great read that clearly explains this ongoing trend.

While technology trends may come and go, SONiC’s foundations align with the enduring demand for adaptable, cost-effective, and scalable networking solutions, suggesting that this is more than just a short-term trend in the networking landscape.

Conclusion

SONiC’s ascent in the networking sphere is undoubtedly well-deserved, poised to transform organizational networking strategies. By dispelling prevalent myths, we aim to offer clarity and inspire more entities to explore the myriad benefits SONiC offers. While due diligence is essential in adopting any technology, SONiC presents compelling options for next-generation networks. Understanding its true potential empowers organizations to make informed decisions about integrating SONiC into their network infrastructures.

Reach out to reserve your spot at our ONE Center for a proof of concept, where various vendor solutions, including Cisco SONiC, NVIDIA SONiC, Celestica SONiC, Marvell SONiC, Wistron SONiC, Edgecore Community SONiC, Supermicro SONiC, Enterprise SONiC, and more, can be thoroughly explored and tested, ushering in a new era of networking innovation. These vendors also support community innovators such as Microsoft with Azure SONiC deployments, bringing the same SONiC to other deployments.


Syncing Success: Elevating Network Monitoring with Time-Synced Excellence in the SONiC Landscape

In the dynamic landscape of network monitoring, Time Synchronization emerges as a pivotal force, particularly in industries where precise packet timing is paramount. This is evident in time-sensitive applications like algorithmic trading platforms, emergency response systems, and Telco network monitoring, where split-second decisions are imperative. It forms the bedrock for achieving optimal Quality of Service (QoS), fault detection-diagnosis, and security threat detection. From enhancing call detail record analysis to synchronizing subscriber experience monitoring, Time Synchronization emerges as the unsung hero, orchestrating precision and efficiency in the symphony of network operations.

Why do we need packet timestamping?

Precise timestamps help pinpoint delays, identify network bottlenecks, optimize routing, and ensure adherence to service-level agreements.

1. Detecting the congestion point on the path of a flow: Monitor packet delays at various points along the path by analyzing the corresponding packet timestamps. This also helps in jitter and throughput analysis, as well as packet loss detection.

2. Path Tracing: By examining timestamps at different network devices, administrators can trace the path of a flow and pinpoint specific devices or links where congestion is likely occurring.

3. Arrival sequence validation: Arrival sequence validation helps confirm that packets are reaching their destination in the correct order. It also helps achieve protocol compliance, avoid data corruption, and improve reliability.

4. Security incident investigation: In cybersecurity, timestamps are essential for investigating security incidents. Analyzing the timing of events helps in understanding the sequence of actions during an incident.

5. Troubleshooting and debugging network delays: Timestamps facilitate the correlation of events across different network devices, aiding in troubleshooting and debugging by establishing a chronological order of occurrences.

6. Dynamic Path Adjustments: Implement dynamic path adjustments to reroute traffic away from congested paths. This adaptive approach helps in mitigating congestion dynamically.

By employing a packet timestamping feature, network administrators can effectively detect congestion points, network delays, and threat issues, allowing for proactive management and optimization of network performance. Regular monitoring and analysis are essential for maintaining a resilient and efficient network.
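Computing the delay between two such timestamps is simple arithmetic over (seconds, nanoseconds) pairs; the sketch below uses the values from the decoder example later in this post:

def delta_ns(ts1, ts2):
    # Each timestamp is a (seconds, nanoseconds) pair
    s1, n1 = ts1
    s2, n2 = ts2
    return (s2 - s1) * 1_000_000_000 + (n2 - n1)

# Values from the decoder output shown below
print(delta_ns((1665, 466981562), (1665, 466982276)))  # -> 714 ns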

How are we enabling Network Administrators?

Open Packet Broker (OPB) is the industry’s first software-based containerized Network Packet Broker (NPB) application built on top of the open-source SONiC NOS to enable monitoring and security tools to access the network traffic. OPBNOS stands out with its support for packet timestamping. Leveraging modern ASIC capabilities, it allows users to configure timestamps per port or flow, providing unparalleled precision. Packet timestamps can be added at ingress/egress on every port. Achieving precise time synchronization in network packet brokering can be accomplished through two essential methods.

1. Timestamping the packets intercepted by the network packet broker devices is a fundamental approach. This involves assigning a precise time reference to each packet, allowing for accurate sequencing and analysis.

2. Synchronizing the network packet brokers with the network time. This synchronization can be achieved through widely used protocols such as Network Time Protocol (NTP) or the high-precision Precision Time Protocol (PTP).

Network operators would like to insert timestamps into all packets ingressing from network ports and egressing out to tool ports.

Fig : Deployment representation of Time-Synchronized OPB Network

opbnos# conf t
opbnos(config)# timestamping enable | disable
opbnos# conf t
opbnos(config)# interface ethernet Ethernet1/1
opbnos(config-if)# timestamp enable stage ingress source-id NE1Eth1
opbnos# conf t
opbnos(config)# interface ethernet Ethernet2/1
opbnos(config-if)# timestamp enable stage egress source-id NE1Eth2

Fig : TimeStamp Configuration at Interface level in OPBNOS

OPBNOS also offers a packet timestamp decoder, which helps in analyzing the packet capture dump and decoding the timestamp info for customers. It is also use-case driven, and the analyzer can be extended to serve specific use-cases post decoding in the future.

test@aviz ~ % python3 timestamp_decoder.py
Timestamp Data : 0xebb8a66c01a05bd592ba00f577980000000001a05bd59584005bbbdd
Source-1 : Seconds 1665 and Nanoseconds 466981562 and origin id : 0x7abbcc
Source-2 : Seconds 1665 and Nanoseconds 466982276 and origin id : 0x2dddee
Time Difference : 0 Seconds and 714 Nanoseconds

Fig : TimeStamp Decoder to verify/test the time difference in Network.

Conclusion

In conclusion, packet timestamping is the bedrock of the modern network monitoring world. Packet timestamping, with its precision, lends a temporal dimension to data, enabling meticulous analysis, troubleshooting, and compliance. When integrated seamlessly into any network monitoring setup using Open Packet Broker (OPB, based on SONiC NOS), this timestamp feature becomes invaluable, orchestrating the symphony of network operations.

Time is not just a metric; it’s the heartbeat of network resilience and innovation.

 


For any further queries or more information, please don’t hesitate to contact us.
