By The Oldcommguy

5 Reasons Traditional Packet Capture Will Not Work Anymore!


Traditionally, network administrators have learned to use packet analysis to troubleshoot complex network problems. Some have invested in expensive traffic capture appliances that capture and retain traffic for later analysis: when a performance degradation is reported, they extract trace files, load them into a network analyzer (Wireshark or a commercial tool) and examine the packets to determine what caused the reported degradation.

The evolution of IT systems is challenging this way of undertaking network diagnostics.

Traffic-focused analysis will remain the main source for troubleshooting network issues, but network teams need to take a new perspective on it.

What trends are making it harder for network administrators to diagnose performance issues fast enough with stream-to-disk appliances and network analyzers?

WHAT ARE THE 5 CHALLENGES?

The top 5 challenges to successful old-school packet analysis!

1- OVERALL SYSTEM COMPLEXITY

  • IT systems are getting more and more redundant: data streams can take multiple paths.

  • The number of applications is now in the hundreds or thousands.

  • The complexity of each application chain is such that deciding where to tap traffic is a challenge in itself.

  • Traffic volumes are increasing very fast: we used to measure actual usage in hundreds of Mbps; we now talk about usage of over 10 Gbps in corporate networks.

Consider a 100 MB trace file loaded into a software analyzer: on a 10 Gbps link, that corresponds to only 80 ms of traffic (a quick check follows below). Can you really make a diagnosis from such a short extract?
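As a quick sanity check, here is a minimal Python sketch of that arithmetic (the function name and figures are our own illustration, not part of any product):

# How much wall-clock time does a trace file cover on a loaded link?
def capture_window_seconds(trace_bytes, link_bps):
    # bytes -> bits, divided by the link rate in bits per second
    return trace_bytes * 8 / link_bps

# A 100 MB trace on a fully loaded 10 Gbps link
print(capture_window_seconds(100e6, 10e9))  # -> 0.08 s, i.e. 80 ms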

2- HISTORY: THROUGHPUT AND DATA RETENTION

Stream-to-disk appliances work on the basis of storing production traffic for later analysis. There is a direct relationship between the throughput to be analyzed, the storage available and the retention time of the data. As an example, keeping 7 days of history for an average of 500 Mbps of traffic requires roughly 38 TB of storage; conversely, on a fully loaded 10 Gbps link, a 12 TB appliance retains less than 3 hours of traffic (see the sketch after this list). The lack of sufficient retention raises several challenges:

  • Events are not reported in real time and are often intermittent: if you do not get a chance to look at them during the short retention window, you cannot diagnose them.

  • Drawing conclusions about response times without being able to compare them to a baseline, captured while the application was running properly, does not let you determine whether there is a degradation or where it comes from.
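To make the storage/retention trade-off concrete, here is a minimal Python sketch (the function names and appliance figures are illustrative assumptions, not vendor specifications):

# Storage = average rate x retention time; the three are locked together.
def storage_needed_tb(avg_bps, retention_days):
    # bits/s -> bytes/s, times seconds of retention, expressed in TB
    return avg_bps / 8 * retention_days * 86400 / 1e12

def retention_hours(storage_tb, avg_bps):
    # TB -> bits, divided by the link rate, expressed in hours
    return storage_tb * 1e12 * 8 / avg_bps / 3600

print(storage_needed_tb(500e6, 7))  # 500 Mbps for 7 days -> ~37.8 TB
print(retention_hours(12, 10e9))    # 12 TB at 10 Gbps -> ~2.7 hours

Doubling the link rate halves the retention for the same disk, which is why stream-to-disk history keeps shrinking as networks get faster.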

3- "IT’S NOT THE NETWORK" ISN’T ENOUGH OF AN ANSWER ANYMORE

The number of mission-critical applications has grown in proportion, and a performance degradation now has business consequences like never before. Tools that can only show that the network is performing fine, and is therefore not the cause of the degradation, are no longer enough.

Only solutions that can locate the root cause of a degradation end to end (wherever it comes from) bring enough value today.

4- CHANGE IN THE DATA CENTER TOPOLOGY

  • Data centers have changed radically with the need for redundancy and resilience: network flows can be distributed across different paths. Most applications include load balancing and HA mechanisms, and it is hard to be sure you are capturing the right traffic without a very wide (and voluminous) capture.

  • New technologies have brought new challenges:

  • Virtualization is now broadly used. This means that much of the key traffic may never pass through a physical cable or switch at any point.

  • The dynamism of workloads (e.g. vMotion) and the automation of deployments create an additional challenge for legacy NPM solutions based on capturing traffic on a limited set of physical network segments.

5- HUMAN RESOURCE AND EXPERTISE

IT staffing has not grown accordingly, so there is no more time or expertise available to look at trace files. Manual handling of packet data does not scale to modern data centers. Here is the challenge: what new approach can enable network teams to keep troubleshooting and monitoring application performance efficiently on their infrastructure?

TAKE A NEW APPROACH TO NETWORK TRAFFIC ANALYSIS

To make the most of network traffic to troubleshoot performance degradations and monitor end-user response times, you need to take a new approach to how you gather and analyze information from your network traffic.

We have summarized our vision of how you can overcome these challenges in a short guide; download it now!

The 6 challenges - https://accedian.com/enterprises/blog/packet-capture-6-reasons-new-approach/

Author - Boris Rogier is Managing Director and co-founder of PerformanceVision (now Accedian), with 15 years of experience in both the telecom and performance monitoring industries. Boris is responsible for the overall operations of PerformanceVision and contributes to the development of solutions designed to help customers troubleshoot performance degradations faster, avoid incidents through proactive monitoring and make better decisions in infrastructure and application delivery. PerformanceVision's revolutionary technology extracts detailed performance analytics from network traffic and provides 360° visibility from the network layer to application transactions, on both physical and cloud/virtual networks, in real time.

For more information, please visit: https://accedian.com .
