
5 Key Benefits of Using Cisco TRex Traffic Generator for Network Load Testing



As we were designing yet another router, we used a very handy tool to test the network’s performance – the Cisco TRex traffic generator. So what kind of tool is it? How does one use it? And how can it help network developers and engineers? Let’s tackle these questions one at a time.

Unlike traditional commercial traffic generators, which are typically used for testing network infrastructure devices, TRex represents an evolution in testing methodology: it provides sophisticated, stateful traffic generation that can simulate realistic application traffic patterns, going beyond what basic commercial tools offer. In addition, TRex's multi-RX software model lets it apply dynamic filters while maintaining high traffic rates, which is essential in environments with numerous virtual interfaces and routing configurations.

 

1. What is Cisco TRex Traffic Generator?

It is an open-source traffic generator that runs on standard Intel processors using DPDK and supports both stateful and stateless modes. It is relatively simple to use and fully scalable.

The documentation for it can be found here.

TRex allows developers to generate different types of traffic and analyze the data received in response. The analysis is performed at the MAC and IP levels. TRex users can set the packet size and quantity, as well as control the data transfer rate. A key feature of TRex is its accurate TCP implementation, which is what makes it possible to generate Layer 7 traffic with high performance and scalability in network testing.
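To give a sense of how this Layer 7 traffic is described, the sketch below shows roughly what a stateful (ASTF) profile looks like, modeled on the astf/http_simple.py example shipped in the TRex archive. The class names come from the trex.astf.api module; the pcap path and connection rate are assumptions based on the bundled examples rather than required values.

# Minimal ASTF profile sketch (modeled on astf/http_simple.py from the TRex package).
# The pcap path and cps value are illustrative assumptions.
from trex.astf.api import *

class HttpSimpleProfile():
    def get_profile(self, **kwargs):
        # client and server IP pools used to generate the flows
        ip_gen_c = ASTFIPGenDist(ip_range=["16.0.0.0", "16.0.0.255"], distribution="seq")
        ip_gen_s = ASTFIPGenDist(ip_range=["48.0.0.0", "48.0.255.255"], distribution="seq")
        ip_gen = ASTFIPGen(glob=ASTFIPGenGlobal(ip_offset="1.0.0.0"),
                           dist_client=ip_gen_c,
                           dist_server=ip_gen_s)
        # replay an HTTP browsing capture as a stateful L7 template, 1 new connection per second
        return ASTFProfile(default_ip_gen=ip_gen,
                           cap_list=[ASTFCapInfo(file="../avl/delay_10_http_browsing_0.pcap", cps=1)])

def register():
    return HttpSimpleProfile()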

The traffic generator works in a Linux environment.

A key aspect that sets TRex apart is its use of DPDK, which bypasses the performance bottlenecks of the Linux network stack. DPDK (Data Plane Development Kit) is a set of libraries and drivers for fast packet processing: it takes the Linux network stack out of the packet-processing path so the tool can interact directly with the network device being tested.

DPDK transforms a general-purpose processor into a packet-forwarding server, which eliminates the need for expensive switches and routers. However, DPDK does have hardware limitations, and the list of supported network adapters can be found here. The most popular platform is Intel, i.e. hardware that works with the Linux drivers e1000, ixgbe, i40e, ice, fm10k, ipn3ke, ifc, and igc.
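Before installing TRex, it is a good idea to check which kernel driver your network card currently uses and compare it against the supported list above. A minimal check (the interface name enp1s0f0 is only a placeholder):

[bash]>lspci | grep Ethernet

[bash]>ethtool -i enp1s0f0 | grep driver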

TRex is also used for testing network infrastructure devices, which are becoming increasingly complex and require stateful traffic generators to simulate realistic application traffic patterns for accurate performance measurement.

Another important nuance to consider is that in order for a TRex server to work with speeds of 10Gb/s and above, it requires the use of a multi-core processor, meaning at least 4 cores, and preferably an Intel CPU with hyper-threading support.

2. How to get and try out TRex

1) Download the archive from the trex-tgn.cisco.com server: trex-tgn.cisco.com/trex/release/

Download and unpack the archive in the user’s home directory “/home/user”, where “user” is replaced by the user name.

[bash]>wget --no-cache https://trex-tgn.cisco.com/trex/release/latest

[bash]>tar -xzvf latest


 

2) Configure the interfaces to send and receive data

Configure the interfaces by using the utility “dpdk_setup_ports.py”, which is included in the TRex package. The network interfaces used by TRex can be configured at the MAC or IP level. To launch the utility in interactive configuration mode, run “sudo ./dpdk_setup_ports.py -i”.

The first step is to drop the MAC-based configuration (Do you want to use MAC-based config? (y/n) n).

The second step is to select a pair of network interfaces. In our example, the Intel X710 network card supports 4 network interfaces, and we will use the 1st and 4th network card ports.

In the third step, the system offers to automatically create a closed configuration, in which data leaves port 1 and goes to port 2 (and back) on the same PC. We chose not to use this scheme and instead configured routing between 2 PCs.

The fourth and fifth steps are to save the configuration to the file “/etc/trex_cfg.yaml” and confirm it.

As an example, let’s take a look at the configuration at the IP level for the following connection scheme:

The configuration file is located here: “/etc/trex_cfg.yaml”. An example of a simple configuration file is shown below, using a network card with 2 ports and a CPU supporting 8 threads:

 

Config file generated by dpdk_setup_ports.py

The config file generated by dpdk_setup_ports.py is a YAML file that contains the configuration settings for the TRex server: it defines the network interfaces, port information, and platform (CPU thread) settings the server needs to operate correctly and efficiently. Here is an example of such a file:
 


version: 2
interfaces: ['01:00.0', '01:00.3']
port_info:
  - ip: 192.168.253.106
    default_gw: 192.168.253.107
  - ip: 192.168.254.106
    default_gw: 192.168.254.107
platform:
  master_thread_id: 0
  latency_thread_id: 1
  dual_if:
    - socket: 0
      threads: [2,3,4,5,6,7]

In this configuration:
  • interfaces lists the network interfaces used by the TRex server, identified by their PCI addresses.
  • port_info specifies the IP addresses and default gateways for each interface, ensuring proper routing of network traffic.
  • platform settings define the master thread ID, latency thread ID, and dual interface settings, optimizing the server’s performance.

By correctly configuring these settings, you ensure that the TRex server can handle high-speed network traffic and provide accurate testing results.
 


In the configuration:

  • '01:00.0' and '01:00.3' are the names (PCI addresses) of the Ethernet interfaces in the Linux system being used.
  • ip: 192.168.253.106 is the address of the TRex server PC port from which the traffic is generated.
  • default_gw: 192.168.253.107 is the address of port 1 of the DUT (Device Under Test) PC.
  • ip: 192.168.254.106 is the address of the TRex server PC port on which the traffic is received after passing through the QoS rules.
  • default_gw: 192.168.254.107 is the address of the 2nd port of the DUT.

Warning! TRex does not allow the same subnets to be reused when generating traffic flows; that is why the subnets 16.0.0.0 and 48.0.0.0 should be used for the generated packets.
 

3) Configuring the interfaces on a remote machine

Now we have to set up the forwarding and routes so the DUT that we are routing traffic through knows where to send and receive the packets.

Let’s configure routing rules on the DUT:

sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

sudo route add -net 16.0.0.0 netmask 255.0.0.0 gw 192.168.253.106

sudo route add -net 48.0.0.0 netmask 255.0.0.0 gw 192.168.254.106
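On systems where the legacy route command is not available, the equivalent iproute2 commands would look roughly like this (a sketch; the gateway addresses must match the port addresses in your trex_cfg.yaml):

sudo sysctl -w net.ipv4.ip_forward=1

sudo ip route add 16.0.0.0/8 via 192.168.253.106

sudo ip route add 48.0.0.0/8 via 192.168.254.106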
 

4) Launch the TRex server in astf mode

cd v2.XX

sudo ./t-rex-64 -i --astf

If the TRex server is launched successfully, the system will display that the Ethernet ports that will be used for testing are bound:

The ports are bound/configured.

port : 0
------------
  link         : link : Link Up - speed 10000 Mbps - full-duplex
  promiscuous  : 0

port : 1
------------
  link         : link : Link Up - speed 10000 Mbps - full-duplex
  promiscuous  : 0

number of ports         : 2
max cores for 2 ports   : 1
tx queues per port      : 3

For detailed analysis and monitoring, you can also capture the network traffic with Wireshark to examine the data flow and performance.
 

5) Launch the TRex console

With the help of the console, which should be open in a separate window, start the generation of the traffic flow using the pre-made examples (the folder with the examples can also be found in the TRex archive):

cd v2.XX

./trex-console

start -f astf/http_simple.py -m 1

start (options):
  -a : all ports
  -port 1 2 3 : ports 1, 2 and 3
  -d : duration (-d 100, -d 10m, -d 1h)
  -m : stream strength (-m 1, -m 1gb, -m 40%)
  -f : load the streams file from disk

After a successful launch, the traffic statistics will be displayed in the TRex server console:

Global stats enabled
Cpu Utilization : 0.3 % 0.6 Gb/core
Platform_factor : 1.0
Total-Tx        : 759.81 Kbps
Total-Rx        : 759.81 Kbps
Total-PPS       : 82.81 pps
Total-CPS       : 2.69 cps

Expected-PPS    : 0.00 pps
Expected-CPS    : 0.00 cps
Expected-L7-BPS : 0.00 bps

Active-flows    : 2  Clients : 0  Socket-util : 0.0000 %
Open-flows      : 641
 

3. TRex Traffic Generation Capabilities

TRex is a powerful traffic generator that offers a wide range of capabilities to simulate various types of network traffic. With TRex, users can generate traffic at multiple layers, including L2, L3, L4, and L7. This allows for the simulation of various network protocols, such as TCP, UDP, ICMP, and HTTP.

One of the standout features of TRex is its ability to support multiple streams. This means you can simulate multiple traffic flows simultaneously, which is particularly useful for testing network devices and infrastructure under realistic traffic conditions. Whether you’re dealing with a single stream or a complex mix of traffic, TRex can handle it. TRex also includes stream interactive support, enabling dynamic management and monitoring of multiple data streams. This support facilitates real-time interactions and automated functionalities, allowing users to define various packet handling profiles and obtain detailed statistics related to latency, jitter, and traffic flow for each stream.

In addition to supporting multiple streams, TRex also provides advanced features like dynamic filters. These filters allow you to filter traffic based on various criteria, such as source and destination IP addresses, ports, and protocols. This is incredibly useful for testing network devices and infrastructure under specific traffic conditions, ensuring that your network can handle the exact types of traffic it will encounter in the real world.

TRex also offers a comprehensive set of APIs for automating traffic generation and analysis. This allows users to integrate TRex with other tools and systems, such as Wireshark, to capture and analyze network traffic. By leveraging these APIs, you can create automated testing workflows that save time and increase the accuracy of your tests.
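As a rough illustration of that integration, the sketch below uses the stateless automation API to capture received packets into a pcap file that Wireshark can open. The capture calls (set_service_mode, start_capture, stop_capture) are part of the STL client's service mode; treat the exact argument names as assumptions and verify them against the API reference of your TRex version.

# Sketch: capture RX traffic on port 0 into a pcap for offline analysis in Wireshark.
# Method names/arguments are assumptions based on the STL client's service-mode capture API.
from trex_stl_lib.api import *

c = STLClient(server='127.0.0.1')
c.connect()
try:
    c.reset(ports=[0])
    c.set_service_mode(ports=[0], enabled=True)       # captures require service mode
    cap = c.start_capture(rx_ports=[0], limit=1000)   # buffer up to 1000 received packets
    # ... generate or forward traffic here ...
    c.stop_capture(cap['id'], output='/tmp/port0_rx.pcap')
    c.set_service_mode(ports=[0], enabled=False)
finally:
    c.disconnect()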
 

4. TRex Traffic Modes

TRex supports two primary traffic modes: stateless and stateful. Each mode has its own strengths and is suitable for different types of testing scenarios.

In stateless mode, TRex generates high-performance traffic without reacting to incoming traffic. This mode is ideal for testing network devices and infrastructure under high-traffic conditions, where the focus is on throughput and performance. Stateless mode is perfect for stress-testing your network to ensure it can handle large volumes of traffic without degradation.

On the other hand, stateful mode allows TRex to react to incoming traffic, making it suitable for testing network devices and infrastructure under more realistic traffic conditions. While stateful mode may have limited performance compared to stateless mode, it provides a more accurate simulation of real-world network traffic, where devices need to respond to incoming requests and maintain state information.

TRex also supports various traffic continuity modes, including continuous and limited traffic. Continuous traffic is useful for testing network devices and infrastructure under sustained traffic conditions, ensuring that your network can handle long-term loads without issues. Limited traffic, on the other hand, is useful for testing network devices and infrastructure under bursty traffic conditions, where traffic patterns are more sporadic and unpredictable.
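In the stateless API these continuity modes map onto stream transmit modes. The sketch below shows the three common ones; the class names are from trex_stl_lib.api, and the packet contents and rates are placeholders:

from trex_stl_lib.api import *

# a simple UDP packet used by all three streams (contents are illustrative)
pkt = STLPktBuilder(pkt=Ether()/IP(src="16.0.0.1", dst="48.0.0.1")/UDP(dport=12, sport=1025)/(50 * 'x'))

# continuous traffic: transmit at a fixed rate until stopped (or until the -d duration expires)
continuous = STLStream(packet=pkt, mode=STLTXCont(pps=1000))

# limited traffic: send a fixed number of packets, then stop
single_burst = STLStream(packet=pkt, mode=STLTXSingleBurst(total_pkts=5000, pps=1000))

# bursty traffic: repeat a burst several times with an inter-burst gap (in usec)
multi_burst = STLStream(packet=pkt, mode=STLTXMultiBurst(pkts_per_burst=1000, ibg=1000000.0, count=5, pps=1000))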
 

5. TRex Server and Deployment

The TRex server is a powerful tool for generating network traffic and testing network infrastructure functionality. It can be deployed in various environments, including virtual machines, containers, and bare-metal servers, making it versatile and adaptable to different testing needs.

To deploy the TRex server, follow these steps:

  1. Download the TRex software from the official TRex website.
  2. Install the TRex software on a server or virtual machine. Ensure that the server meets the hardware requirements, such as a multi-core processor and supported network adapters.
  3. Configure the network interfaces and port information using the dpdk_setup_ports.py utility. This step involves setting up the network interfaces at the MAC or IP level and saving the configuration to a file.
  4. Start the TRex server using the t-rex-64 command. For example, to start the server in ASTF mode, run “cd v2.XX” and then “sudo ./t-rex-64 -i --astf”.
  5. Use the TRex console to configure and run traffic tests. The console allows you to start traffic generation, monitor performance, and analyze results.

The TRex server supports multiple streams, dynamic filters, and enhanced route refresh capability, making it a powerful tool for testing network infrastructure functionality. By following these steps, you can deploy the TRex server effectively and leverage its advanced features for comprehensive network testing.
 

6. Configuring TRex Traffic Profiles

TRex traffic profiles are used to define the characteristics of the traffic generated by TRex. These profiles can be defined using a variety of methods, including Python scripts and XML files, providing flexibility in how you configure your tests.

TRex provides a comprehensive set of APIs for configuring traffic profiles. These APIs allow you to define traffic patterns, packet sizes, and transmission rates, giving you complete control over the traffic generated by TRex. Whether you need to simulate a simple traffic flow or a complex mix of traffic types, TRex’s APIs make it easy to customize traffic profiles to meet your specific testing requirements.
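For example, a minimal stateless traffic profile in the form the console loads from disk with “start -f <file>.py” might look roughly like this; the get_streams()/register() convention follows the stl/ examples bundled with TRex, and the addresses and rate are placeholders:

from trex_stl_lib.api import *

class SimpleUDPProfile(object):
    # Minimal single-stream profile; load it with "start -f <your_profile>.py" in the TRex console.
    def get_streams(self, direction=0, **kwargs):
        base_pkt = Ether()/IP(src="16.0.0.1", dst="48.0.0.1")/UDP(dport=12, sport=1025)
        pad = (64 - len(base_pkt)) * 'x'   # pad the packet to 64 bytes
        return [STLStream(packet=STLPktBuilder(pkt=base_pkt/pad),
                          mode=STLTXCont(pps=1000))]

# TRex discovers the profile through this hook
def register():
    return SimpleUDPProfile()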

In addition to custom traffic profiles, TRex also offers a range of pre-defined traffic profiles that can be used for common testing scenarios. These profiles can be customized and extended to meet your specific needs, saving you time and effort in setting up your tests. By leveraging these pre-defined profiles, you can quickly get started with your testing and focus on analyzing the results.

With TRex’s powerful traffic generation capabilities, flexible traffic modes, and customizable traffic profiles, you have all the tools you need to thoroughly test your network infrastructure and ensure it can handle the demands of real-world traffic.
 

7. Measuring Latency and Performance

Measuring latency and performance is critical when testing network infrastructure functionality. TRex provides several tools and features for this purpose, ensuring that you can obtain accurate and detailed measurements.
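One way to do this from the automation API is to attach latency flow statistics to a stream and read the per-stream latency block from the collected statistics. The sketch below illustrates the idea; the exact field names under stats['latency'] are assumptions and should be checked against your TRex version.

# Sketch: measure per-stream latency/jitter with STLFlowLatencyStats (pg_id 10 is arbitrary).
from trex_stl_lib.api import *

c = STLClient(server='127.0.0.1')
c.connect()
try:
    c.reset(ports=[0, 1])
    pkt = STLPktBuilder(pkt=Ether()/IP(src="16.0.0.1", dst="48.0.0.1")/UDP(dport=12, sport=1025)/(50 * 'x'))
    # latency streams are timestamped by TRex so it can compute latency and jitter per packet group
    lat_stream = STLStream(packet=pkt,
                           mode=STLTXCont(pps=1000),
                           flow_stats=STLFlowLatencyStats(pg_id=10))
    c.add_streams(lat_stream, ports=[0])
    c.start(ports=[0], duration=10)
    c.wait_on_traffic()

    lat = c.get_stats()['latency'][10]['latency']
    print('average: {} us, jitter: {} us, max: {} us'.format(
        lat.get('average'), lat.get('jitter'), lat.get('total_max')))
finally:
    c.disconnect()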
 

8. Development and testing automation using TRex

We wrote a lot of tests for TRex while we were developing the network router. Naturally, we considered whether we could run them automatically using Python. This is what we came up with as a result:

We started the TRex server in stl mode:

cd v2.XX

sudo ./t-rex-64 -i --stl

We set an environment variable for Python, since TRex works in conjunction with Python.

export PYTHONPATH=/home/!!!user!!!/v2.XX/automation/trex_control_plane/interactive, where “!!!user!!!” is the user name and home directory, and v2.XX is the version of the TRex software that was downloaded and unpacked into this folder.

Then we launched the traffic generator with Python. The example configuration listing is shown below.

python example_test_2bidirectstream.py

Expected result:

Transmit: 10000.24576 Mbit/s Receive: 10000.272384 Mbit/s
Stream 1 TX: 4487179200 Bit/s RX: 4487179200 Bit/s
Stream 2 TX: 2492873600 Bit/s RX: 2492873600 Bit/s
Stream 3 TX: 1994294400 Bit/s RX: 1994294400 Bit/s
Stream 4 TX: 997147200 Bit/s RX: 997147200 Bit/s

Let’s break this example down:

c = STLClient(server='127.0.0.1')

Create a connection to the TRex server. In our example, the connection was created on the same machine as the server.

  • “base_pkt_dir_a, base_pkt_dir_b, base_pkt_dir_c, base_pkt_dir_d” are packet templates with source and destination addresses and source and destination ports. In this example, 4 streams are created: 2 in one direction and 2 in the opposite direction.
  • “s1, s2, s3, s4” are STLStream objects holding the parameters of the generated streams, such as the packet group ID and bitrate. In our case, the parameters were: ID1 = 4.5 Gbps, ID2 = 2.5 Gbps, ID3 = 2 Gbps, ID4 = 1 Gbps.

The protocol plugin infrastructure allows for the implementation of various client-side L3 protocols as self-contained plugins, facilitating scalable operations and testing of network protocols.

Stream configuration file listing, example_test_2bidirectstream.py:
 


# get TRex APIs
from trex_stl_lib.api import *
 
c = STLClient(server = '127.0.0.1')
c.connect()
 
try:
    # create a base packet with scapy
    base_pkt_dir_a = Ether()/IP(src="16.0.0.1",dst="48.0.0.1")/UDP(dport=5001,sport=50001)
    base_pkt_dir_b = Ether()/IP(src="48.0.0.1",dst="16.0.0.1")/UDP(dport=50001,sport=5001)
 
    base_pkt_dir_c = Ether()/IP(src="16.0.0.2",dst="48.0.0.2")/UDP(dport=5002,sport=50002)
    base_pkt_dir_d = Ether()/IP(src="48.0.0.2",dst="16.0.0.2")/UDP(dport=50002,sport=5002)
 
    # pps : float
    # Packets per second
    #
    # bps_L1 : float
    # Bits per second L1 (with IPG)
    #
    # bps_L2 : float
    # Bits per second L2 (Ethernet-FCS)
    packet_size = 1400
 
    def pad(base_pkt):
        pad = (packet_size - len(base_pkt)) * 'x'
        return pad
 
    s1 = STLStream(packet=STLPktBuilder(base_pkt_dir_a/pad(base_pkt_dir_a)), mode=STLTXCont(bps_L2=4500000000), flow_stats=STLFlowStats(pg_id=1))
    s2 = STLStream(packet=STLPktBuilder(base_pkt_dir_b/pad(base_pkt_dir_b)), mode=STLTXCont(bps_L2=2500000000), flow_stats=STLFlowStats(pg_id=2))
    s3 = STLStream(packet=STLPktBuilder(base_pkt_dir_c/pad(base_pkt_dir_c)), mode=STLTXCont(bps_L2=2000000000), flow_stats=STLFlowStats(pg_id=3))
    s4 = STLStream(packet=STLPktBuilder(base_pkt_dir_d/pad(base_pkt_dir_d)), mode=STLTXCont(bps_L2=1000000000), flow_stats=STLFlowStats(pg_id=4))
 
    my_ports = [0, 1]
 
    c.reset(ports = [my_ports[0], my_ports[1]])
 
    # add the streams
    c.add_streams(s1, ports = my_ports[0])
    c.add_streams(s2, ports = my_ports[1])
    c.add_streams(s3, ports = my_ports[0])
    c.add_streams(s4, ports = my_ports[1])
 
    # start traffic with limit of 10 seconds (otherwise it will continue forever)
    # bi direction
    testduration = 10
    c.start(ports=[my_ports[0], my_ports[1]], duration=testduration)
    # hold until traffic ends
    c.wait_on_traffic()
 
    # check out the stats
    stats = c.get_stats()
 
    # get global stats
    totalstats = stats['global']
    totaltx = round(totalstats.get('tx_bps'))
    totalrx = round(totalstats.get('rx_bps'))
    print('Transmit: {} Mbit/s Receive: {} Mbit/s'.format(totaltx / 1000000, totalrx / 1000000))
    c.clear_stats(ports = [my_ports[0], my_ports[1]])
 
    # get flow stats
    totalstats = stats['flow_stats']
    stream1 = totalstats[1]
 
    stream2 = totalstats[2]
    stream3 = totalstats[3]
    stream4 = totalstats[4]
    totaltx_1 = stream1.get('tx_pkts')
    totalrx_1 = stream1.get('rx_pkts')
    print('Stream 1 TX: {} Bit/s RX: {} Bit/s'.format((totaltx_1['total'] / testduration * packet_size * 8),
                                                               (totalrx_1['total'] / testduration * packet_size * 8)))
    totaltx_2 = stream2.get('tx_pkts')
    totalrx_2 = stream2.get('rx_pkts')
    print('Stream 2 TX: {} Bit/s RX: {} Bit/s'.format((totaltx_2['total'] / testduration * packet_size * 8),
                                                               (totalrx_2['total'] / testduration * packet_size * 8)))
    totaltx_3 = stream3.get('tx_pkts')
    totalrx_3 = stream3.get('rx_pkts')
    print('Stream 3 TX: {} Bit/s RX: {} Bit/s'.format((totaltx_3['total'] / testduration * packet_size * 8),
                                                               (totalrx_3['total'] / testduration * packet_size * 8)))
    totaltx_4 = stream4.get('tx_pkts')
    totalrx_4 = stream4.get('rx_pkts')
    print('Stream 4 TX: {} Bit/s RX: {} Bit/s'.format((totaltx_4['total'] / testduration * packet_size * 8),
                                                               (totalrx_4['total'] / testduration * packet_size * 8)))
except STLError as e:
    print(e)
 
finally:
    c.disconnect()

The protocol scale of TRex is significant, as it can handle millions of routes in a short time frame, ensuring high traffic rates in dynamic network environments.

TRex also supports dynamic filters, which optimize performance by efficiently managing high-speed traffic and integrating multiple routing protocols.

Overall, the tool capabilities provided by TRex include stateful and stateless functionalities, scalability, and an automation API, making it a comprehensive solution comparable to commercial offerings.
 

Conclusion

While preparing this tutorial, we ran and tested the DUT system with 4 streams, and then collected information on the streams and global statistics.

The operation described above was run using Python, which means that TRex can be used to automate the testing and debugging of network devices and software products. This can be done in a loop or by running sequential tests in Python.

So is TRex by Cisco better or worse than other similar traffic generators, and why? Let’s take the popular client-server program iperf as an example. In the TRex use case above, we described the stream setup and launch process. Both of these testing and debugging tools are good in their own ways: iperf is great for quick functionality checks on the fly, while TRex does a good job automating the testing and design of network devices and complex systems for the telecom industry. In the latter case, it is important to be able to configure multithreaded streams, so that each stream can be configured for a particular task and its output then analyzed. TRex also offers cross-flow support, enhancing the inspection of network traffic for specific protocols like RTSP and SIP.

TRex supports the creation of templates for almost any kind of traffic and can amplify them in order to generate large-scale DDoS attacks, including TCP-SYN, UDP and ICMP streams. The ability to generate massive traffic flows allows the user to simulate attacks from different clients and on multiple target servers. Additionally, TRex includes enhanced route refresh capability, which is crucial for managing efficient and dynamic routing processes, especially in environments handling large volumes of routing information.
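As an illustration of the traffic-template capability mentioned above, the sketch below defines a TCP-SYN stream whose source address is rewritten by the TRex field engine on every packet, so the flood appears to come from many clients. The field-engine classes (STLVmFlowVar, STLVmWrFlowVar, STLVmFixIpv4) are from trex_stl_lib.api; the address range and rate are placeholders:

from trex_stl_lib.api import *

# base TCP-SYN packet towards one target server (addresses are illustrative)
syn_pkt = Ether()/IP(src="16.0.0.1", dst="48.0.0.1")/TCP(dport=80, flags="S")

# field engine: increment the source IP over a range and fix the IPv4 checksum
vm = STLScVmRaw([
    STLVmFlowVar(name="ip_src", min_value="16.0.0.1", max_value="16.0.0.254", size=4, op="inc"),
    STLVmWrFlowVar(fv_name="ip_src", pkt_offset="IP.src"),
    STLVmFixIpv4(offset="IP"),
])

syn_flood = STLStream(packet=STLPktBuilder(pkt=syn_pkt, vm=vm),
                      mode=STLTXCont(pps=100000))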

So if you haven’t tried this tool out yet, keep it in mind. But if you have, feel free to share your examples and feedback in the comments. It would be interesting to find out what our fellow engineers think about TRex and how they’ve used it.
 

Write to us! info@promwad.com