Robot Library

bridge_domain module

Vpp l2bd forwarding setup

Set up a bridge domain between two interfaces on a VPP node; if MAC learning is off, set a static L2FIB entry on the second interface.


Set Interface State  ${node}  ${if1}  up
Set Interface State  ${node}  ${if2}  up
Vpp Add L2 Bridge Domain  ${node}  ${1}  ${if1}  ${if2}  ${learn}
Run Keyword If  ${learn} == ${FALSE}  Vpp Add L2fib Entry  ${node}  ${mac}  ${if2}  ${1}
All Vpp Interfaces Ready Wait  ${nodes}
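The learning-on/learning-off behavior the keyword above configures can be sketched (not the VPP implementation, just an illustrative model) as a MAC-to-interface forwarding table: with learning enabled, entries are populated from observed source MACs; with learning off, a static L2FIB entry must be installed up front.

```python
# Illustrative model of an L2 bridge domain's forwarding behavior.
class BridgeDomain:
    def __init__(self, learn=True):
        self.learn = learn
        self.l2fib = {}                   # MAC address -> outgoing interface

    def add_static_entry(self, mac, interface):
        # Equivalent role to "Vpp Add L2fib Entry" when learning is off.
        self.l2fib[mac] = interface

    def forward(self, src_mac, dst_mac, in_if, interfaces):
        if self.learn:
            self.l2fib[src_mac] = in_if   # learn the source MAC
        out = self.l2fib.get(dst_mac)
        if out is None:                   # unknown unicast: flood
            return [i for i in interfaces if i != in_if]
        return [out]

bd = BridgeDomain(learn=False)
bd.add_static_entry("08:00:27:00:00:02", "if2")
print(bd.forward("08:00:27:00:00:01", "08:00:27:00:00:02",
                 "if1", ["if1", "if2"]))   # -> ['if2']
```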

Path for 3-node BD-SHG testing is set

Compute the path for bridge domain split-horizon group testing on three given nodes with the following interconnections: TG - (2 links) - DUT1 - (1 link) - DUT2 - (2 links) - TG, and set the corresponding test case variables.

Arguments:
- ${tg_node} - TG node. Type: dictionary
- ${dut1_node} - DUT1 node. Type: dictionary
- ${dut2_node} - DUT2 node. Type: dictionary

Return: - No value returned

_NOTE:_ This KW sets following test case variables:
- ${tg_node} - TG node.
- ${tg_to_dut1_if1} - TG interface 1 towards DUT1.
- ${tg_to_dut1_if2} - TG interface 2 towards DUT1.
- ${tg_to_dut2_if1} - TG interface 1 towards DUT2.
- ${tg_to_dut2_if2} - TG interface 2 towards DUT2.
- ${dut1_node} - DUT1 node.
- ${dut1_to_tg_if1} - DUT1 interface 1 towards TG.
- ${dut1_to_tg_if2} - DUT1 interface 2 towards TG.
- ${dut1_to_dut2} - DUT1 interface towards DUT2.
- ${dut2_node} - DUT2 node.
- ${dut2_to_tg_if1} - DUT2 interface 1 towards TG.
- ${dut2_to_tg_if2} - DUT2 interface 2 towards TG.
- ${dut2_to_dut1} - DUT2 interface towards DUT1.

Example:

| Given Path for 3-node BD-SHG testing is set | ${nodes['TG']} | ${nodes['DUT1']} | ${nodes['DUT2']} |


Append Nodes  ${tg_node}  ${dut1_node}  ${tg_node}
Compute Path  always_same_link=${FALSE}
${tg_to_dut1_if1}  ${tmp}=  First Interface
${tg_to_dut1_if2}  ${tmp}=  Last Interface
${dut1_to_tg_if1}  ${tmp}=  First Ingress Interface
${dut1_to_tg_if2}  ${tmp}=  Last Egress Interface
Clear Path
Append Nodes  ${tg_node}  ${dut2_node}  ${tg_node}
Compute Path  always_same_link=${FALSE}
${tg_to_dut2_if1}  ${tmp}=  First Interface
${tg_to_dut2_if2}  ${tmp}=  Last Interface
${dut2_to_tg_if1}  ${tmp}=  First Ingress Interface
${dut2_to_tg_if2}  ${tmp}=  Last Egress Interface
Clear Path
Append Nodes  ${dut1_node}  ${dut2_node}
Compute Path
${dut1_to_dut2}  ${tmp}=  Next Interface
${dut2_to_dut1}  ${tmp}=  Next Interface
Set Test Variable  ${tg_to_dut1_if1}
Set Test Variable  ${tg_to_dut1_if2}
Set Test Variable  ${tg_to_dut2_if1}
Set Test Variable  ${tg_to_dut2_if2}
Set Test Variable  ${dut1_to_tg_if1}
Set Test Variable  ${dut1_to_tg_if2}
Set Test Variable  ${dut2_to_tg_if1}
Set Test Variable  ${dut2_to_tg_if2}
Set Test Variable  ${dut1_to_dut2}
Set Test Variable  ${dut2_to_dut1}
Set Test Variable  ${tg_node}
Set Test Variable  ${dut1_node}
Set Test Variable  ${dut2_node}

Interfaces in 3-node BD-SHG testing are up

Set UP state on interfaces in the 3-node path and wait until all interfaces are ready.

Arguments: - No arguments.

Return: - No value returned.

_NOTE:_ This KW uses test variables set in "Path for 3-node BD-SHG testing is set" KW.

Example:

| Path for 3-node BD-SHG testing is set | ${nodes['TG']} | ${nodes['DUT1']} | ${nodes['DUT2']} |
| Interfaces in 3-node BD-SHG testing are up |


Set Interface State  ${tg_node}  ${tg_to_dut1_if1}  up
Set Interface State  ${tg_node}  ${tg_to_dut1_if2}  up
Set Interface State  ${tg_node}  ${tg_to_dut2_if1}  up
Set Interface State  ${tg_node}  ${tg_to_dut2_if2}  up
Set Interface State  ${dut1_node}  ${dut1_to_tg_if1}  up
Set Interface State  ${dut1_node}  ${dut1_to_tg_if2}  up
Set Interface State  ${dut2_node}  ${dut2_to_tg_if1}  up
Set Interface State  ${dut2_node}  ${dut2_to_tg_if2}  up
Set Interface State  ${dut1_node}  ${dut1_to_dut2}  up
Set Interface State  ${dut2_node}  ${dut2_to_dut1}  up
Vpp Node Interfaces Ready Wait  ${dut1_node}
Vpp Node Interfaces Ready Wait  ${dut2_node}

Bridge domain on DUT node is created

Create bridge domain on given VPP node with defined learning status.

Arguments:
- ${dut_node} - DUT node. Type: dictionary
- ${bd_id} - Bridge domain ID. Type: integer
- ${learn} - Enable/disable MAC learn. Type: boolean, default value: ${TRUE}

Return: - No value returned

Example:

| Bridge domain on DUT node is created | ${nodes['DUT1']} | 2 |
| Bridge domain on DUT node is created | ${nodes['DUT1']} | 5 | learn=${FALSE} |


${learn} =  Set Variable If  ${learn} == ${TRUE}  ${1}  ${0}
Create L2 BD  ${dut_node}  ${bd_id}  learn=${learn}

Interface is added to bridge domain

Set given interface admin state to up and add this interface to required L2 bridge domain on defined VPP node.

Arguments:
- ${dut_node} - DUT node. Type: dictionary
- ${dut_if} - DUT node interface name. Type: string
- ${bd_id} - Bridge domain ID. Type: integer
- ${shg} - Split-horizon group ID. Type: integer, default value: 0

Return: - No value returned

Example:

| Interface is added to bridge domain | ${nodes['DUT2']} | GigabitEthernet0/8/0 | 3 |


Set Interface State  ${dut_node}  ${dut_if}  up
Add Interface To L2 BD  ${dut_node}  ${dut_if}  ${bd_id}  ${shg}

Destination port is added to L2FIB on DUT node

Create a static L2FIB entry for required destination port on defined interface and bridge domain ID of the given VPP node.

Arguments:
- ${dest_node} - Destination node. Type: dictionary
- ${dest_node_if} - Destination node interface name. Type: string
- ${vpp_node} - DUT node to add L2FIB entry on. Type: dictionary
- ${vpp_node_if} - DUT node interface name. Type: string
- ${bd_id} - Bridge domain ID. Type: integer

Return: - No value returned

Example:

| Destination port is added to L2FIB on DUT node | ${nodes['TG']} | eth1 | ${nodes['DUT2']} | GigabitEthernet0/8/0 | 3 |


${mac}=  Get Interface Mac  ${dest_node}  ${dest_node_if}
Vpp Add L2fib Entry  ${vpp_node}  ${mac}  ${vpp_node_if}  ${bd_id}

VM for Vhost L2BD forwarding is setup

Setup QEMU and start VM with two vhost interfaces.

Arguments:
- ${dut_node} - DUT node to start VM on. Type: dictionary
- ${sock1} - Socket path for first Vhost-User interface. Type: string
- ${sock2} - Socket path for second Vhost-User interface. Type: string
- ${qemu_name} - Qemu instance name by which the object will be accessed (Optional). Type: string

_NOTE:_ This KW sets following test case variable: - ${${qemu_name}} - VM node info. Type: dictionary

Example:

| VM for Vhost L2BD forwarding is setup | ${nodes['DUT1']} | /tmp/sock1 | /tmp/sock2 |
| VM for Vhost L2BD forwarding is setup | ${nodes['DUT2']} | /tmp/sock1 | /tmp/sock2 | qemu_instance_2 |


Run Keyword Unless  "${qemu_name}" == "vm_node"  Import Library  resources.libraries.python.QemuUtils  WITH NAME  ${qemu_name}
Set Test Variable  ${${qemu_name}}  ${None}
${qemu_set_node}=  Run Keyword If  "${qemu_name}" == "vm_node"  Set Variable  Qemu Set Node  ELSE  Replace Variables  ${qemu_name}.Qemu Set Node
Run keyword  ${qemu_set_node}  ${dut_node}
${qemu_add_vhost}=  Run Keyword If  "${qemu_name}" == "vm_node"  Set Variable  Qemu Add Vhost User If  ELSE  Replace Variables  ${qemu_name}.Qemu Add Vhost User If
Run keyword  ${qemu_add_vhost}  ${sock1}
Run keyword  ${qemu_add_vhost}  ${sock2}
${qemu_start}=  Run Keyword If  "${qemu_name}" == "vm_node"  Set Variable  Qemu Start  ELSE  Replace Variables  ${qemu_name}.Qemu Start
${vm}=  Run keyword  ${qemu_start}
${br}=  Set Variable  br0
${vhost1}=  Get Vhost User If Name By Sock  ${vm}  ${sock1}
${vhost2}=  Get Vhost User If Name By Sock  ${vm}  ${sock2}
Linux Add Bridge  ${vm}  ${br}  ${vhost1}  ${vhost2}
Set Interface State  ${vm}  ${vhost1}  up  if_type=name
Set Interface State  ${vm}  ${vhost2}  up  if_type=name
Set Interface State  ${vm}  ${br}  up  if_type=name
Set Test Variable  ${${qemu_name}}  ${vm}
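The `Import Library ... WITH NAME ${qemu_name}` pattern above gives each VM its own QemuUtils library instance, so keyword names like `${qemu_name}.Qemu Start` dispatch to the right object. A minimal Python sketch of that per-instance idea (the class here is a stand-in, not the real QemuUtils API):

```python
# Hypothetical stand-in illustrating per-name library instances.
class QemuUtils:
    def __init__(self):
        self.node = None
        self.vhost_socks = []

    def qemu_set_node(self, node):
        self.node = node

    def qemu_add_vhost_user_if(self, sock):
        self.vhost_socks.append(sock)

instances = {}

def get_instance(name):
    # "Import Library ... WITH NAME <name>": create or reuse a named instance.
    return instances.setdefault(name, QemuUtils())

vm1 = get_instance("qemu_instance_1")
vm2 = get_instance("qemu_instance_2")
vm1.qemu_add_vhost_user_if("/tmp/sock1")
# Instances are independent: vm1 has one vhost socket, vm2 has none.
print(len(vm1.vhost_socks), len(vm2.vhost_socks))   # -> 1 0
```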

VPP Vhost interfaces for L2BD forwarding are setup

Create two Vhost-User interfaces on defined VPP node.

Arguments:
- ${dut_node} - DUT node. Type: dictionary
- ${sock1} - Socket path for first Vhost-User interface. Type: string
- ${sock2} - Socket path for second Vhost-User interface. Type: string
- ${vhost_if1} - Name of the first Vhost-User interface (Optional). Type: string
- ${vhost_if2} - Name of the second Vhost-User interface (Optional). Type: string

_NOTE:_ This KW sets following test case variables:
- ${${vhost_if1}} - First Vhost-User interface.
- ${${vhost_if2}} - Second Vhost-User interface.

Example:

| VPP Vhost interfaces for L2BD forwarding are setup | ${nodes['DUT1']} | /tmp/sock1 | /tmp/sock2 |
| VPP Vhost interfaces for L2BD forwarding are setup | ${nodes['DUT2']} | /tmp/sock1 | /tmp/sock2 | dut2_vhost_if1 | dut2_vhost_if2 |


${vhost_1}=  Vpp Create Vhost User Interface  ${dut_node}  ${sock1}
${vhost_2}=  Vpp Create Vhost User Interface  ${dut_node}  ${sock2}
Set Interface State  ${dut_node}  ${vhost_1}  up
Set Interface State  ${dut_node}  ${vhost_2}  up
Set Test Variable  ${${vhost_if1}}  ${vhost_1}
Set Test Variable  ${${vhost_if2}}  ${vhost_2}

counters module

Clear interface counters on all vpp nodes in topology

Clear interface counters on all VPP nodes in topology


Vpp Nodes Clear Interface Counters  ${nodes}

Vpp dump stats

Dump stats table on VPP node


Vpp Dump Stats Table  ${node}

Vpp get interface ipv6 counter

Return IPv6 statistics for node interface


${ipv6_counter}=  Vpp Get Ipv6 Interface Counter  ${node}  ${interface}

Check ipv4 interface counter

Check that the IPv4 interface counter has the expected value


${ipv4_counter}=  Vpp get ipv4 interface counter  ${node}  ${interface}
Should Be Equal  ${ipv4_counter}  ${value}

Show statistics on all DUTs

Show VPP statistics on all DUTs


Sleep  10  Waiting for statistics to be collected
${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp show stats  ${nodes['${dut}']}
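The `Get Matches  ${nodes}  DUT*` plus `: FOR` pattern used here (and in the keywords below) selects DUT nodes from the topology dictionary by glob. A sketch in Python, with an illustrative topology dictionary:

```python
# Glob-match node names against a topology dict, as "Get Matches" does.
import fnmatch

nodes = {"TG": {"host": "10.0.0.1"},       # illustrative topology
         "DUT1": {"host": "10.0.0.2"},
         "DUT2": {"host": "10.0.0.3"}}

duts = sorted(fnmatch.filter(nodes, "DUT*"))
print(duts)                                # -> ['DUT1', 'DUT2']

for dut in duts:
    host = nodes[dut]["host"]              # run the per-DUT action here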

Vpp show stats

Show [error, hardware, interface] stats


Vpp Show Errors  ${node}
Vpp Show Hardware Detail  ${node}
Vpp Show Runtime  ${node}

Clear all counters on all DUTs

Clear runtime, interface, hardware and error counters on all DUTs with VPP instance


Clear runtime counters on all DUTs
Clear interface counters on all DUTs
Clear hardware counters on all DUTs
Clear errors counters on all DUTs

Clear runtime counters on all DUTs

Clear VPP runtime counters on all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp clear runtime  ${nodes['${dut}']}

Clear interface counters on all DUTs

Clear VPP interface counters on all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp clear interface counters  ${nodes['${dut}']}

Clear hardware counters on all DUTs

Clear VPP hardware counters on all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp clear hardware counters  ${nodes['${dut}']}

Clear errors counters on all DUTs

Clear VPP errors counters on all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp clear errors counters  ${nodes['${dut}']}

Show runtime counters on all DUTs

Show VPP runtime counters on all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp show runtime  ${nodes['${dut}']}

default module

Setup all DUTs before test

Setup all DUTs in topology before test execution.


Setup All DUTs  ${nodes}

Setup all TGs before traffic script

Prepare all TGs before traffic scripts execution.


All TGs Set Interface Default Driver  ${nodes}

Show Vpp Version On All DUTs

Show VPP version verbose on all DUTs.


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp show version verbose  ${nodes['${dut}']}

Show Vpp Errors On All DUTs

Show VPP errors verbose on all DUTs.


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp Show Errors  ${nodes['${dut}']}

Show Vpp Trace Dump On All DUTs

Save API trace and dump output on all DUTs.


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp api trace save  ${nodes['${dut}']}
\    Vpp api trace dump  ${nodes['${dut}']}

Show Vpp Vhost On All DUTs

Show Vhost User on all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Vpp Show Vhost  ${nodes['${dut}']}

Setup Scheduler Policy for Vpp On All DUTs

Set realtime scheduling policy (SCHED_RR) with priority 1 on all VPP worker threads on all DUTs.


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Set VPP Scheduling rr  ${nodes['${dut}']}

Add worker threads and rxqueues to all DUTs

Setup worker threads and rxqueues in VPP startup configuration to all DUTs

Arguments:
- ${cpu} - CPU configuration. Type: string
- ${rxqueues} - rxqueues configuration. Type: string

Example:

| Add worker threads and rxqueues to all DUTs | main-core 0 | rxqueues 2 |


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Add CPU config  ${nodes['${dut}']}  ${cpu}
\    Add rxqueues config  ${nodes['${dut}']}  ${rxqueues}

Add all PCI devices to all DUTs

Add all available PCI devices from topology file to VPP startup configuration to all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Add PCI all devices  ${nodes['${dut}']}

Add PCI device to DUT

Add PCI device to VPP startup configuration to DUT specified as argument

Arguments:
- ${node} - DUT node. Type: dictionary
- ${pci_address} - PCI address. Type: string

Example:

| Add PCI device to DUT | ${nodes['DUT1']} | 0000:00:00.0 |


Add PCI device  ${node}  ${pci_address}

Add Heapsize Config to all DUTs

Add Heapsize Config to VPP startup configuration to all DUTs.

Arguments:
- ${heapsize} - Heapsize string (5G, 200M, ...). Type: string


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Add Heapsize Config  ${nodes['${dut}']}  ${heapsize}

Add No Multi Seg to all DUTs

Add No Multi Seg to VPP startup configuration to all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Add No Multi Seg Config  ${nodes['${dut}']}

Add Enable Vhost User to all DUTs

Add Enable Vhost User to VPP startup configuration to all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Add Enable Vhost User Config  ${nodes['${dut}']}

Remove startup configuration of VPP from all DUTs

Remove VPP startup configuration from all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Remove All PCI Devices  ${nodes['${dut}']}
\    Remove All CPU Config  ${nodes['${dut}']}
\    Remove Socketmem Config  ${nodes['${dut}']}
\    Remove Heapsize Config  ${nodes['${dut}']}
\    Remove Rxqueues Config  ${nodes['${dut}']}
\    Remove No Multi Seg Config  ${nodes['${dut}']}
\    Remove Enable Vhost User Config  ${nodes['${dut}']}

Setup default startup configuration of VPP on all DUTs

Setup default startup configuration of VPP to all DUTs


Remove startup configuration of VPP from all DUTs
Add '1' worker threads and rxqueues '1' in 3-node single-link topo
Add all PCI devices to all DUTs
Apply startup configuration on all VPP DUTs

Apply startup configuration on all VPP DUTs

Apply startup configuration of VPP and restart VPP on all DUTs


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Apply Config  ${nodes['${dut}']}
Update All Interface Data On All Nodes  ${nodes}  skip_tg=${TRUE}

Save VPP PIDs

Get PIDs of VPP processes from all DUTs in topology and set it as a test variable. The PIDs are stored as dictionary items where the key is the host and the value is the PID.


${setup_vpp_pids}=  Get VPP PIDs  ${nodes}
Set Test Variable  ${setup_vpp_pids}

Check VPP PID in Teardown

Check if the VPP PIDs on all DUTs are the same at the end of the test as they were at the beginning. If they are not, only a message is printed to console and log. The test will not fail.


${teardown_vpp_pids}=  Get VPP PIDs  ${nodes}
${err_msg}=  Catenate  \nThe VPP PIDs are not equal!\nTest Setup VPP PIDs:  ${setup_vpp_pids}\nTest Teardown VPP PIDs: ${teardown_vpp_pids}
${rc}  ${msg}=  Run keyword and ignore error  Dictionaries Should Be Equal  ${setup_vpp_pids}  ${teardown_vpp_pids}
Run Keyword And Return If  '${rc}'=='FAIL'  Log  ${err_msg}  console=yes  level=WARN
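The non-failing comparison above can be sketched in Python: compare the per-host PID maps from setup and teardown and produce a warning message (rather than an error) on mismatch. Hostnames and PIDs are made up for illustration.

```python
# Compare setup vs. teardown VPP PID dictionaries; warn, don't fail.
def check_vpp_pids(setup_pids, teardown_pids):
    """Return a warning string on mismatch, or None when PIDs match."""
    if setup_pids == teardown_pids:
        return None
    return ("The VPP PIDs are not equal!\n"
            f"Test Setup VPP PIDs: {setup_pids}\n"
            f"Test Teardown VPP PIDs: {teardown_pids}")

setup = {"10.0.0.2": 1234, "10.0.0.3": 5678}      # illustrative values
teardown = {"10.0.0.2": 1234, "10.0.0.3": 9999}   # VPP restarted on one DUT
warning = check_vpp_pids(setup, teardown)
print(warning is not None)                        # -> True
```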

Func Test Setup

Common test setup for functional tests.


Setup all DUTs before test
Save VPP PIDs
Setup all TGs before traffic script
Update All Interface Data On All Nodes  ${nodes}

Func Test Teardown

Common test teardown for functional tests.


Show Packet Trace on All DUTs  ${nodes}
Show vpp trace dump on all DUTs
Vpp Show Errors On All DUTs  ${nodes}
Check VPP PID in Teardown

dhcp_client module

Check DHCP DISCOVER header

Check if DHCP DISCOVER message contains all required fields.

Arguments:
- tg_node - TG node. Type: dictionary
- interface - TG interface where to listen for DHCP DISCOVER message. Type: string
- src_mac - DHCP client MAC address. Type: string
- hostname - DHCP client hostname (Optional, Default=""; if not specified, the hostname is not checked). Type: string

Return: - No value returned.

Example:

| Check DHCP DISCOVER header | ${nodes['TG']} | eth2 | 08:00:27:66:b8:57 |
| Check DHCP DISCOVER header | ${nodes['TG']} | eth2 | 08:00:27:66:b8:57 | client-hostname |


${interface_name}=  Get interface name  ${tg_node}  ${interface}
${args}=  Catenate  --rx_if  ${interface_name}  --rx_src_mac  ${src_mac}
${args}=  Run Keyword If  "${hostname}" == ""  Set Variable  ${args}  ELSE  Catenate  ${args}  --hostname  ${hostname}
Run Traffic Script On Node  dhcp/check_dhcp_discover.py  ${tg_node}  ${args}

Check DHCP REQUEST after OFFER

Check if DHCP REQUEST message contains all required fields. DHCP REQUEST should be sent by a client after a DHCP OFFER message sent by the server.

Arguments:
- tg_node - TG node. Type: dictionary
- tg_interface - TG interface where to listen for DHCP DISCOVER, send DHCP OFFER and listen for DHCP REQUEST messages. Type: string
- server_mac - DHCP server MAC address. Type: string
- server_ip - DHCP server IP address. Type: string
- client_mac - DHCP client MAC address. Type: string
- client_ip - IP address that should be offered to client. Type: string
- client_mask - IP netmask that should be offered to client. Type: string
- hostname - DHCP client hostname (Optional, Default=""; if not specified, the hostname is not checked). Type: string
- offer_xid - Transaction ID (Optional, Default=""; if not specified, the xid field in DHCP OFFER is the same as in DHCP DISCOVER message). Type: integer

Return: - No value returned.

Raises: - DHCP REQUEST Rx timeout - if no DHCP REQUEST is received.

Example:

| Check DHCP REQUEST after OFFER | ${nodes['TG']} | eth2 | 08:00:27:66:b8:57 | 192.168.23.1 | 08:00:27:46:2b:4c | 192.168.23.10 | 255.255.255.0 |

| Run Keyword And Expect Error | DHCP REQUEST Rx timeout | Check DHCP REQUEST after OFFER | ${nodes['TG']} | eth2 | 08:00:27:66:b8:57 | 192.168.23.1 | 08:00:27:46:2b:4c | 192.168.23.10 | 255.255.255.0 | offer_xid=11113333 |


${tg_interface_name}=  Get interface name  ${tg_node}  ${tg_interface}
${args}=  Catenate  --rx_if  ${tg_interface_name}  --server_mac  ${server_mac}  --server_ip  ${server_ip}  --client_mac  ${client_mac}  --client_ip  ${client_ip}  --client_mask  ${client_mask}
${args}=  Run Keyword If  "${hostname}" == ""  Set Variable  ${args}  ELSE  Catenate  ${args}  --hostname  ${hostname}
${args}=  Run Keyword If  "${offer_xid}" == ""  Set Variable  ${args}  ELSE  Catenate  ${args}  --offer_xid  ${offer_xid}
Run Traffic Script On Node  dhcp/check_dhcp_request.py  ${tg_node}  ${args}
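The keyword builds the traffic-script argument string incrementally, appending the optional flags only when a value was supplied. A Python sketch of that assembly (flag names mirror the Robot code above; the values are illustrative):

```python
# Build traffic-script arguments, adding optional flags conditionally.
def build_dhcp_request_args(rx_if, server_mac, server_ip, client_mac,
                            client_ip, client_mask,
                            hostname="", offer_xid=""):
    args = ["--rx_if", rx_if, "--server_mac", server_mac,
            "--server_ip", server_ip, "--client_mac", client_mac,
            "--client_ip", client_ip, "--client_mask", client_mask]
    if hostname:                       # optional: verify client hostname
        args += ["--hostname", hostname]
    if offer_xid:                      # optional: override xid in DHCP OFFER
        args += ["--offer_xid", str(offer_xid)]
    return args

args = build_dhcp_request_args("eth2", "08:00:27:66:b8:57", "192.168.23.1",
                               "08:00:27:46:2b:4c", "192.168.23.10",
                               "255.255.255.0", hostname="client-hostname")
print("--hostname" in args, "--offer_xid" in args)   # -> True False
```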

Send IP configuration to client via DHCP

Run script that sends IP configuration to the DHCP client.

Arguments:
- tg_node - TG node. Type: dictionary
- tg_interface - TG interface where to listen for DHCP DISCOVER, send DHCP OFFER and DHCP ACK after DHCP REQUEST messages. Type: string
- server_mac - DHCP server MAC address. Type: string
- server_ip - DHCP server IP address. Type: string
- client_ip - IP address that is offered to client. Type: string
- client_mask - IP netmask that is offered to client. Type: string
- lease_time - IP lease time in seconds. Type: integer

Return: - No value returned.

Example:

| Send IP configuration to client via DHCP | ${nodes['TG']} | eth2 | 08:00:27:66:b8:57 | 192.168.23.1 | 192.168.23.10 | 255.255.255.0 | 86400 |


${tg_interface_name}=  Get interface name  ${tg_node}  ${tg_interface}
${args}=  Catenate  --rx_if  ${tg_interface_name}  --server_mac  ${server_mac}  --server_ip  ${server_ip}  --client_ip  ${client_ip}  --client_mask  ${client_mask}  --lease_time  ${lease_time}
Run Traffic Script On Node  dhcp/check_dhcp_request_ack.py  ${tg_node}  ${args}

dhcp_proxy module

Send DHCP Messages

Send and receive DHCP messages between client and server through DHCP proxy.

Arguments:
- tg_node - TG node. Type: dictionary
- tg_interface1 - TG interface. Type: string
- tg_interface2 - TG interface. Type: string
- server_ip - DHCP server IP address. Type: string
- server_mac - DHCP server MAC address. Type: string
- client_ip - Client IP address. Type: string
- client_mac - Client MAC address. Type: string
- proxy_ip - DHCP proxy IP address. Type: string

Return: - No value returned.

Example:

| Send DHCP Messages | ${nodes['TG']} | eth3 | eth4 | 192.168.0.100 | 08:00:27:cc:4f:54 | 172.16.0.2 | 08:00:27:64:18:d2 | 172.16.0.1 |


${tg_interface_name1}=  Get interface name  ${tg_node}  ${tg_interface1}
${tg_interface_name2}=  Get interface name  ${tg_node}  ${tg_interface2}
${args}=  Catenate  --tx_if  ${tg_interface_name1}  --rx_if  ${tg_interface_name2}  --server_ip  ${server_ip}  --server_mac  ${server_mac}  --client_ip  ${client_ip}  --client_mac  ${client_mac}  --proxy_ip  ${proxy_ip}
Run Traffic Script On Node  dhcp/send_and_check_proxy_messages.py  ${tg_node}  ${args}

Send DHCP DISCOVER

Send and receive DHCP DISCOVER.

Arguments:
- tg_node - TG node. Type: dictionary
- tg_interface1 - TG interface. Type: string
- tg_interface2 - TG interface. Type: string
- tx_src_ip - Source address of DHCP DISCOVER packet. Type: string
- tx_dst_ip - Destination address of DHCP DISCOVER packet. Type: string

Return: - No value returned.

Example:

| Send DHCP DISCOVER | ${nodes['TG']} | eth3 | eth4 | 0.0.0.0 | 255.255.255.255 |


${tg_interface_name1}=  Get interface name  ${tg_node}  ${tg_interface1}
${tg_interface_name2}=  Get interface name  ${tg_node}  ${tg_interface2}
${args}=  Catenate  --tx_if  ${tg_interface_name1}  --rx_if  ${tg_interface_name2}  --tx_src_ip  ${tx_src_ip}  --tx_dst_ip  ${tx_dst_ip}
Run Traffic Script On Node  dhcp/send_and_check_proxy_discover.py  ${tg_node}  ${args}

Send DHCP DISCOVER should fail

Send DHCP DISCOVER and verify that it is not received.

Arguments:
- tg_node - TG node. Type: dictionary
- tg_interface1 - TG interface. Type: string
- tg_interface2 - TG interface. Type: string
- tx_src_ip - Source address of DHCP DISCOVER packet. Type: string
- tx_dst_ip - Destination address of DHCP DISCOVER packet. Type: string

Return: - No value returned.

Example:

| Send DHCP DISCOVER should fail | ${nodes['TG']} | eth3 | eth4 | 0.0.0.0 | 255.255.255.1 |


${tg_interface_name1}=  Get interface name  ${tg_node}  ${tg_interface1}
${tg_interface_name2}=  Get interface name  ${tg_node}  ${tg_interface2}
${args}=  Catenate  --tx_if  ${tg_interface_name1}  --rx_if  ${tg_interface_name2}  --tx_src_ip  ${tx_src_ip}  --tx_dst_ip  ${tx_dst_ip}
Run Keyword And Expect Error  DHCP DISCOVER Rx timeout  Run Traffic Script On Node  dhcp/send_and_check_proxy_discover.py  ${tg_node}  ${args}

Send DHCPv6 Messages

Send and receive DHCPv6 messages between client and server through DHCPv6 proxy.

Arguments:
- tg_node - TG node. Type: dictionary
- tg_interface1 - TG interface. Type: string
- tg_interface2 - TG interface. Type: string
- proxy_ip - DHCPv6 proxy IP address. Type: string
- proxy_mac - Proxy MAC address. Type: string
- server_ip - DHCPv6 server IP address. Type: string
- server_mac - Server MAC address. Type: string
- client_mac - Client MAC address. Type: string
- proxy_to_server_mac - MAC address of proxy interface connected to server. Type: string

Return: - No value returned.

Example:

| Send DHCPv6 Messages | ${nodes['TG']} | eth3 | eth4 | 3ffe:62::1 | 08:00:27:54:59:f9 | 3ffe:63::2 | 08:00:27:cc:4f:54 | 08:00:27:64:18:d2 | 08:00:27:c9:6a:d5 |


${tg_interface_name1}=  Get interface name  ${tg_node}  ${tg_interface1}
${tg_interface_name2}=  Get interface name  ${tg_node}  ${tg_interface2}
${args}=  Catenate  --tx_if  ${tg_interface_name1}  --rx_if  ${tg_interface_name2}  --proxy_ip  ${proxy_ip}  --proxy_mac  ${proxy_mac}  --server_ip  ${server_ip}  --server_mac  ${server_mac}  --client_mac  ${client_mac}  --proxy_to_server_mac  ${proxy_to_server_mac}
Run Traffic Script On Node  dhcp/send_dhcp_v6_messages.py  ${tg_node}  ${args}

double_qemu_setup module

Setup QEMU Vhost and Run

Setup Qemu with 4 vhost-user interfaces and 4 namespaces. Each call creates a different object instance.

Arguments:
- dut_node - Node where to setup qemu. Type: dict
- sock1 - Socket path for first Vhost-User interface. Type: string
- sock2 - Socket path for second Vhost-User interface. Type: string
- sock3 - Socket path for third Vhost-User interface. Type: string
- sock4 - Socket path for fourth Vhost-User interface. Type: string
- ip1 - IP address for namespace 1. Type: string
- ip2 - IP address for namespace 2. Type: string
- ip3 - IP address for namespace 3. Type: string
- ip4 - IP address for namespace 4. Type: string
- prefix_length - IP prefix length. Type: int
- qemu_name - Qemu instance name by which the object will be accessed. Type: string
- mac_ID - MAC address ID used to differentiate qemu instances and namespaces assigned to them. Type: string

Example:

| Setup QEMU Vhost And Run | ${nodes['DUT1']} | /tmp/sock1 | /tmp/sock2 | /tmp/sock3 | /tmp/sock4 | 16.0.0.1 | 16.0.0.2 | 16.0.0.3 | 16.0.0.4 | 24 | qemu_instance_1 | 06 |


Import Library  resources.libraries.python.QemuUtils  WITH NAME  ${qemu_name}
${qemu_add_vhost}=  Replace Variables  ${qemu_name}.Qemu Add Vhost User If
${qemu_set_node}=  Replace Variables  ${qemu_name}.Qemu Set Node
${qemu_start}=  Replace Variables  ${qemu_name}.Qemu Start
Run keyword  ${qemu_add_vhost}  ${sock1}  mac=52:54:00:00:${mac_ID}:01
Run keyword  ${qemu_add_vhost}  ${sock2}  mac=52:54:00:00:${mac_ID}:02
Run keyword  ${qemu_add_vhost}  ${sock3}  mac=52:54:00:00:${mac_ID}:03
Run keyword  ${qemu_add_vhost}  ${sock4}  mac=52:54:00:00:${mac_ID}:04
Run keyword  ${qemu_set_node}  ${dut_node}
${vm}=  Run keyword  ${qemu_start}
${vhost1}=  Get Vhost User If Name By Sock  ${vm}  ${sock1}
${vhost2}=  Get Vhost User If Name By Sock  ${vm}  ${sock2}
${vhost3}=  Get Vhost User If Name By Sock  ${vm}  ${sock3}
${vhost4}=  Get Vhost User If Name By Sock  ${vm}  ${sock4}
Set Interface State  ${vm}  ${vhost1}  up  if_type=name
Set Interface State  ${vm}  ${vhost2}  up  if_type=name
Set Interface State  ${vm}  ${vhost3}  up  if_type=name
Set Interface State  ${vm}  ${vhost4}  up  if_type=name
Setup Network Namespace  ${vm}  nmspace1  ${vhost1}  ${ip1}  ${prefix_length}
Setup Network Namespace  ${vm}  nmspace2  ${vhost2}  ${ip2}  ${prefix_length}
Setup Network Namespace  ${vm}  nmspace3  ${vhost3}  ${ip3}  ${prefix_length}
Setup Network Namespace  ${vm}  nmspace4  ${vhost4}  ${ip4}  ${prefix_length}
Set Test Variable  ${${qemu_name}}  ${vm}
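The vhost MACs assigned above follow the pattern `52:54:00:00:<mac_ID>:<if_no>`, so each QEMU instance owns a distinct, predictable MAC block (`52:54:00` is the conventional QEMU/KVM locally administered prefix). A small helper sketching that scheme:

```python
# Generate per-instance vhost MAC addresses as in the keyword above.
def vhost_mac(mac_id, if_index):
    """mac_id differentiates QEMU instances; if_index numbers the vhost IFs."""
    return f"52:54:00:00:{mac_id}:{if_index:02x}"

macs = [vhost_mac("06", i) for i in range(1, 5)]
print(macs[0], macs[-1])   # -> 52:54:00:00:06:01 52:54:00:00:06:04
```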

Qemu Teardown

Stop a specific qemu instance running on ${dut_node}. ${vm} is the VM node info dictionary returned by qemu_start, or None.

Arguments:
- dut_node - Node where to clean qemu. Type: dict
- vm - VM node info dictionary. Type: dictionary
- qemu_name - Qemu instance name. Type: string

Example:

| Qemu Teardown | ${node['DUT1']} | ${vm} | qemu_node_1 |


${set_node}=  Replace Variables  ${qemu_name}.Qemu Set Node
${kill}=  Replace Variables  ${qemu_name}.Qemu Kill
${clear_socks}=  Replace Variables  ${qemu_name}.Qemu Clear Socks
Run Keyword  ${set_node}  ${dut_node}
Run Keyword  ${kill}
Run Keyword  ${clear_socks}
Run Keyword If  ${vm} is not None  Disconnect  ${vm}

gre module

GRE tunnel interface is created and up

Create GRE tunnel interface on defined VPP node and put the interface to UP state.

Arguments:
- dut_node - DUT node where to create GRE tunnel. Type: dictionary
- source_ip_address - GRE tunnel source IP address. Type: string
- destination_ip_address - GRE tunnel destination IP address. Type: string

Return:
- name - Name of created GRE tunnel interface. Type: string
- index - SW interface index of created GRE tunnel interface. Type: integer

Example:

| ${gre_name} | ${gre_index}= | GRE tunnel interface is created and up | ${dut} | 192.0.1.1 | 192.0.1.2 |


${name}  ${index}=  Create GRE Tunnel Interface  ${dut_node}  ${source_ip_address}  ${destination_ip_address}
Set Interface State  ${dut_node}  ${index}  up

Send ICMPv4 and check received GRE header

Send ICMPv4 packet and check if the received packet contains correct GRE, IP, MAC headers.

Arguments:
- tg_node - Node where to run traffic script. Type: dictionary
- tx_if - Interface from which to send the ICMPv4 packet. Type: string
- rx_if - Interface where to receive GRE packet. Type: string
- tx_dst_mac - Destination MAC address of ICMP packet. Type: string
- rx_dst_mac - Expected destination MAC address of GRE packet. Type: string
- inner_src_ip - Source IP address of ICMP packet. Type: string
- inner_dst_ip - Destination IP address of ICMP packet. Type: string
- outer_src_ip - Source IP address of GRE packet. Type: string
- outer_dst_ip - Destination IP address of GRE packet. Type: string

Return: - No value returned

Example:

| Send ICMPv4 and check received GRE header | ${tg_node} | ${tg_to_dut_if1} | ${tg_to_dut_if2} | ${tx_dst_mac} | ${rx_dst_mac} | ${net1_host_address} | ${net2_host_address} | ${dut1_ip_address} | ${dut2_ip_address} |


${tx_if_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_if_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_if_name}  --rx_if  ${rx_if_name}  --tx_dst_mac  ${tx_dst_mac}  --rx_dst_mac  ${rx_dst_mac}  --inner_src_ip  ${inner_src_ip}  --inner_dst_ip  ${inner_dst_ip}  --outer_src_ip  ${outer_src_ip}  --outer_dst_ip  ${outer_dst_ip}
Run Traffic Script On Node  send_icmp_check_gre_headers.py  ${tg_node}  ${args}

Send GRE and check received ICMPv4 header

Send an ICMPv4 packet encapsulated into GRE and check the IP and MAC headers of the received packet.

Arguments:
- tg_node - Node where to run traffic script. Type: dictionary
- tx_if - Interface from which to send the ICMPv4 packet. Type: string
- rx_if - Interface where to receive GRE packet. Type: string
- tx_dst_mac - Destination MAC address of GRE packet. Type: string
- rx_dst_mac - Expected destination MAC address of ICMP packet. Type: string
- inner_src_ip - Source IP address of ICMP packet. Type: string
- inner_dst_ip - Destination IP address of ICMP packet. Type: string
- outer_src_ip - Source IP address of GRE packet. Type: string
- outer_dst_ip - Destination IP address of GRE packet. Type: string

Return: - No value returned

Example:

| Send GRE and check received ICMPv4 header | ${tg_node} | ${tg_to_dut_if2} | ${tg_to_dut_if1} | ${tx_dst_mac} | ${rx_dst_mac} | ${net2_host_address} | ${net1_host_address} | ${dut2_ip_address} | ${dut1_ip_address} |


${tx_if_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_if_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_if_name}  --rx_if  ${rx_if_name}  --tx_dst_mac  ${tx_dst_mac}  --rx_dst_mac  ${rx_dst_mac}  --inner_src_ip  ${inner_src_ip}  --inner_dst_ip  ${inner_dst_ip}  --outer_src_ip  ${outer_src_ip}  --outer_dst_ip  ${outer_dst_ip}
Run Traffic Script On Node  send_gre_check_icmp_headers.py  ${tg_node}  ${args}

Send GRE and check received GRE header

Send an IPv4 UDP packet encapsulated into GRE and check if the received packet contains correct MAC, GRE, IP, UDP headers.

Arguments:
- tg_node - Node where to run traffic script. Type: dictionary
- tx_if - Interface from which to send the GRE packet. Type: string
- rx_if - Interface where to receive GRE packet. Type: string
- tx_dst_mac - Destination MAC address of transferred packet. Type: string
- tx_src_mac - Source MAC address of transferred packet. Type: string
- tx_outer_dst_ip - Destination IP address of GRE packet. Type: string
- tx_outer_src_ip - Source IP address of GRE packet. Type: string
- tx_inner_dst_ip - Destination IP address of UDP packet. Type: string
- tx_inner_src_ip - Source IP address of UDP packet. Type: string
- rx_dst_mac - Expected destination MAC address. Type: string
- rx_src_mac - Expected source MAC address. Type: string
- rx_outer_dst_ip - Expected destination IP address of received GRE packet. Type: string
- rx_outer_src_ip - Expected source IP address of received GRE packet. Type: string

__Note:__ rx_inner_dst_ip and rx_inner_src_ip should be the same as the transmitted inner addresses

Return: - No value returned

Example: | Send GRE and check received GRE header | ${tg_node} | port3 | port3 | 08:00:27:f3:be:f0 | 08:00:27:46:2b:4c | 10.0.0.1 | 10.0.0.2 | 192.168.3.100 | 192.168.2.100 | 08:00:27:46:2b:4c | 08:00:27:f3:be:f0 | 10.0.0.3 | 10.0.0.1 |


${tx_if_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_if_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_if_name}  --rx_if  ${rx_if_name}  --tx_dst_mac  ${tx_dst_mac}  --tx_src_mac  ${tx_src_mac}  --tx_outer_dst_ip  ${tx_outer_dst_ip}  --tx_outer_src_ip  ${tx_outer_src_ip}  --tx_inner_dst_ip  ${tx_inner_dst_ip}  --tx_inner_src_ip  ${tx_inner_src_ip}  --rx_dst_mac  ${rx_dst_mac}  --rx_src_mac  ${rx_src_mac}  --rx_outer_dst_ip  ${rx_outer_dst_ip}  --rx_outer_src_ip  ${rx_outer_src_ip}
Run Traffic Script On Node  send_gre_check_gre_headers.py  ${tg_node}  ${args}

interfaces module

VPP reports interfaces on


VPP reports interfaces through VAT on  ${node}

Setup MTU on TG based on MTU on DUT

Type of the tg_node must be TG and dut_node must be DUT


Append Nodes  ${tg_node}  ${dut_node}
Compute Path
${tg_port}  ${tg_node}=  First Interface
${dut_port}  ${dut_node}=  Last Interface
${mtu}=  Get Interface MTU  ${dut_node}  ${dut_port}
${eth_mtu}=  Evaluate  ${mtu} - 14 - 4
Set Interface Ethernet MTU  ${tg_node}  ${tg_port}  ${eth_mtu}
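The `- 14 - 4` arithmetic above subtracts the 14-byte Ethernet header and the 4-byte frame check sequence from the DUT interface MTU. A one-function sketch of the same calculation:

```python
ETH_HEADER_LEN = 14  # dst MAC (6) + src MAC (6) + EtherType (2)
ETH_CRC_LEN = 4      # frame check sequence

def tg_eth_mtu(dut_interface_mtu: int) -> int:
    """Mirror the keyword's arithmetic: L2 payload MTU for the TG."""
    return dut_interface_mtu - ETH_HEADER_LEN - ETH_CRC_LEN

# A DUT interface MTU of 1518 yields the classic 1500-byte payload MTU.
assert tg_eth_mtu(1518) == 1500
```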

ipsec module

IPsec Generate Keys

Generate keys for IPsec.

Arguments: - crypto_alg - Encryption algorithm. Type: enum - integ_alg - Integrity algorithm. Type: enum

_NOTE:_ This KW sets following test case variables: - encr_key - Encryption key. Type: string - auth_key - Integrity key. Type: string

Example: | ${encr_alg}= | Crypto Alg AES CBC 128 | | ${auth_alg}= | Integ Alg SHA1 96 | | IPsec Generate Keys | ${encr_alg} | ${auth_alg} |


${encr_key_len}=  Get Crypto Alg Key Len  ${crypto_alg}
${encr_key}=  Generate Random String  ${encr_key_len}
${auth_key_len}=  Get Integ Alg Key Len  ${integ_alg}
${auth_key}=  Generate Random String  ${auth_key_len}
Set Test Variable  ${encr_key}
Set Test Variable  ${auth_key}
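`Generate Random String` produces key material of the exact length the algorithm requires. A hedged Python sketch of the same idea; the key lengths shown are the usual values for AES-CBC-128 and SHA1-96, assumed here rather than read from the real Get Crypto Alg Key Len / Get Integ Alg Key Len keywords:

```python
import secrets
import string

# Assumed key lengths in bytes for illustration only.
AES_CBC_128_KEY_LEN = 16
SHA1_96_KEY_LEN = 20

def generate_key(length: int) -> str:
    """Random printable key material, as 'Generate Random String' would."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

encr_key = generate_key(AES_CBC_128_KEY_LEN)
auth_key = generate_key(SHA1_96_KEY_LEN)
assert len(encr_key) == 16 and len(auth_key) == 20
```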

Setup Path for IPsec testing

Setup path for IPsec testing TG<->DUT1.

_NOTE:_ This KW sets following test case variables: - tg_node - TG node. Type: dictionary - tg_if - TG interface connected to DUT. Type: string - tg_if_mac - TG interface MAC. Type: string - dut_node - DUT node. Type: dictionary - dut_if - DUT interface connected to TG. Type: string - dut_if_mac - DUT interface MAC. Type: string - dut_lo - DUT loopback interface. Type: string

Example: | Setup Path for IPsec testing |


Append Nodes  ${nodes['TG']}  ${nodes['DUT1']}
Compute Path
${tg_if}  ${tg_node}=  Next Interface
${dut_if}  ${dut_node}=  Next Interface
${dut_if_mac}=  Get Interface Mac  ${dut_node}  ${dut_if}
${tg_if_mac}=  Get Interface Mac  ${tg_node}  ${tg_if}
${dut_lo}=  Vpp Create Loopback  ${dut_node}
Set Interface State  ${dut_node}  ${dut_if}  up
Set Interface State  ${dut_node}  ${dut_lo}  up
Vpp Node Interfaces Ready Wait  ${dut_node}
Set Test Variable  ${tg_node}
Set Test Variable  ${tg_if}
Set Test Variable  ${tg_if_mac}
Set Test Variable  ${dut_node}
Set Test Variable  ${dut_if}
Set Test Variable  ${dut_if_mac}
Set Test Variable  ${dut_lo}

Setup Topology for IPv4 IPsec testing

Setup topology for IPv4 IPsec testing.

_NOTE:_ This KW sets following test case variables: - dut_tun_ip - DUT tunnel IP address. Type: string - dut_src_ip - DUT source IP address. Type: string - tg_tun_ip - TG tunnel IP address. Type: string - tg_src_ip - TG source IP address. Type: string

Example: | Setup Topology for IPv4 IPsec testing |


Setup Path for IPsec testing
Set Interface Address  ${dut_node}  ${dut_if}  ${dut_if_ip4}  ${ip4_plen}
Set Interface Address  ${dut_node}  ${dut_lo}  ${dut_lo_ip4}  ${ip4_plen}
dut1_v4.Set Arp  ${dut_if}  ${tg_if_ip4}  ${tg_if_mac}
Vpp Route Add  ${dut_node}  ${tg_lo_ip4}  ${ip4_plen}  ${tg_if_ip4}  ${dut_if}
Set Test Variable  ${dut_tun_ip}  ${dut_if_ip4}
Set Test Variable  ${dut_src_ip}  ${dut_lo_ip4}
Set Test Variable  ${tg_tun_ip}  ${tg_if_ip4}
Set Test Variable  ${tg_src_ip}  ${tg_lo_ip4}

Setup Topology for IPv6 IPsec testing

Setup topology for IPv6 IPsec testing.

_NOTE:_ This KW sets following test case variables: - dut_tun_ip - DUT tunnel IP address. Type: string - dut_src_ip - DUT source IP address. Type: string - tg_tun_ip - TG tunnel IP address. Type: string - tg_src_ip - TG source IP address. Type: string

Example: | Setup Topology for IPv6 IPsec testing |


Setup Path for IPsec testing
VPP Set If IPv6 Addr  ${dut_node}  ${dut_if}  ${dut_if_ip6}  ${ip6_plen}
VPP Set If IPv6 Addr  ${dut_node}  ${dut_lo}  ${dut_lo_ip6}  ${ip6_plen}
Add IP Neighbor  ${dut_node}  ${dut_if}  ${tg_if_ip6}  ${tg_if_mac}
Vpp All RA Suppress Link Layer  ${nodes}
Vpp Route Add  ${dut_node}  ${tg_lo_ip6}  ${ip6_plen_rt}  ${tg_if_ip6}  ${dut_if}
Set Test Variable  ${dut_tun_ip}  ${dut_if_ip6}
Set Test Variable  ${dut_src_ip}  ${dut_lo_ip6}
Set Test Variable  ${tg_tun_ip}  ${tg_if_ip6}
Set Test Variable  ${tg_src_ip}  ${tg_lo_ip6}

VPP Setup IPsec Manual Keyed Connection

Setup IPsec manual keyed connection on VPP node.

Arguments: - node - VPP node to setup IPsec on. Type: dictionary - interface - Interface to enable IPsec on. Type: string - crypto_alg - Encryption algorithm. Type: enum - crypto_key - Encryption key. Type: string - integ_alg - Integrity algorithm. Type: enum - integ_key - Integrity key. Type: string - l_spi - Local SPI. Type: integer - r_spi - Remote SPI. Type: integer - l_ip - Local IP address. Type: string - r_ip - Remote IP address. Type: string - l_tunnel - Local tunnel IP address (optional). Type: string - r_tunnel - Remote tunnel IP address (optional). Type: string

_NOTE:_ This KW sets following test case variables: - l_sa_id - r_sa_id

Example: | ${encr_alg}= | Crypto Alg AES CBC 128 | | ${auth_alg}= | Integ Alg SHA1 96 | | VPP Setup IPsec Manual Keyed Connection | ${nodes['DUT1']} | GigabitEthernet0/8/0 | ${encr_alg} | sixteenbytes_key | ${auth_alg} | twentybytessecretkey | ${1000} | ${1001} | 192.168.4.4 | 192.168.3.3 | 192.168.100.3 | 192.168.100.2 |


Set Test Variable  ${l_sa_id}  ${10}
Set Test Variable  ${r_sa_id}  ${20}
${spd_id}=  Set Variable  ${1}
${p_hi}=  Set Variable  ${100}
${p_lo}=  Set Variable  ${10}
VPP IPsec Add SAD Entry  ${node}  ${l_sa_id}  ${l_spi}  ${crypto_alg}  ${crypto_key}  ${integ_alg}  ${integ_key}  ${l_tunnel}  ${r_tunnel}
VPP IPsec Add SAD Entry  ${node}  ${r_sa_id}  ${r_spi}  ${crypto_alg}  ${crypto_key}  ${integ_alg}  ${integ_key}  ${r_tunnel}  ${l_tunnel}
VPP IPsec Add SPD  ${node}  ${spd_id}
VPP IPsec SPD Add If  ${node}  ${spd_id}  ${interface}
${action}=  Policy Action Bypass
VPP IPsec SPD Add Entry  ${node}  ${spd_id}  ${p_hi}  ${action}  inbound=${TRUE}  proto=${ESP_PROTO}
VPP IPsec SPD Add Entry  ${node}  ${spd_id}  ${p_hi}  ${action}  inbound=${FALSE}  proto=${ESP_PROTO}
${action}=  Policy Action Protect
VPP IPsec SPD Add Entry  ${node}  ${spd_id}  ${p_lo}  ${action}  sa_id=${r_sa_id}  laddr_range=${l_ip}  raddr_range=${r_ip}  inbound=${TRUE}
VPP IPsec SPD Add Entry  ${node}  ${spd_id}  ${p_lo}  ${action}  sa_id=${l_sa_id}  laddr_range=${l_ip}  raddr_range=${r_ip}  inbound=${FALSE}
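The keyword installs two BYPASS entries at priority 100 so that ESP packets themselves traverse the SPD untouched, and two PROTECT entries at priority 10 for the plaintext traffic selectors. A minimal model of the selection rule (highest matching priority wins); the entry layout here is illustrative, not VPP's internal representation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SpdEntry:
    priority: int
    action: str
    match: Callable[[dict], bool]

# Mirrors the keyword: high-priority BYPASS for ESP packets themselves,
# lower-priority PROTECT for everything hitting the traffic selectors.
ESP_PROTO = 50
spd = [
    SpdEntry(100, "bypass", lambda pkt: pkt["proto"] == ESP_PROTO),
    SpdEntry(10, "protect", lambda pkt: True),
]

def lookup(pkt: dict) -> str:
    """Among matching entries, the highest-priority one decides."""
    matching = [e for e in spd if e.match(pkt)]
    return max(matching, key=lambda e: e.priority).action

assert lookup({"proto": ESP_PROTO}) == "bypass"  # ESP passes through
assert lookup({"proto": 17}) == "protect"        # inner UDP gets protected
```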

VPP Update IPsec SA Keys

Update IPsec SA keys on VPP node.

Arguments: - node - VPP node to update SA keys on. Type: dictionary - l_sa_id - Local SA ID. Type: string - r_sa_id - Remote SA ID. Type: string - crypto_key - Encryption key. Type: string - integ_key - Integrity key. Type: string

Example: | VPP Update IPsec SA Keys | ${nodes['DUT1']} | 10 | 20 | sixteenbytes_key | twentybytessecretkey |


VPP IPsec SA Set Key  ${dut_node}  ${l_sa_id}  ${crypto_key}  ${integ_key}
VPP IPsec SA Set Key  ${dut_node}  ${r_sa_id}  ${crypto_key}  ${integ_key}

Send and Receive IPsec Packet

Send IPsec packet from TG to DUT. Receive IPsec packet from DUT on TG and verify ESP encapsulation.

Arguments: - node - TG node. Type: dictionary - interface - TG Interface. Type: string - dst_mac - Destination MAC. Type: string - crypto_alg - Encryption algorithm. Type: enum - crypto_key - Encryption key. Type: string - integ_alg - Integrity algorithm. Type: enum - integ_key - Integrity key. Type: string - l_spi - Local SPI. Type: integer - r_spi - Remote SPI. Type: integer - l_ip - Local IP address. Type: string - r_ip - Remote IP address. Type: string - l_tunnel - Local tunnel IP address (optional). Type: string - r_tunnel - Remote tunnel IP address (optional). Type: string

Example: | ${encr_alg}= | Crypto Alg AES CBC 128 | | ${auth_alg}= | Integ Alg SHA1 96 | | Send and Receive IPsec Packet | ${nodes['TG']} | eth1 | 52:54:00:d4:d8:22 | ${encr_alg} | sixteenbytes_key | ${auth_alg} | twentybytessecretkey | ${1001} | ${1000} | 192.168.3.3 | 192.168.4.4 | 192.168.100.2 | 192.168.100.3 |


${src_mac}=  Get Interface Mac  ${node}  ${interface}
${if_name}=  Get Interface Name  ${node}  ${interface}
${args}=  Traffic Script Gen Arg  ${if_name}  ${if_name}  ${src_mac}  ${dst_mac}  ${l_ip}  ${r_ip}
${crypto_alg_str}=  Get Crypto Alg Scapy Name  ${crypto_alg}
${integ_alg_str}=  Get Integ Alg Scapy Name  ${integ_alg}
${args}=  Catenate  ${args}  --crypto_alg ${crypto_alg_str}  --crypto_key ${crypto_key}  --integ_alg ${integ_alg_str}  --integ_key ${integ_key}  --l_spi ${l_spi}  --r_spi ${r_spi}
${args}=  Set Variable If  "${l_tunnel}" == "${None}"  ${args}  ${args} --src_tun ${l_tunnel}
${args}=  Set Variable If  "${r_tunnel}" == "${None}"  ${args}  ${args} --dst_tun ${r_tunnel}
Run Traffic Script On Node  ipsec.py  ${node}  ${args}

ipv4 module

Setup IPv4 addresses on all DUT nodes in topology

Setup IPv4 address on all DUTs in topology


${interfaces}=  VPP nodes set ipv4 addresses  ${nodes}  ${nodes_addr}
: FOR  ${interface}  IN  @{interfaces}
\    Set Interface State  @{interface}  up  if_type=name

Routes are set up for IPv4 testing

Setup routing on all VPP nodes required for IPv4 tests


Append Nodes  ${nodes['DUT1']}  ${nodes['DUT2']}
Compute Path
${tg}=  Set Variable  ${nodes['TG']}
${dut1_if}  ${dut1}=  First Interface
${dut2_if}  ${dut2}=  Last Interface
${dut1_if_addr}=  Get IPv4 address of node "${dut1}" interface "${dut1_if}" from "${nodes_addr}"
${dut2_if_addr}=  Get IPv4 address of node "${dut2}" interface "${dut2_if}" from "${nodes_addr}"
@{tg_dut1_links}=  Get active links connecting "${tg}" and "${dut1}"
@{tg_dut2_links}=  Get active links connecting "${tg}" and "${dut2}"
: FOR  ${link}  IN  @{tg_dut1_links}
\    ${net}=  Get Link Address  ${link}  ${nodes_addr}
\    ${prefix}=  Get Link Prefix  ${link}  ${nodes_addr}
\    Vpp Route Add  ${dut2}  ${net}  ${prefix}  ${dut1_if_addr}  ${dut2_if}
: FOR  ${link}  IN  @{tg_dut2_links}
\    ${net}=  Get Link Address  ${link}  ${nodes_addr}
\    ${prefix}=  Get Link Prefix  ${link}  ${nodes_addr}
\    Vpp Route Add  ${dut1}  ${net}  ${prefix}  ${dut2_if_addr}  ${dut1_if}

Setup DUT nodes for IPv4 testing


Setup IPv4 addresses on all DUT nodes in topology  ${nodes}  ${nodes_ipv4_addr}
Setup ARP on all DUTs  ${nodes}  ${nodes_ipv4_addr}
Routes are set up for IPv4 testing  ${nodes}  ${nodes_ipv4_addr}
All Vpp Interfaces Ready Wait  ${nodes}

TG interface "${tg_port}" can route to node "${node}" interface "${port}" "${hops}" hops away using IPv4

Node "${nodes['TG']}" interface "${tg_port}" can route to node "${node}" interface "${port}" "${hops}" hops away using IPv4

Node "${from_node}" interface "${from_port}" can route to node "${to_node}" interface "${to_port}" ${hops} hops away using IPv4

${src_ip}=  Get IPv4 address of node "${from_node}" interface "${from_port}" from "${nodes_ipv4_addr}"
${dst_ip}=  Get IPv4 address of node "${to_node}" interface "${to_port}" from "${nodes_ipv4_addr}"
${src_mac}=  Get interface mac  ${from_node}  ${from_port}
${dst_mac}=  Get interface mac  ${to_node}  ${to_port}
${is_dst_tg}=  Is TG node  ${to_node}
${adj_node}  ${adj_int}=  Get adjacent node and interface  ${nodes}  ${from_node}  ${from_port}
${from_port_name}=  Get interface name  ${from_node}  ${from_port}
${to_port_name}=  Get interface name  ${to_node}  ${to_port}
${adj_int_mac}=  Get interface MAC  ${adj_node}  ${adj_int}
${args}=  Traffic Script Gen Arg  ${to_port_name}  ${from_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
${args}=  Catenate  ${args}  --hops ${hops}  --first_hop_mac ${adj_int_mac}  --is_dst_tg ${is_dst_tg}
Run Traffic Script On Node  ipv4_ping_ttl_check.py  ${from_node}  ${args}
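The `--hops` argument lets ipv4_ping_ttl_check.py verify that each router on the path decremented the IPv4 TTL exactly once. The arithmetic the check relies on:

```python
def expected_ttl(initial_ttl: int, hops: int) -> int:
    """Each router on the path decrements the IPv4 TTL by one."""
    return initial_ttl - hops

# With a common initial TTL of 64 and 2 DUT hops, the TG should
# receive the packet back with TTL 62.
assert expected_ttl(64, 2) == 62
```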

Ipv4 icmp echo sweep

Type of the src_node must be TG and dst_node must be DUT


Append Nodes  ${src_node}  ${dst_node}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Interface
${src_ip}=  Get IPv4 address of node "${src_node}" interface "${src_port}" from "${nodes_ipv4_addr}"
${dst_ip}=  Get IPv4 address of node "${dst_node}" interface "${dst_port}" from "${nodes_ipv4_addr}"
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${dst_node}  ${dst_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${src_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
${args}=  Set Variable  ${args} --start_size ${start_size} --end_size ${end_size} --step ${step}
Run Traffic Script On Node  ipv4_sweep_ping.py  ${src_node}  ${args}
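The `--start_size`, `--end_size` and `--step` arguments define the ICMP payload sizes the sweep-ping script walks through. Assuming the sweep is inclusive of end_size, the size list can be sketched as:

```python
def sweep_sizes(start_size: int, end_size: int, step: int):
    """Payload sizes the sweep-ping script iterates over (inclusive)."""
    return list(range(start_size, end_size + 1, step))

assert sweep_sizes(0, 9, 3) == [0, 3, 6, 9]
```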

Send ARP request and validate response


${link_name}=  Get first active connecting link between node "${tg_node}" and "${vpp_node}"
${src_if}=  Get interface by link name  ${tg_node}  ${link_name}
${dst_if}=  Get interface by link name  ${vpp_node}  ${link_name}
${src_ip}=  Get IPv4 address of node "${tg_node}" interface "${src_if}" from "${nodes_ipv4_addr}"
${dst_ip}=  Get IPv4 address of node "${vpp_node}" interface "${dst_if}" from "${nodes_ipv4_addr}"
${src_mac}=  Get node link mac  ${tg_node}  ${link_name}
${dst_mac}=  Get node link mac  ${vpp_node}  ${link_name}
${src_if_name}=  Get interface name  ${tg_node}  ${src_if}
${args}=  Traffic Script Gen Arg  ${src_if_name}  ${src_if_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
Run Traffic Script On Node  arp_request.py  ${tg_node}  ${args}
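arp_request.py sends an ARP who-has for dst_ip and validates the reply. A self-contained sketch of the 28-byte ARP payload such a request carries (RFC 826); build_arp_request is a hypothetical helper for illustration, not the script's API:

```python
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    """Minimal ARP who-has payload: Ethernet/IPv4 hardware and protocol
    types, opcode 1 (request), target MAC left as all zeros."""
    htype, ptype, hlen, plen, oper = 1, 0x0800, 6, 4, 1
    return struct.pack("!HHBBH6s4s6s4s",
                       htype, ptype, hlen, plen, oper,
                       src_mac, src_ip, b"\x00" * 6, dst_ip)

pkt = build_arp_request(b"\xaa" * 6, b"\xc0\xa8\x01\x01", b"\xc0\xa8\x01\x02")
assert len(pkt) == 28
assert pkt[6:8] == b"\x00\x01"  # opcode field: ARP request
```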

IP addresses are set on interfaces

Iterate through the @{args} list and call Set Interface Address for every (${dut_node}, ${interface}, ${address}, ${prefix}) tuple.

Arguments: - ${dut_node} - Node where the IP address should be set. Type: dictionary - ${interface} - Interface name. Type: string - ${address} - IP address. Type: string - ${prefix} - Prefix length. Type: integer

Example:

| IP addresses are set on interfaces | ${dut1_node} | ${dut1_to_dut2} | 192.168.1.1 | 24 | | ... | ${dut1_node} | ${dut1_to_tg} | 192.168.2.1 | 24 |


: FOR  ${dut_node}  ${interface}  ${address}  ${prefix}  IN  @{args}
\    Set Interface Address  ${dut_node}  ${interface}  ${address}  ${prefix}

Node replies to ICMP echo request

Run traffic script that waits for ICMP reply and ignores all other packets.

Arguments: - tg_node - TG node where to run traffic script. Type: dictionary - tg_interface - TG interface from where to send ICMP echo request. Type: string - dst_mac - Destination MAC address. Type: string - src_mac - Source MAC address. Type: string - dst_ip - Destination IP address. Type: string - src_ip - Source IP address. Type: string - timeout - Wait timeout in seconds (Default: 10). Type: integer

Example:

| Node replies to ICMP echo request | ${nodes['TG']} | eth2 | 08:00:27:46:2b:4c | 08:00:27:66:b8:57 | 192.168.23.10 | 192.168.23.1 | 10 |


${tg_interface_name}=  Get interface name  ${tg_node}  ${tg_interface}
${args}=  Catenate  --rx_if  ${tg_interface_name}  --tx_if  ${tg_interface_name}  --dst_mac  ${dst_mac}  --src_mac  ${src_mac}  --dst_ip  ${dst_ip}  --src_ip  ${src_ip}  --timeout  ${timeout}
Run Traffic Script On Node  send_icmp_wait_for_reply.py  ${tg_node}  ${args}

ipv6 module

Ipv6 icmp echo

Type of the src_node must be TG and dst_node must be DUT


Append Nodes  ${tg_node}  ${dut_node}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Interface
${src_ip}=  Get Node Port Ipv6 Address  ${src_node}  ${src_port}  ${nodes_addr}
${dst_ip}=  Get Node Port Ipv6 Address  ${dst_node}  ${dst_port}  ${nodes_addr}
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${dst_node}  ${dst_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${src_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
Run Traffic Script On Node  icmpv6_echo.py  ${tg_node}  ${args}
Vpp dump stats  ${dst_node}
${ipv6_counter}=  Vpp get interface ipv6 counter  ${dst_node}  ${dst_port}
Should Be Equal  ${ipv6_counter}  ${2}  #ICMPv6 neighbor advertisement + ICMPv6 echo request

Ipv6 icmp echo sweep

Type of the src_node must be TG and dst_node must be DUT


Append Nodes  ${src_node}  ${dst_node}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Interface
${src_ip}=  Get Node Port Ipv6 Address  ${src_node}  ${src_port}  ${nodes_addr}
${dst_ip}=  Get Node Port Ipv6 Address  ${dst_node}  ${dst_port}  ${nodes_addr}
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${dst_node}  ${dst_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${src_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
${args}=  Set Variable  ${args} --start_size ${start_size} --end_size ${end_size} --step ${step}
Run Traffic Script On Node  ipv6_sweep_ping.py  ${src_node}  ${args}

Ipv6 tg to dut1 egress

Send traffic from TG to first DUT egress interface


Append Nodes  ${tg_node}  ${first_dut}  ${second_dut}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Egress Interface
${hop_port}  ${hop_node}=  First Ingress Interface
${src_ip}=  Get Node Port Ipv6 Address  ${src_node}  ${src_port}  ${nodes_addr}
${dst_ip}=  Get Node Port Ipv6 Address  ${dst_node}  ${dst_port}  ${nodes_addr}
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${hop_node}  ${hop_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${src_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
Run Traffic Script On Node  icmpv6_echo.py  ${tg_node}  ${args}

Ipv6 tg to dut2 via dut1

Send traffic from TG to second DUT through first DUT


Append Nodes  ${tg_node}  ${first_dut}  ${second_dut}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Interface
${hop_port}  ${hop_node}=  First Ingress Interface
${src_ip}=  Get Node Port Ipv6 Address  ${src_node}  ${src_port}  ${nodes_addr}
${dst_ip}=  Get Node Port Ipv6 Address  ${dst_node}  ${dst_port}  ${nodes_addr}
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${hop_node}  ${hop_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${src_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
Run Traffic Script On Node  icmpv6_echo.py  ${tg_node}  ${args}

Ipv6 tg to dut2 egress via dut1

Send traffic from TG to second DUT egress interface through first DUT


Append Nodes  ${tg_node}  ${first_dut}  ${second_dut}  ${tg_node}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Egress Interface
${hop_port}  ${hop_node}=  First Ingress Interface
${src_ip}=  Get Node Port Ipv6 Address  ${src_node}  ${src_port}  ${nodes_addr}
${dst_ip}=  Get Node Port Ipv6 Address  ${dst_node}  ${dst_port}  ${nodes_addr}
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${hop_node}  ${hop_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${src_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
Run Traffic Script On Node  icmpv6_echo.py  ${tg_node}  ${args}

Ipv6 tg to tg routed

Send traffic from one TG port to another through DUT nodes and send the reply back; also verify hop limit processing


Append Nodes  ${tg_node}  ${first_dut}  ${second_dut}  ${tg_node}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Interface
${src_nh_port}  ${src_nh_node}=  First Ingress Interface
${dst_nh_port}  ${dst_nh_node}=  Last Egress Interface
${src_ip}=  Get Node Port Ipv6 Address  ${src_node}  ${src_port}  ${nodes_addr}
${dst_ip}=  Get Node Port Ipv6 Address  ${dst_node}  ${dst_port}  ${nodes_addr}
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${dst_node}  ${dst_port}
${src_nh_mac}=  Get Interface Mac  ${src_nh_node}  ${src_nh_port}
${dst_nh_mac}=  Get Interface Mac  ${dst_nh_node}  ${dst_nh_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${dst_port_name}=  Get interface name  ${dst_node}  ${dst_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${dst_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
${args}=  Catenate  ${args}  --src_nh_mac ${src_nh_mac}  --dst_nh_mac ${dst_nh_mac}  --h_num 2
Run Traffic Script On Node  icmpv6_echo_req_resp.py  ${tg_node}  ${args}

Ipv6 neighbor solicitation

Send IPv6 neighbor solicitation from TG to DUT


Append Nodes  ${tg_node}  ${dut_node}
Compute Path
${src_port}  ${src_node}=  First Interface
${dst_port}  ${dst_node}=  Last Interface
${src_ip}=  Get Node Port Ipv6 Address  ${src_node}  ${src_port}  ${nodes_addr}
${dst_ip}=  Get Node Port Ipv6 Address  ${dst_node}  ${dst_port}  ${nodes_addr}
${src_mac}=  Get Interface Mac  ${src_node}  ${src_port}
${dst_mac}=  Get Interface Mac  ${dst_node}  ${dst_port}
${src_port_name}=  Get interface name  ${src_node}  ${src_port}
${args}=  Traffic Script Gen Arg  ${src_port_name}  ${src_port_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
Run Traffic Script On Node  ipv6_ns.py  ${src_node}  ${args}
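A neighbor solicitation is addressed to the target's solicited-node multicast group. The group address derivation (RFC 4291) is easy to reproduce with the standard library:

```python
import ipaddress

def solicited_node_multicast(target: str) -> str:
    """Solicited-node multicast group for an IPv6 target (RFC 4291):
    ff02::1:ff00:0/104 plus the low 24 bits of the target address."""
    low24 = int(ipaddress.IPv6Address(target)) & 0xFFFFFF
    group = int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24
    return str(ipaddress.IPv6Address(group))

assert solicited_node_multicast("2001:db8::1") == "ff02::1:ff00:1"
```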

Setup ipv6 to all dut in topology

Setup IPv6 address on all DUTs


Setup all DUTs before test
${interfaces}=  Nodes Set Ipv6 Addresses  ${nodes}  ${nodes_addr}
: FOR  ${interface}  IN  @{interfaces}
\    Set Interface State  @{interface}  up  if_type=name
All Vpp Interfaces Ready Wait  ${nodes}

Clear ipv6 on all dut in topology

Remove IPv6 address on all DUTs


Nodes Clear Ipv6 Addresses  ${nodes}  ${nodes_addr}

Vpp nodes setup ipv6 routing

Setup routing on all VPP nodes required for IPv6 tests


Append Nodes  ${nodes['DUT1']}  ${nodes['DUT2']}
Compute Path
${tg}=  Set Variable  ${nodes['TG']}
${dut1_if}  ${dut1}=  First Interface
${dut2_if}  ${dut2}=  Last Interface
${dut1_if_addr}=  Get Node Port Ipv6 Address  ${dut1}  ${dut1_if}  ${nodes_addr}
${dut2_if_addr}=  Get Node Port Ipv6 Address  ${dut2}  ${dut2_if}  ${nodes_addr}
@{tg_dut1_links}=  Get active links connecting "${tg}" and "${dut1}"
@{tg_dut2_links}=  Get active links connecting "${tg}" and "${dut2}"
: FOR  ${link}  IN  @{tg_dut1_links}
\    ${net}=  Get Link Address  ${link}  ${nodes_addr}
\    ${prefix}=  Get Link Prefix  ${link}  ${nodes_addr}
\    Vpp Route Add  ${dut2}  ${net}  ${prefix}  ${dut1_if_addr}  ${dut2_if}
: FOR  ${link}  IN  @{tg_dut2_links}
\    ${net}=  Get Link Address  ${link}  ${nodes_addr}
\    ${prefix}=  Get Link Prefix  ${link}  ${nodes_addr}
\    Vpp Route Add  ${dut1}  ${net}  ${prefix}  ${dut2_if_addr}  ${dut1_if}

l2_traffic module

Send and receive ICMP Packet

Send ICMPv4/ICMPv6 echo request from source interface to destination interface. Packet can be set with Dot1q or Dot1ad tag(s) when required.

Arguments:

  • tg_node - TG node. Type: dictionary
  • src_int - Source interface. Type: string
  • dst_int - Destination interface. Type: string
  • src_ip - Source IP address (Optional). Type: string
  • dst_ip - Destination IP address (Optional). Type: string
  • encaps - Encapsulation: Dot1q or Dot1ad (Optional). Type: string
  • vlan1 - VLAN (outer) tag (Optional). Type: integer
  • vlan2 - VLAN inner tag (Optional). Type: integer

Return:

  • No value returned

Example:

_NOTE:_ Default IP is IPv4

| Send and receive ICMP Packet | ${nodes['TG']} | ${tg_to_dut_if1} | ${tg_to_dut_if2} | | Send and receive ICMP Packet | ${nodes['TG']} | ${tg_to_dut1} | ${tg_to_dut2} | encaps=Dot1q | vlan1=100 | | Send and receive ICMP Packet | ${nodes['TG']} | ${tg_to_dut1} | ${tg_to_dut2} | encaps=Dot1ad | vlan1=110 | vlan2=220 |


${src_mac}=  Get Interface Mac  ${tg_node}  ${src_int}
${dst_mac}=  Get Interface Mac  ${tg_node}  ${dst_int}
${src_int_name}=  Get interface name  ${tg_node}  ${src_int}
${dst_int_name}=  Get interface name  ${tg_node}  ${dst_int}
${args}=  Traffic Script Gen Arg  ${dst_int_name}  ${src_int_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
${args1}=  Run Keyword Unless  '${encaps}' == '${EMPTY}'  Catenate  --encaps ${encaps}  --vlan1 ${vlan1}
${args2}=  Run Keyword Unless  '${vlan2}' == '${EMPTY}'  Set Variable  --vlan2 ${vlan2}
${args}=  Run Keyword If  '${args1}' == 'None'  Set Variable  ${args}  ELSE IF  '${args2}' == 'None'  Catenate  ${args}  ${args1}  ELSE  Catenate  ${args}  ${args1}  ${args2}
Run Traffic Script On Node  send_ip_icmp.py  ${tg_node}  ${args}
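The Run Keyword If/Unless lines above assemble the optional encapsulation flags only when they were supplied. The same decision tree in Python (a sketch of the keyword's logic, not of send_ip_icmp.py's argument parser):

```python
from typing import Optional

def build_args(base: str, encaps: str = "", vlan1: Optional[int] = None,
               vlan2: Optional[int] = None) -> str:
    """Append --encaps/--vlan1 only when an encapsulation was given,
    and --vlan2 only on top of that (mirrors the Catenate chain)."""
    args = base
    if encaps:
        args += f" --encaps {encaps} --vlan1 {vlan1}"
        if vlan2 is not None:
            args += f" --vlan2 {vlan2}"
    return args

assert build_args("BASE") == "BASE"
assert build_args("BASE", "Dot1q", 100) == "BASE --encaps Dot1q --vlan1 100"
```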

Send and receive ICMP Packet should fail

Send ICMPv4/ICMPv6 echo request from source interface to destination interface and expect failure with ICMP echo Rx timeout error message.

Arguments:

  • tg_node - TG node. Type: dictionary
  • src_int - Source interface. Type: string
  • dst_int - Destination interface. Type: string
  • src_ip - Source IP address (Optional). Type: string
  • dst_ip - Destination IP address (Optional). Type: string

Return:

  • No value returned

Example:

_NOTE:_ Default IP is IPv4

| Send and receive ICMP Packet should fail | ${nodes['TG']} | ${tg_to_dut_if1} | ${tg_to_dut_if2} |


${src_mac}=  Get Interface Mac  ${tg_node}  ${src_int}
${dst_mac}=  Get Interface Mac  ${tg_node}  ${dst_int}
${src_int_name}=  Get interface name  ${tg_node}  ${src_int}
${dst_int_name}=  Get interface name  ${tg_node}  ${dst_int}
${args}=  Traffic Script Gen Arg  ${dst_int_name}  ${src_int_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
Run Keyword And Expect Error  ICMP echo Rx timeout  Run Traffic Script On Node  send_ip_icmp.py  ${tg_node}  ${args}

Send and receive ICMPv4 bidirectionally

Send ICMPv4 echo request from both directions, from interface1 to interface2 and from interface2 to interface1.

Arguments:

  • tg_node - TG node. Type: dictionary
  • int1 - Source interface. Type: string
  • int2 - Destination interface. Type: string
  • src_ip - Source IP address (Optional). Type: string
  • dst_ip - Destination IP address (Optional). Type: string

Return:

  • No value returned

Example:

| Send and receive ICMPv4 bidirectionally | ${nodes['TG']} | ${tg_to_dut_if1} | ${tg_to_dut_if2} |


Send and receive ICMP Packet  ${tg_node}  ${int1}  ${int2}  ${src_ip}  ${dst_ip}
Send and receive ICMP Packet  ${tg_node}  ${int2}  ${int1}  ${dst_ip}  ${src_ip}

Send and receive ICMPv6 bidirectionally

Send ICMPv6 echo request from both directions, from interface1 to interface2 and from interface2 to interface1.

Arguments:

  • tg_node - TG node. Type: dictionary
  • int1 - Source interface. Type: string
  • int2 - Destination interface. Type: string
  • src_ip - Source IP address (Optional). Type: string
  • dst_ip - Destination IP address (Optional). Type: string

Return:

  • No value returned

Example:

| Send and receive ICMPv6 bidirectionally | ${nodes['TG']} | ${tg_to_dut_if1} | ${tg_to_dut_if2} |


Send and receive ICMP Packet  ${tg_node}  ${int1}  ${int2}  ${src_ip}  ${dst_ip}
Send and receive ICMP Packet  ${tg_node}  ${int2}  ${int1}  ${dst_ip}  ${src_ip}

l2_xconnect module

L2 setup xconnect on DUT

Setup Bidirectional Cross Connect on DUTs


Set Interface State  ${node}  ${if1}  up
Set Interface State  ${node}  ${if2}  up
Vpp Setup Bidirectional Cross Connect  ${node}  ${if1}  ${if2}

map module

Send IPv4 UDP and check headers for lightweight 4over6

Send empty UDP to given IPv4 dst and UDP port and check the received packet's headers (Ethernet, IPv6, IPv4, UDP).

Arguments: - tg_node - Node where to run traffic script. Type: dictionary - tx_if - Interface from where to send IPv4 packet. Type: string - rx_if - Interface where to receive IPinIP packet. Type: string - tx_dst_mac - Destination MAC address of IPv4 packet. Type: string - tx_dst_ipv4 - Destination IPv4 address. Type: string - tx_src_ipv4 - Source IPv4 address. Type: string - tx_dst_udp_port - Destination UDP port. Type: integer - rx_dst_mac - Expected destination MAC address. Type: string - rx_src_mac - Expected source MAC address. Type: string - dst_ipv6 - Expected destination IPv6 address. Type: string - src_ipv6 - Expected source IPv6 address. Type: string

Return: - No value returned

Example:

| Send IPv4 UDP and check headers for lightweight 4over6 | ${tg_node} | eth3 | eth2 | 08:00:27:66:b8:57 | 20.0.0.1 | 20.0.0.2 | 1232 | 08:00:27:46:2b:4c | 08:00:27:f3:be:f0 | 2001:1::2 | 2001:1::1 |


${tx_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_name}  --rx_if  ${rx_name}  --tx_dst_mac  ${tx_dst_mac}  --tx_src_ipv4  ${tx_src_ipv4}  --tx_dst_ipv4  ${tx_dst_ipv4}  --tx_dst_udp_port  ${tx_dst_udp_port}  --rx_dst_mac  ${rx_dst_mac}  --rx_src_mac  ${rx_src_mac}  --src_ipv6  ${src_ipv6}  --dst_ipv6  ${dst_ipv6}
Run Traffic Script On Node  send_ipv4_udp_check_lw_4o6.py  ${tg_node}  ${args}
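What the check amounts to: the DUT must have wrapped the IPv4/UDP packet in an outer IPv6 header with the expected tunnel endpoints while leaving the inner headers intact. A toy model of that assertion (dictionaries stand in for parsed packets; the field names are invented for the sketch):

```python
def check_lw4o6_encap(rx: dict, tx: dict, src_ipv6: str, dst_ipv6: str) -> bool:
    """True when the received packet carries the expected outer IPv6
    addresses and an unmodified inner IPv4/UDP header."""
    return (rx["outer_src"] == src_ipv6
            and rx["outer_dst"] == dst_ipv6
            and rx["inner"] == tx["inner"])

tx = {"inner": ("20.0.0.2", "20.0.0.1", 1232)}  # src IP, dst IP, dst UDP port
rx = {"outer_src": "2001:1::1", "outer_dst": "2001:1::2",
      "inner": ("20.0.0.2", "20.0.0.1", 1232)}
assert check_lw4o6_encap(rx, tx, "2001:1::1", "2001:1::2")
```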

Send IPv4 ICMP and check headers for lightweight 4over6

Send ICMP request with ID to given IPv4 destination and check the received packet's headers (Ethernet, IPv6, IPv4, ICMP).

Arguments: - tg_node - Node where to run traffic script. Type: dictionary - tx_if - Interface from where to send ICMPv4 packet. Type: string - rx_if - Interface where to receive IPinIP packet. Type: string - tx_dst_mac - Destination MAC address of IPv4 packet. Type: string - tx_dst_ipv4 - Destination IPv4 address. Type: string - tx_src_ipv4 - Source IPv4 address. Type: string - tx_icmp_id - ICMP ID. Type: integer - rx_dst_mac - Expected destination MAC address. Type: string - rx_src_mac - Expected source MAC address. Type: string - dst_ipv6 - Expected destination IPv6 address. Type: string - src_ipv6 - Expected source IPv6 address. Type: string

Return: - No value returned

Example:

| Send IPv4 ICMP and check headers for lightweight 4over6 | ${tg_node} | eth3 | eth2 | 08:00:27:66:b8:57 | 20.0.0.1 | 20.0.0.2 | 1232 | 08:00:27:46:2b:4c | 08:00:27:f3:be:f0 | 2001:1::2 | 2001:1::1 |


${tx_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_name}  --rx_if  ${rx_name}  --tx_dst_mac  ${tx_dst_mac}  --tx_src_ipv4  ${tx_src_ipv4}  --tx_dst_ipv4  ${tx_dst_ipv4}  --tx_icmp_id  ${tx_icmp_id}  --rx_dst_mac  ${rx_dst_mac}  --rx_src_mac  ${rx_src_mac}  --src_ipv6  ${src_ipv6}  --dst_ipv6  ${dst_ipv6}
Run Traffic Script On Node  send_ipv4_icmp_check_lw_4o6.py  ${tg_node}  ${args}

Send IPv4 UDP in IPv6 and check headers for lightweight 4over6

Send empty UDP in IPv4 in IPv6 and check if IPv4 packet is correctly decapsulated from IPv6.

Arguments: - tg_node - Node where to run traffic script. Type: dictionary - tx_if - Interface from where to send IPv6 packet. Type: string - rx_if - Interface where to receive IPv4 packet. Type: string - tx_dst_mac - Destination MAC address of IPv6 packet. Type: string - tx_src_mac - Source MAC address of IPv6 packet. Type: string - tx_dst_ipv6 - Destination IPv6 address. Type: string - tx_src_ipv6 - Source IPv6 address. Type: string - tx_dst_ipv4 - Destination IPv4 address. Type: string - tx_src_ipv4 - Source IPv4 address. Type: string - tx_src_udp_port - Source UDP port. Type: integer - rx_dst_mac - Expected destination MAC address. Type: string - rx_src_mac - Expected source MAC address. Type: string

Return: - No value returned

Example:

| Send IPv4 UDP in IPv6 and check headers for lightweight 4over6 | ${tg_node} | eth3 | eth2 | 08:00:27:66:b8:57 | 08:00:27:33:54:21 | 2001:1::2 | 2001:1::1 | 20.0.0.1 | 20.0.0.2 | 1232 | 08:00:27:46:2b:4c | 08:00:27:f3:be:f0 |


${tx_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_name}  --rx_if  ${rx_name}  --tx_dst_mac  ${tx_dst_mac}  --tx_src_mac  ${tx_src_mac}  --tx_dst_ipv6  ${tx_dst_ipv6}  --tx_src_ipv6  ${tx_src_ipv6}  --tx_dst_ipv4  ${tx_dst_ipv4}  --tx_src_ipv4  ${tx_src_ipv4}  --tx_src_udp_port  ${tx_src_udp_port}  --rx_dst_mac  ${rx_dst_mac}  --rx_src_mac  ${rx_src_mac}
Run Traffic Script On Node  send_lw_4o6_check_ipv4_udp.py  ${tg_node}  ${args}

Send IPv4 UDP in IPv6 and check headers for lightweight hairpinning

Send empty UDP in IPv4 in IPv6 and check if IPv4 packet is correctly decapsulated and re-encapsulated to another lwB4.

Arguments: - tg_node - Node where to run traffic script. Type: string - tx_if - Interface from where to send IPv4 UDP packet encapsulated in IPv6. Type: string - rx_if - Interface where to receive hairpinned IPinIP packet. Type: string - tx_dst_mac - Destination MAC address of sent IPv6 packet. Type: string - tx_dst_ipv6 - Destination IPv6 address (lwAFTR). Type: string - tx_src_ipv6 - Source IPv6 address (lwB4_1). Type: string - tx_dst_ipv4 - Destination IPv4 address. Type: string - tx_src_ipv4 - Source IPv4 address. Type: string - tx_dst_udp_port - Destination UDP port (PSID_2 related). Type: integer - tx_src_udp_port - Source UDP port (PSID_1 related). Type: integer - rx_dst_mac - Expected destination MAC address. Type: string - rx_src_mac - Expected source MAC address. Type: string - rx_dst_ipv6 - Expected destination IPv6 address (lwB4_2). Type: string - rx_src_ipv6 - Expected source IPv6 address (lwAFTR). Type: string

Return: - No value returned

Example:

| Send IPv4 UDP in IPv6 and check headers for lightweight hairpinning | ${tg_node} | port3 | port3 | 08:00:27:f3:be:f0 | 2001:1::1 | 2001:1::2 | 20.0.0.1 | 20.0.0.1 | ${6232} | ${1232} | 08:00:27:46:2b:4c | 08:00:27:f3:be:f0 | 2001:1::3 | 2001:1::1 |


${tx_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_name}  --rx_if  ${rx_name}  --tx_dst_mac  ${tx_dst_mac}  --tx_dst_ipv6  ${tx_dst_ipv6}  --tx_src_ipv6  ${tx_src_ipv6}  --tx_dst_ipv4  ${tx_dst_ipv4}  --tx_src_ipv4  ${tx_src_ipv4}  --tx_dst_udp_port  ${tx_dst_udp_port}  --tx_src_udp_port  ${tx_src_udp_port}  --rx_dst_mac  ${rx_dst_mac}  --rx_src_mac  ${rx_src_mac}  --rx_dst_ipv6  ${rx_dst_ipv6}  --rx_src_ipv6  ${rx_src_ipv6}
Run Traffic Script On Node  send_lw_4o6_check_hairpinning_udp.py  ${tg_node}  ${args}

Send IPv4 UDP and check IPv6 headers for MAP-T

Send a UDP in IPv4 and check if IPv4 source and destination addresses are correctly translated into IPv6 addresses.

Arguments: - tg_node - Node where to run traffic script. Type: string - tx_if - Interface from where to send IPv4 UDP packet. Type: string - rx_if - Interface where to receive IPv6 UDP packet. Type: string - tx_dst_mac - Destination MAC address of IPv4 packet. Type: string - tx_dst_ipv4 - Destination IPv4 address. Type: string - tx_src_ipv4 - Source IPv4 address. Type: string - tx_dst_udp_port - Destination UDP port. Type: integer - rx_dst_mac - Expected destination MAC address. Type: string - rx_src_mac - Expected source MAC address. Type: string - dst_ipv6 - Expected destination IPv6 address. Type: string - src_ipv6 - Expected source IPv6 address. Type: string

Return: - No value returned

Example:

| Send IPv4 UDP and check IPv6 headers for MAP-T | ${tg_node} | port3 | port3 | 08:00:27:66:b8:57 | 20.169.201.219 | 100.0.0.1 | ${1232} | 08:00:27:46:2b:4c | 08:00:27:f3:be:f0 | 2001:db8::14a9:c9db:0 | 2001:db8:ffff::6400:1 |


${tx_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_name}  --rx_if  ${rx_name}  --tx_dst_mac  ${tx_dst_mac}  --tx_src_ipv4  ${tx_src_ipv4}  --tx_dst_ipv4  ${tx_dst_ipv4}  --tx_dst_udp_port  ${tx_dst_udp_port}  --rx_dst_mac  ${rx_dst_mac}  --rx_src_mac  ${rx_src_mac}  --rx_src_ipv6  ${src_ipv6}  --rx_dst_ipv6  ${dst_ipv6}
Run Traffic Script On Node  send_ipv4_udp_check_map_t.py  ${tg_node}  ${args}
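The translated addresses in the example above can be reproduced with a simplified sketch of the IPv4-in-IPv6 embedding. `embed_ipv4` is a hypothetical helper for illustration only; the full MAP-T mapping (RFC 7599) also involves EA bits and PSID, which are omitted here.

```python
import ipaddress

def embed_ipv4(prefix, ipv4):
    """Insert the IPv4 address (as 32 bits) before the last 16-bit group
    of the IPv6 prefix -- a simplified MAP-T style embedding."""
    base = int(ipaddress.IPv6Address(prefix))
    v4 = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(base | (v4 << 16)))

print(embed_ipv4('2001:db8::', '20.169.201.219'))  # 2001:db8::14a9:c9db:0
```

Here 20.169.201.219 is 0x14a9c9db, which reappears as the 14a9:c9db groups of the expected destination 2001:db8::14a9:c9db:0 in the example.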

Send IPv6 UDP and check IPv4 headers for MAP-T

Send a UDP in IPv6 and check if IPv6 source and destination addresses are correctly translated into IPv4 addresses.

Arguments: - tg_node - Node where to run traffic script. Type: string - tx_if - Interface from where to send IPv6 UDP packet. Type: string - rx_if - Interface where to receive IPv4 UDP packet. Type: string - tx_dst_mac - Destination MAC address of IPv6 packet. Type: string - tx_src_mac - Source MAC address of IPv6 packet. Type: string - tx_dst_ipv6 - Destination IPv6 address. Type: string - tx_src_ipv6 - Source IPv6 address. Type: string - tx_src_udp_port - Source UDP port. Type: integer - rx_dst_mac - Expected destination MAC address. Type: string - rx_src_mac - Expected source MAC address. Type: string - dst_ipv4 - Expected destination IPv4 address. Type: string - src_ipv4 - Expected source IPv4 address. Type: string

Return: - No value returned

Example:

| Send IPv6 UDP and check IPv4 headers for MAP-T | port3 | port4 | 08:00:27:f3:be:f0 | 2001:db8:ffff::6400:1 | 2001:db8::14a9:c9db:0 | ${1232} | 08:00:27:58:71:eb | 08:00:27:66:b8:57 | 100.0.0.1 | 20.169.201.219 |


${tx_name}=  Get interface name  ${tg_node}  ${tx_if}
${rx_name}=  Get interface name  ${tg_node}  ${rx_if}
${args}=  Catenate  --tx_if  ${tx_name}  --rx_if  ${rx_name}  --tx_dst_mac  ${tx_dst_mac}  --tx_src_mac  ${tx_src_mac}  --tx_src_ipv6  ${tx_src_ipv6}  --tx_dst_ipv6  ${tx_dst_ipv6}  --tx_src_udp_port  ${tx_src_udp_port}  --rx_dst_mac  ${rx_dst_mac}  --rx_src_mac  ${rx_src_mac}  --rx_src_ipv4  ${src_ipv4}  --rx_dst_ipv4  ${dst_ipv4}
Run Traffic Script On Node  send_ipv6_udp_check_map_t.py  ${tg_node}  ${args}

performance module

Calculate pps

Calculate pps for given rate and L2 frame size; an additional 20 B per frame (Ethernet preamble, start-of-frame delimiter and inter-frame gap) is added to the L2 frame size.

Arguments: - bps - Rate in bps. Type: integer - framesize - L2 frame size in Bytes. Type: integer

Return: - Calculated pps. Type: integer

Example:

| Calculate pps | 10000000000 | 64 |


${framesize}=  Get Frame Size  ${framesize}
${ret}=  Evaluate  (${bps}/((${framesize}+20)*8)).__trunc__()
Return From Keyword  ${ret}
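The same calculation can be sketched in Python (`calc_pps` is a hypothetical helper mirroring the keyword, not part of the library):

```python
def calc_pps(bps, framesize):
    """Packets per second for a given L1 rate and L2 frame size.

    The extra 20 B per frame account for Ethernet preamble (7 B),
    start-of-frame delimiter (1 B) and inter-frame gap (12 B).
    """
    return int(bps // ((framesize + 20) * 8))


print(calc_pps(10000000000, 64))  # 14880952
```

At 10 Gbps, 64 B frames occupy 84 B on the wire, giving the well-known 14.88 Mpps line rate.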

Get Frame Size

Framesize can be either an integer in case of a single packet in the stream, or a set of packets in case of IMIX type or similar. This keyword returns the average framesize.

Arguments: - framesize - Framesize. Type: integer or string

Example:

| Get Frame Size | IMIX_v4_1 |


Run Keyword If  '${framesize}' == 'IMIX_v4_1'  Return From Keyword  353.83333
Return From Keyword  ${framesize}
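The 353.83333 value is consistent with the common 7:4:1 IMIX mix of 64 B, 570 B and 1518 B frames; a quick check (the mix itself is an assumption, not spelled out by the keyword):

```python
# Assumed IMIX_v4_1 composition: 7x 64 B, 4x 570 B, 1x 1518 B per 12 frames
imix_v4_1 = {64: 7, 570: 4, 1518: 1}
avg = sum(size * n for size, n in imix_v4_1.items()) / sum(imix_v4_1.values())
print(round(avg, 5))  # 353.83333
```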

Setup performance global Variables

Setup suite variables that are used across performance testing.

_NOTE:_ This KW sets following suite variables: - glob_loss_acceptance - Loss acceptance threshold - glob_loss_acceptance_type - Loss acceptance threshold type - glob_vm_image - Guest VM disk image


Set Suite Variable  ${glob_loss_acceptance}  0.5
Set Suite Variable  ${glob_loss_acceptance_type}  percentage
Set Suite Variable  ${glob_vm_image}  /var/lib/vm/csit-nested-1.5.img

2-node circular Topology Variables Setup

Compute path for testing on two given nodes in circular topology and set corresponding suite variables.

_NOTE:_ This KW sets following suite variables: - tg - TG node - tg_if1 - 1st TG interface towards DUT. - tg_if2 - 2nd TG interface towards DUT. - dut1 - DUT1 node - dut1_if1 - 1st DUT interface towards TG. - dut1_if2 - 2nd DUT interface towards TG.


Append Nodes  ${nodes['TG']}  ${nodes['DUT1']}  ${nodes['TG']}
Compute Path
${tg_if1}  ${tg}=  Next Interface
${dut1_if1}  ${dut1}=  Next Interface
${dut1_if2}  ${dut1}=  Next Interface
${tg_if2}  ${tg}=  Next Interface
Set Suite Variable  ${tg}
Set Suite Variable  ${tg_if1}
Set Suite Variable  ${tg_if2}
Set Suite Variable  ${dut1}
Set Suite Variable  ${dut1_if1}
Set Suite Variable  ${dut1_if2}

3-node circular Topology Variables Setup

Compute path for testing on three given nodes in circular topology and set corresponding suite variables.

_NOTE:_ This KW sets following suite variables: - tg - TG node - tg_if1 - TG interface towards DUT1. - tg_if2 - TG interface towards DUT2. - dut1 - DUT1 node - dut1_if1 - DUT1 interface towards TG. - dut1_if2 - DUT1 interface towards DUT2. - dut2 - DUT2 node - dut2_if1 - DUT2 interface towards TG. - dut2_if2 - DUT2 interface towards DUT1.


Append Nodes  ${nodes['TG']}  ${nodes['DUT1']}  ${nodes['DUT2']}  ${nodes['TG']}
Compute Path
${tg_if1}  ${tg}=  Next Interface
${dut1_if1}  ${dut1}=  Next Interface
${dut1_if2}  ${dut1}=  Next Interface
${dut2_if1}  ${dut2}=  Next Interface
${dut2_if2}  ${dut2}=  Next Interface
${tg_if2}  ${tg}=  Next Interface
Set Suite Variable  ${tg}
Set Suite Variable  ${tg_if1}
Set Suite Variable  ${tg_if2}
Set Suite Variable  ${dut1}
Set Suite Variable  ${dut1_if1}
Set Suite Variable  ${dut1_if2}
Set Suite Variable  ${dut2}
Set Suite Variable  ${dut2_if1}
Set Suite Variable  ${dut2_if2}

2-node circular Topology Variables Setup with DUT interface model

Compute path for testing on two given nodes in circular topology based on interface model provided as an argument and set corresponding suite variables.

Arguments: - iface_model - Interface model. Type: string

_NOTE:_ This KW sets following suite variables: - tg - TG node - tg_if1 - 1st TG interface towards DUT. - tg_if2 - 2nd TG interface towards DUT. - dut1 - DUT1 node - dut1_if1 - 1st DUT interface towards TG. - dut1_if2 - 2nd DUT interface towards TG.

Example:

| 2-node circular Topology Variables Setup with DUT interface model | Intel-X520-DA2 |


${iface_model_list}=  Create list  ${iface_model}
Append Node  ${nodes['TG']}
Append Node  ${nodes['DUT1']}  filter_list=${iface_model_list}
Append Node  ${nodes['TG']}
Compute Path  always_same_link=${FALSE}
${tg_if1}  ${tg}=  Next Interface
${dut1_if1}  ${dut1}=  Next Interface
${dut1_if2}  ${dut1}=  Next Interface
${tg_if2}  ${tg}=  Next Interface
Set Suite Variable  ${tg}
Set Suite Variable  ${tg_if1}
Set Suite Variable  ${tg_if2}
Set Suite Variable  ${dut1}
Set Suite Variable  ${dut1_if1}
Set Suite Variable  ${dut1_if2}

3-node circular Topology Variables Setup with DUT interface model

Compute path for testing on three given nodes in circular topology based on interface model provided as an argument and set corresponding suite variables.

Arguments: - iface_model - Interface model. Type: string

_NOTE:_ This KW sets following suite variables: - tg - TG node - tg_if1 - TG interface towards DUT1. - tg_if2 - TG interface towards DUT2. - dut1 - DUT1 node - dut1_if1 - DUT1 interface towards TG. - dut1_if2 - DUT1 interface towards DUT2. - dut2 - DUT2 node - dut2_if1 - DUT2 interface towards TG. - dut2_if2 - DUT2 interface towards DUT1.

Example:

| 3-node circular Topology Variables Setup with DUT interface model | Intel-X520-DA2 |


${iface_model_list}=  Create list  ${iface_model}
Append Node  ${nodes['TG']}
Append Node  ${nodes['DUT1']}  filter_list=${iface_model_list}
Append Node  ${nodes['DUT2']}  filter_list=${iface_model_list}
Append Node  ${nodes['TG']}
Compute Path
${tg_if1}  ${tg}=  Next Interface
${dut1_if1}  ${dut1}=  Next Interface
${dut1_if2}  ${dut1}=  Next Interface
${dut2_if1}  ${dut2}=  Next Interface
${dut2_if2}  ${dut2}=  Next Interface
${tg_if2}  ${tg}=  Next Interface
Set Suite Variable  ${tg}
Set Suite Variable  ${tg_if1}
Set Suite Variable  ${tg_if2}
Set Suite Variable  ${dut1}
Set Suite Variable  ${dut1_if1}
Set Suite Variable  ${dut1_if2}
Set Suite Variable  ${dut2}
Set Suite Variable  ${dut2_if1}
Set Suite Variable  ${dut2_if2}

VPP interfaces in path are up in a 2-node circular topology

Set UP state on VPP interfaces in path on nodes in 2-node circular topology.


Set Interface State  ${dut1}  ${dut1_if1}  up
Set Interface State  ${dut1}  ${dut1_if2}  up
Vpp Node Interfaces Ready Wait  ${dut1}

VPP interfaces in path are up in a 3-node circular topology

Set UP state on VPP interfaces in path on nodes in 3-node circular topology.


Set Interface State  ${dut1}  ${dut1_if1}  up
Set Interface State  ${dut1}  ${dut1_if2}  up
Set Interface State  ${dut2}  ${dut2_if1}  up
Set Interface State  ${dut2}  ${dut2_if2}  up
Vpp Node Interfaces Ready Wait  ${dut1}
Vpp Node Interfaces Ready Wait  ${dut2}

IPv4 forwarding initialized in a 3-node circular topology

Set UP state on VPP interfaces in path on nodes in 3-node circular topology. Get the interface MAC addresses and setup ARP on all VPP interfaces. Setup IPv4 addresses with /24 prefix on DUT-TG links and /30 prefix on DUT1-DUT2 link. Set routing on both DUT nodes with prefix /24 and next hop of neighbour DUT interface IPv4 address.


Set Interface State  ${dut1}  ${dut1_if1}  up
Set Interface State  ${dut1}  ${dut1_if2}  up
Set Interface State  ${dut2}  ${dut2_if1}  up
Set Interface State  ${dut2}  ${dut2_if2}  up
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
dut1_v4.set_arp  ${dut1_if1}  10.10.10.2  ${tg1_if1_mac}
dut1_v4.set_arp  ${dut1_if2}  1.1.1.2  ${dut2_if1_mac}
dut2_v4.set_arp  ${dut2_if1}  1.1.1.1  ${dut1_if2_mac}
dut2_v4.set_arp  ${dut2_if2}  20.20.20.2  ${tg1_if2_mac}
dut1_v4.set_ip  ${dut1_if1}  10.10.10.1  24
dut1_v4.set_ip  ${dut1_if2}  1.1.1.1  30
dut2_v4.set_ip  ${dut2_if1}  1.1.1.2  30
dut2_v4.set_ip  ${dut2_if2}  20.20.20.1  24
dut1_v4.set_route  20.20.20.0  24  1.1.1.2  ${dut1_if2}
dut2_v4.set_route  10.10.10.0  24  1.1.1.1  ${dut2_if1}
All Vpp Interfaces Ready Wait  ${nodes}
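The addressing plan above can be sanity-checked offline with the standard `ipaddress` module, confirming that both ends of each link fall into one subnet:

```python
import ipaddress

# Each link from the keyword above: (one end, other end)
links = {
    'TG-DUT1':   ('10.10.10.1/24', '10.10.10.2/24'),
    'DUT1-DUT2': ('1.1.1.1/30', '1.1.1.2/30'),
    'DUT2-TG':   ('20.20.20.1/24', '20.20.20.2/24'),
}
for name, (a, b) in links.items():
    net_a = ipaddress.ip_interface(a).network
    net_b = ipaddress.ip_interface(b).network
    assert net_a == net_b, f"{name}: {net_a} != {net_b}"
```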

Scale IPv4 forwarding initialized in a 3-node circular topology

Custom setup of IPv4 topology with a scalable number of IP routes on all DUT nodes in 3-node circular topology.

Arguments: - ${count} - IP route count. Type: integer

Return: - No value returned

Example:

| Scale IPv4 forwarding initialized in a 3-node circular topology | 100000 |


Set Interface State  ${dut1}  ${dut1_if1}  up
Set Interface State  ${dut1}  ${dut1_if2}  up
Set Interface State  ${dut2}  ${dut2_if1}  up
Set Interface State  ${dut2}  ${dut2_if2}  up
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
Add arp on dut  ${dut1}  ${dut1_if1}  1.1.1.1  ${tg1_if1_mac}
Add arp on dut  ${dut1}  ${dut1_if2}  2.2.2.2  ${dut2_if1_mac}
Add arp on dut  ${dut2}  ${dut2_if1}  2.2.2.1  ${dut1_if2_mac}
Add arp on dut  ${dut2}  ${dut2_if2}  3.3.3.1  ${tg1_if2_mac}
IP addresses are set on interfaces  ${dut1}  ${dut1_if1}  1.1.1.2  30
IP addresses are set on interfaces  ${dut1}  ${dut1_if2}  2.2.2.1  30
IP addresses are set on interfaces  ${dut2}  ${dut2_if1}  2.2.2.2  30
IP addresses are set on interfaces  ${dut2}  ${dut2_if2}  3.3.3.2  30
Vpp Route Add  ${dut1}  10.0.0.0  32  1.1.1.1  ${dut1_if1}  count=${count}
Vpp Route Add  ${dut1}  20.0.0.0  32  2.2.2.2  ${dut1_if2}  count=${count}
Vpp Route Add  ${dut2}  10.0.0.0  32  2.2.2.1  ${dut2_if1}  count=${count}
Vpp Route Add  ${dut2}  20.0.0.0  32  3.3.3.1  ${dut2_if2}  count=${count}
All Vpp Interfaces Ready Wait  ${nodes}
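Assuming `count=` increments the destination address by one per installed route (an assumption about the underlying `Vpp Route Add` implementation), the generated prefix list can be sketched as:

```python
import ipaddress

def sequential_routes(base, count):
    """List `count` consecutive /32 destinations starting at `base`."""
    start = ipaddress.IPv4Address(base)
    return [f"{start + i}/32" for i in range(count)]

print(sequential_routes('10.0.0.0', 3))  # ['10.0.0.0/32', '10.0.0.1/32', '10.0.0.2/32']
```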

IPv4 forwarding with vhost initialized in a 3-node circular topology

Create vhost-user interfaces in VPP. Set UP state of all VPP interfaces in path on nodes in 3-node circular topology. Create 2 FIB tables on each DUT with multipath routing. Assign pair of Physical and Virtual interfaces on both nodes to each FIB table. Setup IPv4 addresses with /30 prefix on DUT-TG links and /30 prefix on DUT1-DUT2 link. Set routing on all DUT nodes in all FIB tables with prefix /24 and next hop of neighbour IPv4 address. Setup ARP on all VPP interfaces.

Arguments: - sock1 - Sock path for first Vhost-User interface. Type: string - sock2 - Sock path for second Vhost-User interface. Type: string

Return: - No value returned

Example:

| IPv4 forwarding with vhost initialized in a 3-node circular topology | /tmp/sock1 | /tmp/sock2 |


VPP interfaces in path are up in a 3-node circular topology
VPP Vhost interfaces for L2BD forwarding are setup  ${dut1}  ${sock1}  ${sock2}
${dut1_vif1}=  Set Variable  ${vhost_if1}
${dut1_vif2}=  Set Variable  ${vhost_if2}
Set Interface State  ${dut1}  ${dut1_vif1}  up
Set Interface State  ${dut1}  ${dut1_vif2}  up
VPP Vhost interfaces for L2BD forwarding are setup  ${dut2}  ${sock1}  ${sock2}
${dut2_vif1}=  Set Variable  ${vhost_if1}
${dut2_vif2}=  Set Variable  ${vhost_if2}
Set Interface State  ${dut2}  ${dut2_vif1}  up
Set Interface State  ${dut2}  ${dut2_vif2}  up
${dut1_vif1_idx}=  Get Interface SW Index  ${dut1}  ${dut1_vif1}
${dut1_vif2_idx}=  Get Interface SW Index  ${dut1}  ${dut1_vif2}
${dut1_if1_idx}=  Get Interface SW Index  ${dut1}  ${dut1_if1}
${dut1_if2_idx}=  Get Interface SW Index  ${dut1}  ${dut1_if2}
${dut2_vif1_idx}=  Get Interface SW Index  ${dut2}  ${dut2_vif1}
${dut2_vif2_idx}=  Get Interface SW Index  ${dut2}  ${dut2_vif2}
${dut2_if1_idx}=  Get Interface SW Index  ${dut2}  ${dut2_if1}
${dut2_if2_idx}=  Get Interface SW Index  ${dut2}  ${dut2_if2}
Add fib table  ${dut1}  20.20.20.0  24  ${fib_table_1}  via 4.4.4.2 sw_if_index ${dut1_vif1_idx} multipath
Add fib table  ${dut1}  10.10.10.0  24  ${fib_table_1}  via 1.1.1.2 sw_if_index ${dut1_if1_idx} multipath
Add fib table  ${dut1}  20.20.20.0  24  ${fib_table_2}  via 2.2.2.2 sw_if_index ${dut1_if2_idx} multipath
Add fib table  ${dut1}  10.10.10.0  24  ${fib_table_2}  via 5.5.5.2 sw_if_index ${dut1_vif2_idx} multipath
Add fib table  ${dut2}  10.10.10.0  24  ${fib_table_1}  via 2.2.2.1 sw_if_index ${dut2_if1_idx} multipath
Add fib table  ${dut2}  20.20.20.0  24  ${fib_table_1}  via 4.4.4.1 sw_if_index ${dut2_vif1_idx} multipath
Add fib table  ${dut2}  10.10.10.0  24  ${fib_table_2}  via 5.5.5.2 sw_if_index ${dut2_vif2_idx} multipath
Add fib table  ${dut2}  20.20.20.0  24  ${fib_table_2}  via 3.3.3.2 sw_if_index ${dut2_if2_idx} multipath
Assign Interface To Fib Table  ${dut1}  ${dut1_if1}  ${fib_table_1}
Assign Interface To Fib Table  ${dut1}  ${dut1_vif1}  ${fib_table_1}
Assign Interface To Fib Table  ${dut1}  ${dut1_if2}  ${fib_table_2}
Assign Interface To Fib Table  ${dut1}  ${dut1_vif2}  ${fib_table_2}
Assign Interface To Fib Table  ${dut2}  ${dut2_if1}  ${fib_table_1}
Assign Interface To Fib Table  ${dut2}  ${dut2_vif1}  ${fib_table_1}
Assign Interface To Fib Table  ${dut2}  ${dut2_if2}  ${fib_table_2}
Assign Interface To Fib Table  ${dut2}  ${dut2_vif2}  ${fib_table_2}
IP addresses are set on interfaces  ${dut1}  ${dut1_if1}  1.1.1.2  30
IP addresses are set on interfaces  ${dut1}  ${dut1_if2}  2.2.2.1  30
IP addresses are set on interfaces  ${dut1}  ${dut1_vif1}  4.4.4.1  30
IP addresses are set on interfaces  ${dut1}  ${dut1_vif2}  5.5.5.1  30
IP addresses are set on interfaces  ${dut2}  ${dut2_if1}  2.2.2.2  30
IP addresses are set on interfaces  ${dut2}  ${dut2_if2}  3.3.3.1  30
IP addresses are set on interfaces  ${dut2}  ${dut2_vif1}  4.4.4.1  30
IP addresses are set on interfaces  ${dut2}  ${dut2_vif2}  5.5.5.1  30
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
${dut1_vif1_mac}=  Get Vhost User Mac By Sw Index  ${dut1}  ${dut1_vif1_idx}
${dut1_vif2_mac}=  Get Vhost User Mac By Sw Index  ${dut1}  ${dut1_vif2_idx}
${dut2_vif1_mac}=  Get Vhost User Mac By Sw Index  ${dut2}  ${dut2_vif1_idx}
${dut2_vif2_mac}=  Get Vhost User Mac By Sw Index  ${dut2}  ${dut2_vif2_idx}
Set Test Variable  ${dut1_vif1_mac}
Set Test Variable  ${dut1_vif2_mac}
Set Test Variable  ${dut2_vif1_mac}
Set Test Variable  ${dut2_vif2_mac}
Add arp on dut  ${dut1}  ${dut1_if1}  1.1.1.1  ${tg1_if1_mac}  vrf=${fib_table_1}
Add arp on dut  ${dut1}  ${dut1_if2}  2.2.2.2  ${dut2_if1_mac}  vrf=${fib_table_2}
Add arp on dut  ${dut1}  ${dut1_vif1}  4.4.4.2  52:54:00:00:04:01  vrf=${fib_table_1}
Add arp on dut  ${dut1}  ${dut1_vif2}  5.5.5.2  52:54:00:00:04:02  vrf=${fib_table_2}
Add arp on dut  ${dut2}  ${dut2_if1}  2.2.2.1  ${dut1_if2_mac}  vrf=${fib_table_1}
Add arp on dut  ${dut2}  ${dut2_if2}  3.3.3.2  ${tg1_if2_mac}  vrf=${fib_table_2}
Add arp on dut  ${dut2}  ${dut2_vif1}  4.4.4.2  52:54:00:00:04:01  vrf=${fib_table_1}
Add arp on dut  ${dut2}  ${dut2_vif2}  5.5.5.2  52:54:00:00:04:02  vrf=${fib_table_2}
Vpp Route Add  ${dut1}  20.20.20.0  24  4.4.4.2  ${dut1_vif1}  vrf=${fib_table_1}
Vpp Route Add  ${dut1}  10.10.10.0  24  1.1.1.1  ${dut1_if1}  vrf=${fib_table_1}
Vpp Route Add  ${dut1}  20.20.20.0  24  2.2.2.2  ${dut1_if2}  vrf=${fib_table_2}
Vpp Route Add  ${dut1}  10.10.10.0  24  5.5.5.2  ${dut1_vif2}  vrf=${fib_table_2}
Vpp Route Add  ${dut2}  20.20.20.0  24  4.4.4.2  ${dut2_vif1}  vrf=${fib_table_1}
Vpp Route Add  ${dut2}  10.10.10.0  24  2.2.2.1  ${dut2_if1}  vrf=${fib_table_1}
Vpp Route Add  ${dut2}  20.20.20.0  24  3.3.3.2  ${dut2_if2}  vrf=${fib_table_2}
Vpp Route Add  ${dut2}  10.10.10.0  24  5.5.5.2  ${dut2_vif2}  vrf=${fib_table_2}

IPv4 policer 2r3c-${t} initialized in a 3-node circular topology

Setup of 2r3c color-aware or color-blind policer with dst ip match on all DUT nodes in 3-node circular topology. Policer is applied on links TG - DUT1 and DUT2 - TG.

Arguments: - ${t} - Policer type: 'ca' for color-aware, otherwise color-blind. Type: string

${dscp}=  DSCP AF22
Policer Set Name  policer1
Policer Set CIR  ${cir}
Policer Set EIR  ${eir}
Policer Set CB  ${cb}
Policer Set EB  ${eb}
Policer Set Rate Type pps
Policer Set Round Type Closest
Policer Set Type 2R3C 2698
Policer Set Conform Action Transmit
Policer Set Exceed Action Mark and Transmit  ${dscp}
Policer Set Violate Action Transmit
Run Keyword If  '${t}' == 'ca'  Policer Enable Color Aware
Policer Classify Set Precolor Exceed
Policer Set Node  ${dut1}
Policer Classify Set Interface  ${dut1_if1}
Policer Classify Set Match IP  20.20.20.2  ${False}
Policer Set Configuration
Policer Set Node  ${dut2}
Policer Classify Set Interface  ${dut2_if2}
Policer Classify Set Match IP  10.10.10.2  ${False}
Policer Set Configuration
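The 2r3c marking logic configured above can be illustrated with a color-blind token-bucket sketch in the spirit of RFC 2698. This is a hypothetical helper class, not the VPP implementation; names follow the keyword's CIR/EIR/CB/EB parameters (RFC 2698 calls the second rate PIR and its burst PBS).

```python
class TrTCM:
    """Color-blind two-rate three-color marker sketch (RFC 2698 style)."""

    def __init__(self, cir, eir, cb, eb):
        self.cir, self.eir = cir, eir  # committed / excess rate, units per second
        self.cb, self.eb = cb, eb      # bucket sizes
        self.tc, self.te = cb, eb      # token buckets start full
        self.last = 0.0

    def mark(self, size, now):
        delta, self.last = now - self.last, now
        self.tc = min(self.cb, self.tc + self.cir * delta)
        self.te = min(self.eb, self.te + self.eir * delta)
        if self.te < size:
            return 'red'      # violate-action: transmit in the keyword above
        if self.tc < size:
            self.te -= size
            return 'yellow'   # exceed-action: mark DSCP AF22 and transmit
        self.tc -= size
        self.te -= size
        return 'green'        # conform-action: transmit
```

A packet is red when even the excess bucket cannot cover it, yellow when only the excess bucket can, and green otherwise.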

IPv6 forwarding initialized in a 3-node circular topology

Set UP state on VPP interfaces in path on nodes in 3-node circular topology. Get the interface MAC addresses and setup neighbour on all VPP interfaces. Setup IPv6 addresses with /64 prefix on all interfaces. Set routing on both DUT nodes with prefix /64 and next hop of neighbour DUT interface IPv6 address.


${prefix}=  Set Variable  64
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if1}  2001:1::1  ${prefix}
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if2}  2001:3::1  ${prefix}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if1}  2001:3::2  ${prefix}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if2}  2001:2::1  ${prefix}
Vpp nodes ra suppress link layer  ${nodes}
Add Ip Neighbor  ${dut1}  ${dut1_if1}  2001:1::2  ${tg1_if1_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if2}  2001:2::2  ${tg1_if2_mac}
Add Ip Neighbor  ${dut1}  ${dut1_if2}  2001:3::2  ${dut2_if1_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if1}  2001:3::1  ${dut1_if2_mac}
Vpp Route Add  ${dut1}  2001:2::0  ${prefix}  2001:3::2  ${dut1_if2}
Vpp Route Add  ${dut2}  2001:1::0  ${prefix}  2001:3::1  ${dut2_if1}

Scale IPv6 forwarding initialized in a 3-node circular topology

Custom setup of IPv6 topology with a scalable number of IP routes on all DUT nodes in 3-node circular topology.

Arguments: - ${count} - IP route count. Type: integer

Return: - No value returned

Example:

| Scale IPv6 forwarding initialized in a 3-node circular topology | 100000 |


${subn_prefix}=  Set Variable  64
${host_prefix}=  Set Variable  128
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if1}  2001:3::1  ${subn_prefix}
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if2}  2001:4::1  ${subn_prefix}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if1}  2001:4::2  ${subn_prefix}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if2}  2001:5::1  ${subn_prefix}
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
Vpp nodes ra suppress link layer  ${nodes}
Add Ip Neighbor  ${dut1}  ${dut1_if1}  2001:3::2  ${tg1_if1_mac}
Add Ip Neighbor  ${dut1}  ${dut1_if2}  2001:4::2  ${dut2_if1_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if1}  2001:4::1  ${dut1_if2_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if2}  2001:5::2  ${tg1_if2_mac}
Vpp Route Add  ${dut1}  2001:2::0  ${host_prefix}  2001:4::2  interface=${dut1_if2}  count=${count}
Vpp Route Add  ${dut1}  2001:1::0  ${host_prefix}  2001:3::2  interface=${dut1_if1}  count=${count}
Vpp Route Add  ${dut2}  2001:1::0  ${host_prefix}  2001:4::1  interface=${dut2_if1}  count=${count}
Vpp Route Add  ${dut2}  2001:2::0  ${host_prefix}  2001:5::2  interface=${dut2_if2}  count=${count}

IPv6 iAcl whitelist initialized in a 3-node circular topology

Create an L3 classify table on each DUT. IPv6 iAcl security whitelist ingress /64 filter entries are applied on links TG - DUT1 and DUT2 - TG.


${table_idx}  ${skip_n}  ${match_n}=  Vpp Creates Classify Table L3  ${dut1}  ip6  dst
Vpp Configures Classify Session L3  ${dut1}  permit  ${table_idx}  ${skip_n}  ${match_n}  ip6  dst  2001:2::2
Vpp Enable Input Acl Interface  ${dut1}  ${dut1_if1}  ip6  ${table_idx}
${table_idx}  ${skip_n}  ${match_n}=  Vpp Creates Classify Table L3  ${dut2}  ip6  dst
Vpp Configures Classify Session L3  ${dut2}  permit  ${table_idx}  ${skip_n}  ${match_n}  ip6  dst  2001:1::2
Vpp Enable Input Acl Interface  ${dut2}  ${dut2_if2}  ip6  ${table_idx}

L2 xconnect initialized in a 3-node circular topology

Setup L2 xconnect topology by cross connecting two interfaces on each DUT. Interfaces are brought up.


L2 setup xconnect on DUT  ${dut1}  ${dut1_if1}  ${dut1_if2}
L2 setup xconnect on DUT  ${dut2}  ${dut2_if1}  ${dut2_if2}
All Vpp Interfaces Ready Wait  ${nodes}

L2 xconnect with VXLANoIPv4 initialized in a 3-node circular topology

Setup L2 xconnect topology with VXLANoIPv4 by cross connecting physical and VXLAN interfaces on each DUT. All interfaces are brought up. IPv4 addresses with prefix /24 are configured on interfaces between DUTs. VXLAN tunnel endpoints use the same IPv4 addresses as those interfaces.


VPP interfaces in path are up in a 3-node circular topology
IP addresses are set on interfaces  ${dut1}  ${dut1_if2}  172.16.0.1  24
IP addresses are set on interfaces  ${dut2}  ${dut2_if1}  172.16.0.2  24
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
Add arp on dut  ${dut1}  ${dut1_if2}  172.16.0.2  ${dut2_if1_mac}
Add arp on dut  ${dut2}  ${dut2_if1}  172.16.0.1  ${dut1_if2_mac}
${dut1s_vxlan}=  Create VXLAN interface  ${dut1}  24  172.16.0.1  172.16.0.2
L2 setup xconnect on DUT  ${dut1}  ${dut1_if1}  ${dut1s_vxlan}
${dut2s_vxlan}=  Create VXLAN interface  ${dut2}  24  172.16.0.2  172.16.0.1
L2 setup xconnect on DUT  ${dut2}  ${dut2_if2}  ${dut2s_vxlan}
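When sizing frames for the DUT1-DUT2 link it helps to remember the per-packet cost of the VXLANoIPv4 encapsulation; the header sizes below are standard values:

```python
# Bytes added to every frame by VXLAN-over-IPv4 encapsulation
overhead_b = {
    'outer Ethernet': 14,
    'outer IPv4': 20,
    'outer UDP': 8,
    'VXLAN header': 8,
}
print(sum(overhead_b.values()))  # 50
```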

L2 xconnect with Vhost-User initialized in a 3-node circular topology

Create two Vhost-User interfaces on all defined VPP nodes. Cross connect each Vhost interface with one physical interface.

Arguments: - sock1 - Socket path for first Vhost-User interface. Type: string - sock2 - Socket path for second Vhost-User interface. Type: string

Example:

| L2 xconnect with Vhost-User initialized in a 3-node circular topology | /tmp/sock1 | /tmp/sock2 |


VPP Vhost interfaces for L2BD forwarding are setup  ${dut1}  ${sock1}  ${sock2}
L2 setup xconnect on DUT  ${dut1}  ${dut1_if1}  ${vhost_if1}
L2 setup xconnect on DUT  ${dut1}  ${dut1_if2}  ${vhost_if2}
VPP Vhost interfaces for L2BD forwarding are setup  ${dut2}  ${sock1}  ${sock2}
L2 setup xconnect on DUT  ${dut2}  ${dut2_if1}  ${vhost_if1}
L2 setup xconnect on DUT  ${dut2}  ${dut2_if2}  ${vhost_if2}
All Vpp Interfaces Ready Wait  ${nodes}

L2 xconnect with Vhost-User and VLAN initialized in a 3-node circular topology

Create two Vhost-User interfaces on all defined VPP nodes. Cross connect each Vhost interface with one physical interface. Setup VLAN between DUTs. All interfaces are brought up.

Arguments: - sock1 - Socket path for first Vhost-User interface. Type: string - sock2 - Socket path for second Vhost-User interface. Type: string - subid - ID of the sub-interface to be created. Type: string - tag_rewrite - Method of tag rewrite. Type: string

Example:

| L2 xconnect with Vhost-User and VLAN initialized in a 3-node circular topology | /tmp/sock1 | /tmp/sock2 | 10 | pop-1 |


VPP interfaces in path are up in a 3-node circular topology
VLAN dot1q subinterfaces initialized on 3-node topology  ${dut1}  ${dut1_if2}  ${dut2}  ${dut2_if1}  ${subid}
L2 tag rewrite method setup on interfaces  ${dut1}  ${subif_index_1}  ${dut2}  ${subif_index_2}  ${tag_rewrite}
VPP Vhost interfaces for L2BD forwarding are setup  ${dut1}  ${sock1}  ${sock2}
L2 setup xconnect on DUT  ${dut1}  ${dut1_if1}  ${vhost_if1}
L2 setup xconnect on DUT  ${dut1}  ${subif_index_1}  ${vhost_if2}
VPP Vhost interfaces for L2BD forwarding are setup  ${dut2}  ${sock1}  ${sock2}
L2 setup xconnect on DUT  ${dut2}  ${subif_index_2}  ${vhost_if1}
L2 setup xconnect on DUT  ${dut2}  ${dut2_if2}  ${vhost_if2}
All Vpp Interfaces Ready Wait  ${nodes}

L2 bridge domain initialized in a 3-node circular topology

Setup L2 BD topology by adding two interfaces on each DUT into a bridge domain that is created automatically with index 1. Learning is enabled. Interfaces are brought up.


Vpp l2bd forwarding setup  ${dut1}  ${dut1_if1}  ${dut1_if2}
Vpp l2bd forwarding setup  ${dut2}  ${dut2_if1}  ${dut2_if2}
All Vpp Interfaces Ready Wait  ${nodes}

L2 bridge domains with Vhost-User initialized in a 3-node circular topology

Create two Vhost-User interfaces on all defined VPP nodes. Add each Vhost-User interface into an L2 bridge domain with learning enabled together with one physical interface.

Arguments: - bd_id1 - Bridge domain ID. Type: integer - bd_id2 - Bridge domain ID. Type: integer - sock1 - Sock path for first Vhost-User interface. Type: string - sock2 - Sock path for second Vhost-User interface. Type: string

Example:

| L2 bridge domains with Vhost-User initialized in a 3-node circular topology | 1 | 2 | /tmp/sock1 | /tmp/sock2 |


VPP Vhost interfaces for L2BD forwarding are setup  ${dut1}  ${sock1}  ${sock2}
Interface is added to bridge domain  ${dut1}  ${dut1_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut1}  ${vhost_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut1}  ${dut1_if2}  ${bd_id2}
Interface is added to bridge domain  ${dut1}  ${vhost_if2}  ${bd_id2}
VPP Vhost interfaces for L2BD forwarding are setup  ${dut2}  ${sock1}  ${sock2}
Interface is added to bridge domain  ${dut2}  ${dut2_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut2}  ${vhost_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut2}  ${dut2_if2}  ${bd_id2}
Interface is added to bridge domain  ${dut2}  ${vhost_if2}  ${bd_id2}
All Vpp Interfaces Ready Wait  ${nodes}

L2 bridge domain with VXLANoIPv4 initialized in a 3-node circular topology

Setup L2 bridge domain topology with VXLANoIPv4 by connecting physical and VXLAN interfaces on each DUT. All interfaces are brought up. IPv4 addresses with prefix /24 are configured on interfaces between DUTs. VXLAN tunnel endpoints use the same IPv4 addresses as those interfaces.


VPP interfaces in path are up in a 3-node circular topology
IP addresses are set on interfaces  ${dut1}  ${dut1_if2}  172.16.0.1  24
IP addresses are set on interfaces  ${dut2}  ${dut2_if1}  172.16.0.2  24
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
Add arp on dut  ${dut1}  ${dut1_if2}  172.16.0.2  ${dut2_if1_mac}
Add arp on dut  ${dut2}  ${dut2_if1}  172.16.0.1  ${dut1_if2_mac}
${dut1s_vxlan}=  Create VXLAN interface  ${dut1}  24  172.16.0.1  172.16.0.2
${dut2s_vxlan}=  Create VXLAN interface  ${dut2}  24  172.16.0.2  172.16.0.1
Vpp l2bd forwarding setup  ${dut1}  ${dut1_if1}  ${dut1s_vxlan}
Vpp l2bd forwarding setup  ${dut2}  ${dut2_if2}  ${dut2s_vxlan}
All Vpp Interfaces Ready Wait  ${nodes}

L2 bridge domains with Vhost-User and VXLANoIPv4 initialized in a 3-node circular topology

Create two Vhost-User interfaces on all defined VPP nodes. Add each Vhost-User interface into an L2 bridge domain with learning enabled, together with a physical interface. Set up VXLANoIPv4 between DUTs by connecting physical and VXLAN interfaces on each DUT. All interfaces are brought up. IPv4 addresses with prefix /24 are configured on interfaces between DUTs. VXLAN sub-interfaces have the same IPv4 addresses as the interfaces.

Arguments: - bd_id1 - Bridge domain ID. Type: integer - bd_id2 - Bridge domain ID. Type: integer - sock1 - Socket path for first Vhost-User interface. Type: string - sock2 - Socket path for second Vhost-User interface. Type: string

Example:

| L2 bridge domains with Vhost-User and VXLANoIPv4 initialized in a 3-node circular topology | 1 | 2 | /tmp/sock1 | /tmp/sock2 |


VPP interfaces in path are up in a 3-node circular topology
IP addresses are set on interfaces  ${dut1}  ${dut1_if2}  172.16.0.1  24
IP addresses are set on interfaces  ${dut2}  ${dut2_if1}  172.16.0.2  24
${dut1s_vxlan}=  Create VXLAN interface  ${dut1}  24  172.16.0.1  172.16.0.2
${dut2s_vxlan}=  Create VXLAN interface  ${dut2}  24  172.16.0.2  172.16.0.1
VPP Vhost interfaces for L2BD forwarding are setup  ${dut1}  ${sock1}  ${sock2}
Interface is added to bridge domain  ${dut1}  ${dut1_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut1}  ${vhost_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut1}  ${vhost_if2}  ${bd_id2}
Interface is added to bridge domain  ${dut1}  ${dut1s_vxlan}  ${bd_id2}
VPP Vhost interfaces for L2BD forwarding are setup  ${dut2}  ${sock1}  ${sock2}
Interface is added to bridge domain  ${dut2}  ${dut2s_vxlan}  ${bd_id1}
Interface is added to bridge domain  ${dut2}  ${vhost_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut2}  ${vhost_if2}  ${bd_id2}
Interface is added to bridge domain  ${dut2}  ${dut2_if2}  ${bd_id2}
All Vpp Interfaces Ready Wait  ${nodes}

L2 bridge domains with Vhost-User and VLAN initialized in a 3-node circular topology

Create two Vhost-User interfaces on all defined VPP nodes. Add each Vhost-User interface into an L2 bridge domain with learning enabled, together with a physical interface. Set up VLAN between DUTs. All interfaces are brought up.

Arguments: - bd_id1 - Bridge domain ID. Type: integer - bd_id2 - Bridge domain ID. Type: integer - sock1 - Socket path for first Vhost-User interface. Type: string - sock2 - Socket path for second Vhost-User interface. Type: string - subid - ID of the sub-interface to be created. Type: string - tag_rewrite - Method of tag rewrite. Type: string

Example:

| L2 bridge domains with Vhost-User and VLAN initialized in a 3-node circular topology | 1 | 2 | /tmp/sock1 | /tmp/sock2 | 10 | pop-1 |


VPP interfaces in path are up in a 3-node circular topology
VLAN dot1q subinterfaces initialized on 3-node topology  ${dut1}  ${dut1_if2}  ${dut2}  ${dut2_if1}  ${subid}
L2 tag rewrite method setup on interfaces  ${dut1}  ${subif_index_1}  ${dut2}  ${subif_index_2}  ${tag_rewrite}
VPP Vhost interfaces for L2BD forwarding are setup  ${dut1}  ${sock1}  ${sock2}
Interface is added to bridge domain  ${dut1}  ${dut1_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut1}  ${vhost_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut1}  ${vhost_if2}  ${bd_id2}
Interface is added to bridge domain  ${dut1}  ${subif_index_1}  ${bd_id2}
VPP Vhost interfaces for L2BD forwarding are setup  ${dut2}  ${sock1}  ${sock2}
Interface is added to bridge domain  ${dut2}  ${subif_index_2}  ${bd_id1}
Interface is added to bridge domain  ${dut2}  ${vhost_if1}  ${bd_id1}
Interface is added to bridge domain  ${dut2}  ${vhost_if2}  ${bd_id2}
Interface is added to bridge domain  ${dut2}  ${dut2_if2}  ${bd_id2}
All Vpp Interfaces Ready Wait  ${nodes}

2-node Performance Suite Setup with DUT’s NIC model

Suite preparation phase that sets up the default startup configuration of VPP on all DUTs, updates interfaces on all nodes, sets up global variables used in test cases based on the interface model provided as an argument, and initializes the traffic generator.

Arguments: - topology_type - Topology type. Type: string - nic_model - Interface model. Type: string

Example:

| 2-node Performance Suite Setup with DUT’s NIC model | L2 | Intel-X520-DA2 |


Show vpp version on all DUTs
Setup performance global Variables
2-node circular Topology Variables Setup with DUT interface model  ${nic_model}
Setup default startup configuration of VPP on all DUTs
Initialize traffic generator  ${tg}  ${tg_if1}  ${tg_if2}  ${dut1}  ${dut1_if1}  ${dut1}  ${dut1_if2}  ${topology_type}

3-node Performance Suite Setup with DUT’s NIC model

Suite preparation phase that sets up the default startup configuration of VPP on all DUTs, updates interfaces on all nodes, sets up global variables used in test cases based on the interface model provided as an argument, and initializes the traffic generator.

Arguments: - topology_type - Topology type. Type: string - nic_model - Interface model. Type: string

Example:

| 3-node Performance Suite Setup with DUT’s NIC model | L2 | Intel-X520-DA2 |


Show vpp version on all DUTs
Setup performance global Variables
3-node circular Topology Variables Setup with DUT interface model  ${nic_model}
Setup default startup configuration of VPP on all DUTs
Initialize traffic generator  ${tg}  ${tg_if1}  ${tg_if2}  ${dut1}  ${dut1_if1}  ${dut2}  ${dut2_if2}  ${topology_type}

3-node Performance Suite Teardown

Suite teardown phase with traffic generator teardown.


Teardown traffic generator  ${tg}

Find NDR using linear search and pps

Find throughput by using RFC2544 linear search with non-drop rate (NDR).

Arguments: - framesize - L2 Frame Size [B]. Type: integer - start_rate - Initial start rate [pps]. Type: float - step_rate - Step of linear search [pps]. Type: float - topology_type - Topology type. Type: string - min_rate - Lower limit of search [pps]. Type: float - max_rate - Upper limit of search [pps]. Type: float

Return: - No value returned

Example:

| Find NDR using linear search and pps | 64 | 5000000 | 100000 | 3-node-IPv4 | 100000 | 14880952 |


${duration}=  Set Variable  10
Set Duration  ${duration}
Set Search Rate Boundaries  ${max_rate}  ${min_rate}
Set Search Linear Step  ${step_rate}
Set Search Frame Size  ${framesize}
Set Search Rate Type pps
Linear Search  ${start_rate}  ${topology_type}
${rate_per_stream}  ${lat}=  Verify Search Result
${tmp}=  Create List  100%NDR  ${lat}
${latency}=  Create List  ${tmp}
${rate_50p}=  Evaluate  int(${rate_per_stream}*0.5)
${lat_50p}=  Measure latency pps  ${duration}  ${rate_50p}  ${framesize}  ${topology_type}
${tmp}=  Create List  50%NDR  ${lat_50p}
Append To List  ${latency}  ${tmp}
${rate_10p}=  Evaluate  int(${rate_per_stream}*0.1)
${lat_10p}=  Measure latency pps  ${duration}  ${rate_10p}  ${framesize}  ${topology_type}
${tmp}=  Create List  10%NDR  ${lat_10p}
Append To List  ${latency}  ${tmp}
Display result of NDR search  ${rate_per_stream}  ${framesize}  2  ${latency}
Traffic should pass with no loss  ${duration}  ${rate_per_stream}pps  ${framesize}  ${topology_type}  fail_on_loss=${False}
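For orientation, the linear phase of this keyword can be sketched in Python. This is a minimal sketch, not the CSIT implementation: `send_traffic` is a hypothetical stand-in for the trial performed by the traffic generator, returning the observed packet loss for a run at the given rate [pps].

```python
def linear_search_ndr(start_rate, step_rate, min_rate, max_rate, send_traffic):
    """Step the offered rate down until a trial passes with zero loss."""
    # Clamp the starting rate to the search boundaries.
    rate = min(max(start_rate, min_rate), max_rate)
    while rate >= min_rate:
        loss = send_traffic(rate)
        if loss == 0:
            return rate          # non-drop rate found
        rate -= step_rate        # step down and retry
    return None                  # no NDR within the search window
```

Once the NDR is found, the keyword additionally measures latency at 100 %, 50 %, and 10 % of the discovered per-stream rate.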

Find PDR using linear search and pps

Find throughput by using RFC2544 linear search with partial drop rate (PDR), with the loss acceptance threshold and its type specified by parameters.

Arguments: - framesize - L2 Frame Size [B]. Type: integer - start_rate - Initial start rate [pps]. Type: float - step_rate - Step of linear search [pps]. Type: float - topology_type - Topology type. Type: string - min_rate - Lower limit of search [pps]. Type: float - max_rate - Upper limit of search [pps]. Type: float - loss_acceptance - Accepted loss during search. Type: float - loss_acceptance_type - Percentage or frames. Type: string

Example:

| Find PDR using linear search and pps | 64 | 5000000 | 100000 | 3-node-IPv4 | 100000 | 14880952 | 0.5 | percentage |


${duration}=  Set Variable  10
Set Duration  ${duration}
Set Search Rate Boundaries  ${max_rate}  ${min_rate}
Set Search Linear Step  ${step_rate}
Set Search Frame Size  ${framesize}
Set Search Rate Type pps
Set Loss Acceptance  ${loss_acceptance}
Run Keyword If  '${loss_acceptance_type}' == 'percentage'  Set Loss Acceptance Type Percentage
Linear Search  ${start_rate}  ${topology_type}
${rate_per_stream}  ${lat}=  Verify Search Result
${tmp}=  Create List  100%PDR  ${lat}
${latency}=  Create List  ${tmp}
Display result of PDR search  ${rate_per_stream}  ${framesize}  2  ${loss_acceptance}  ${loss_acceptance_type}  ${latency}
Traffic should pass with partial loss  ${duration}  ${rate_per_stream}pps  ${framesize}  ${topology_type}  ${loss_acceptance}  ${loss_acceptance_type}  fail_on_loss=${False}

Find NDR using binary search and pps

Find throughput by using RFC2544 binary search with non-drop rate (NDR).

Arguments: - framesize - L2 Frame Size [B]. Type: integer - binary_min - Lower boundary of search [pps]. Type: float - binary_max - Upper boundary of search [pps]. Type: float - topology_type - Topology type. Type: string - min_rate - Lower limit of search [pps]. Type: float - max_rate - Upper limit of search [pps]. Type: float - threshold - Threshold to stop search [pps]. Type: integer

Example:

| Find NDR using binary search and pps | 64 | 6000000 | 12000000 | 3-node-IPv4 | 100000 | 14880952 | 50000 |


${duration}=  Set Variable  10
Set Duration  ${duration}
Set Search Rate Boundaries  ${max_rate}  ${min_rate}
Set Search Frame Size  ${framesize}
Set Search Rate Type pps
Set Binary Convergence Threshold  ${threshold}
Binary Search  ${binary_min}  ${binary_max}  ${topology_type}
${rate_per_stream}  ${lat}=  Verify Search Result
${tmp}=  Create List  100%NDR  ${lat}
${latency}=  Create List  ${tmp}
${rate_50p}=  Evaluate  int(${rate_per_stream}*0.5)
${lat_50p}=  Measure latency pps  ${duration}  ${rate_50p}  ${framesize}  ${topology_type}
${tmp}=  Create List  50%NDR  ${lat_50p}
Append To List  ${latency}  ${tmp}
${rate_10p}=  Evaluate  int(${rate_per_stream}*0.1)
${lat_10p}=  Measure latency pps  ${duration}  ${rate_10p}  ${framesize}  ${topology_type}
${tmp}=  Create List  10%NDR  ${lat_10p}
Append To List  ${latency}  ${tmp}
Display result of NDR search  ${rate_per_stream}  ${framesize}  2  ${latency}
Traffic should pass with no loss  ${duration}  ${rate_per_stream}pps  ${framesize}  ${topology_type}  fail_on_loss=${False}
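The binary phase can be sketched as follows; a hedged illustration only, where `passes` is a hypothetical trial predicate (True when a run at the given rate [pps] sees zero loss) and `threshold` corresponds to the Set Binary Convergence Threshold step above.

```python
def binary_search_ndr(lo, hi, threshold, passes):
    """Halve the [lo, hi] rate interval until it is narrower than threshold."""
    best = None
    while hi - lo > threshold:
        mid = (lo + hi) / 2.0
        if passes(mid):
            best = mid   # trial passed: the NDR is at or above mid
            lo = mid
        else:
            hi = mid     # trial lost packets: the NDR is below mid
    return best
```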

Find PDR using binary search and pps

Find throughput by using RFC2544 binary search with partial drop rate (PDR), with the loss acceptance threshold and its type specified by parameters.

Arguments: - framesize - L2 Frame Size [B]. Type: integer - binary_min - Lower boundary of search [pps]. Type: float - binary_max - Upper boundary of search [pps]. Type: float - topology_type - Topology type. Type: string - min_rate - Lower limit of search [pps]. Type: float - max_rate - Upper limit of search [pps]. Type: float - threshold - Threshold to stop search [pps]. Type: integer - loss_acceptance - Accepted loss during search. Type: float - loss_acceptance_type - Percentage or frames. Type: string

Example:

| Find PDR using binary search and pps | 64 | 6000000 | 12000000 | 3-node-IPv4 | 100000 | 14880952 | 50000 | 0.5 | percentage |


${duration}=  Set Variable  10
Set Duration  ${duration}
Set Search Rate Boundaries  ${max_rate}  ${min_rate}
Set Search Frame Size  ${framesize}
Set Search Rate Type pps
Set Loss Acceptance  ${loss_acceptance}
Run Keyword If  '${loss_acceptance_type}' == 'percentage'  Set Loss Acceptance Type Percentage
Set Binary Convergence Threshold  ${threshold}
Binary Search  ${binary_min}  ${binary_max}  ${topology_type}
${rate_per_stream}  ${lat}=  Verify Search Result
${tmp}=  Create List  100%PDR  ${lat}
${latency}=  Create List  ${tmp}
Display result of PDR search  ${rate_per_stream}  ${framesize}  2  ${loss_acceptance}  ${loss_acceptance_type}  ${latency}
Traffic should pass with partial loss  ${duration}  ${rate_per_stream}pps  ${framesize}  ${topology_type}  ${loss_acceptance}  ${loss_acceptance_type}  fail_on_loss=${False}

Find NDR using combined search and pps

Find throughput by using RFC2544 combined search (linear + binary) with non-drop rate (NDR).

Arguments: - framesize - L2 Frame Size [B]. Type: integer - start_rate - Initial start rate [pps]. Type: float - step_rate - Step of linear search [pps]. Type: float - topology_type - Topology type. Type: string - min_rate - Lower limit of search [pps]. Type: float - max_rate - Upper limit of search [pps]. Type: float - threshold - Threshold to stop search [pps]. Type: integer

Example:

| Find NDR using combined search and pps | 64 | 5000000 | 100000 | 3-node-IPv4 | 100000 | 14880952 | 5000 |


${duration}=  Set Variable  10
Set Duration  ${duration}
Set Search Rate Boundaries  ${max_rate}  ${min_rate}
Set Search Linear Step  ${step_rate}
Set Search Frame Size  ${framesize}
Set Search Rate Type pps
Set Binary Convergence Threshold  ${threshold}
Combined Search  ${start_rate}  ${topology_type}
${rate_per_stream}  ${lat}=  Verify Search Result
${tmp}=  Create List  100%NDR  ${lat}
${latency}=  Create List  ${tmp}
${rate_50p}=  Evaluate  int(${rate_per_stream}*0.5)
${lat_50p}=  Measure latency pps  ${duration}  ${rate_50p}  ${framesize}  ${topology_type}
${tmp}=  Create List  50%NDR  ${lat_50p}
Append To List  ${latency}  ${tmp}
${rate_10p}=  Evaluate  int(${rate_per_stream}*0.1)
${lat_10p}=  Measure latency pps  ${duration}  ${rate_10p}  ${framesize}  ${topology_type}
${tmp}=  Create List  10%NDR  ${lat_10p}
Append To List  ${latency}  ${tmp}
Display result of NDR search  ${rate_per_stream}  ${framesize}  2  ${latency}
Traffic should pass with no loss  ${duration}  ${rate_per_stream}pps  ${framesize}  ${topology_type}  fail_on_loss=${False}
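The combined strategy can be sketched as a linear phase that brackets the pass/fail boundary, followed by a binary phase on that bracket. A simplified illustration, assuming a hypothetical `passes(rate)` trial predicate; the real keyword delegates both phases to the search library.

```python
def combined_search(start, step, threshold, passes):
    """Linear phase brackets the boundary; binary phase refines it."""
    rate = start
    if passes(rate):
        while passes(rate + step):      # walk up to the first failing rate
            rate += step
        lo, hi = rate, rate + step
    else:
        while not passes(rate - step):  # walk down to the first passing rate
            rate -= step
        lo, hi = rate - step, rate
    while hi - lo > threshold:          # binary phase on the bracket
        mid = (lo + hi) / 2.0
        if passes(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

The linear phase makes the binary phase start from a tight interval, so convergence needs far fewer trials than a binary search over the full [min_rate, max_rate] window.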

Find PDR using combined search and pps

Find throughput by using RFC2544 combined search (linear + binary) with partial drop rate (PDR), with the loss acceptance threshold and its type specified by parameters.

Arguments: - framesize - L2 Frame Size [B]. Type: integer - start_rate - Initial start rate [pps]. Type: float - step_rate - Step of linear search [pps]. Type: float - topology_type - Topology type. Type: string - min_rate - Lower limit of search [pps]. Type: float - max_rate - Upper limit of search [pps]. Type: float - threshold - Threshold to stop search [pps]. Type: integer - loss_acceptance - Accepted loss during search. Type: float - loss_acceptance_type - Percentage or frames. Type: string

Example:

| Find PDR using combined search and pps | 64 | 5000000 | 100000 | 3-node-IPv4 | 100000 | 14880952 | 5000 | 0.5 | percentage |


${duration}=  Set Variable  10
Set Duration  ${duration}
Set Search Rate Boundaries  ${max_rate}  ${min_rate}
Set Search Linear Step  ${step_rate}
Set Search Frame Size  ${framesize}
Set Search Rate Type pps
Set Loss Acceptance  ${loss_acceptance}
Run Keyword If  '${loss_acceptance_type}' == 'percentage'  Set Loss Acceptance Type Percentage
Set Binary Convergence Threshold  ${threshold}
Combined Search  ${start_rate}  ${topology_type}
${rate_per_stream}  ${lat}=  Verify Search Result
${tmp}=  Create List  100%PDR  ${lat}
${latency}=  Create List  ${tmp}
Display result of PDR search  ${rate_per_stream}  ${framesize}  2  ${loss_acceptance}  ${loss_acceptance_type}  ${latency}
Traffic should pass with partial loss  ${duration}  ${rate_per_stream}pps  ${framesize}  ${topology_type}  ${loss_acceptance}  ${loss_acceptance_type}  fail_on_loss=${False}

Measure latency pps

Send traffic at the specified rate and measure min/avg/max latency.

Arguments: - duration - Duration of traffic run [s]. Type: integer - rate - Rate for sending packets. Type: integer - framesize - L2 Frame Size [B]. Type: integer - topology_type - Topology type. Type: string

Example:

| Measure latency pps | 10 | 4.0 | 64 | 3-node-IPv4 |


Return From Keyword If  ${rate} <= 10000  ${-1}
${ret}=  For DPDK Performance Test
Run Keyword If  ${ret}==${FALSE}  Clear all counters on all DUTs
Send traffic on tg  ${duration}  ${rate}pps  ${framesize}  ${topology_type}  warmup_time=0
Run Keyword If  ${ret}==${FALSE}  Show statistics on all DUTs
Run keyword and return  Get latency

Traffic should pass with no loss

Send traffic at specified rate. No packet loss is accepted at loss evaluation.

Arguments: - duration - Duration of traffic run [s]. Type: integer - rate - Rate for sending packets. Type: string - framesize - L2 Frame Size [B]. Type: integer - topology_type - Topology type. Type: string

Example:

| Traffic should pass with no loss | 10 | 4.0mpps | 64 | 3-node-IPv4 |


Clear and show runtime counters with running traffic  ${duration}  ${rate}  ${framesize}  ${topology_type}
${ret}=  For DPDK Performance Test
Run Keyword If  ${ret}==${FALSE}  Clear all counters on all DUTs
Send traffic on tg  ${duration}  ${rate}  ${framesize}  ${topology_type}  warmup_time=0
Run Keyword If  ${ret}==${FALSE}  Show statistics on all DUTs
Run Keyword If  ${fail_on_loss}  No traffic loss occurred

Traffic should pass with partial loss

Send traffic at specified rate. Partial packet loss is accepted within loss acceptance value specified as argument.

Arguments: - duration - Duration of traffic run [s]. Type: integer - rate - Rate for sending packets. Type: string - framesize - L2 Frame Size [B]. Type: integer - topology_type - Topology type. Type: string - loss_acceptance - Accepted loss during search. Type: float - loss_acceptance_type - Percentage or frames. Type: string

Example:

| Traffic should pass with partial loss | 10 | 4.0mpps | 64 | 3-node-IPv4 | 0.5 | percentage |


Clear and show runtime counters with running traffic  ${duration}  ${rate}  ${framesize}  ${topology_type}
${ret}=  For DPDK Performance Test
Run Keyword If  ${ret}==${FALSE}  Clear all counters on all DUTs
Send traffic on tg  ${duration}  ${rate}  ${framesize}  ${topology_type}  warmup_time=0
Run Keyword If  ${ret}==${FALSE}  Show statistics on all DUTs
Run Keyword If  ${fail_on_loss}  Partial traffic loss accepted  ${loss_acceptance}  ${loss_acceptance_type}
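The "partial loss" acceptance check at the end of this keyword can be sketched as follows. A hedged illustration of the loss-evaluation logic, not the library's actual implementation: `tx` and `rx` are the hypothetical frame counters read from the traffic generator.

```python
def loss_acceptable(tx, rx, loss_acceptance, loss_acceptance_type):
    """Evaluate partial loss against either an absolute frame count
    or a percentage of transmitted frames."""
    lost = tx - rx
    if loss_acceptance_type == 'percentage':
        return lost <= tx * loss_acceptance / 100.0
    return lost <= loss_acceptance   # 'frames': absolute count
```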

Clear and show runtime counters with running traffic

Start traffic at the specified rate, then clear runtime counters on all DUTs. Wait for the specified amount of time, capture runtime counters on all DUTs, and finally stop the traffic.

Arguments: - duration - Duration of traffic run [s]. Type: integer - rate - Rate for sending packets. Type: string - framesize - L2 Frame Size [B]. Type: integer - topology_type - Topology type. Type: string

Example:

| Clear and show runtime counters with running traffic | 10 | 4.0mpps | 64 | 3-node-IPv4 |


Send traffic on tg  -1  ${rate}  ${framesize}  ${topology_type}  warmup_time=0  async_call=${True}  latency=${False}
${ret}=  For DPDK Performance Test
Run Keyword If  ${ret}==${FALSE}  Clear runtime counters on all DUTs
Sleep  ${duration}
Run Keyword If  ${ret}==${FALSE}  Show runtime counters on all DUTs
Stop traffic on tg

Guest VM with dpdk-testpmd connected via vhost-user is setup

Start a QEMU guest with two vhost-user interfaces interconnected by DPDK testpmd. The QEMU guest uses 5 cores pinned to physical cores 5-9 and 2048 MB of memory. Testpmd uses 5 cores (1 main core and 4 cores dedicated to io), mem-channel=4, txq/rxq=256, burst=64, disable-hw-vlan, disable-rss, driver usr/lib/librte_pmd_virtio.so, and fwd mode io.

Arguments: - dut_node - DUT node to start guest VM on. Type: dictionary - sock1 - Socket path for first Vhost-User interface. Type: string - sock2 - Socket path for second Vhost-User interface. Type: string - vm_name - QemuUtil instance name. Type: string - skip - number of cpus which will be skipped. Type: int - count - number of cpus which will be allocated for qemu. Type: int

Example:

| Guest VM with dpdk-testpmd connected via vhost-user is setup | ${nodes[‘DUT1’]} | /tmp/sock1 | /tmp/sock2 | DUT1_VM | ${5} | ${5} |


Import Library  resources.libraries.python.QemuUtils  WITH NAME  ${vm_name}
${dut_numa}=  Get interfaces numa node  ${dut_node}  ${dut1_if1}  ${dut1_if2}
${cpus}=  Cpu list per node  ${dut_node}  ${dut_numa}
${end_idx}=  Evaluate  ${skip} + ${count}
${qemu_cpus}=  Get Slice From List  ${cpus}  ${skip}  ${end_idx}
Run keyword  ${vm_name}.Qemu Add Vhost User If  ${sock1}
Run keyword  ${vm_name}.Qemu Add Vhost User If  ${sock2}
Run keyword  ${vm_name}.Qemu Set Node  ${dut_node}
Run keyword  ${vm_name}.Qemu Set Smp  ${count}  ${count}  1  1
Run keyword  ${vm_name}.Qemu Set Mem Size  2048
Run keyword  ${vm_name}.Qemu Set Disk Image  ${glob_vm_image}
${vm}=  Run keyword  ${vm_name}.Qemu Start
Run keyword  ${vm_name}.Qemu Set Affinity  @{qemu_cpus}
Run keyword  ${vm_name}.Qemu Set Scheduler Policy
Dpdk Testpmd Start  ${vm}  eal_coremask=0x1f  eal_mem_channels=4  pmd_fwd_mode=io  pmd_disable_hw_vlan=${True}
Return From Keyword  ${vm}
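The CPU bookkeeping in this keyword (the Evaluate and Get Slice From List steps, and the `eal_coremask=0x1f` handed to testpmd) can be sketched in Python. A minimal illustration; function names are mine, not CSIT's.

```python
def qemu_cpu_slice(cpus, skip, count):
    """Pick `count` host CPUs after skipping the first `skip` entries
    of the NUMA-local CPU list, as Get Slice From List does above."""
    return cpus[skip:skip + count]

def coremask(n_cores):
    """Bitmask with the lowest n_cores bits set; 5 cores -> 0x1f,
    matching the eal_coremask passed to testpmd for its 5 vCPUs."""
    return (1 << n_cores) - 1
```

With skip=5 and count=5 on a 16-CPU node this selects host CPUs 5-9, which is why the docstring above speaks of pinning to physical cores 5-9.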

Guest VM with dpdk-testpmd-mac connected via vhost-user is setup

Start a QEMU guest with two vhost-user interfaces interconnected by DPDK testpmd. The QEMU guest uses 5 cores pinned to physical cores 5-9 and 2048 MB of memory. Testpmd uses 5 cores (1 main core and 4 cores dedicated to io), mem-channel=4, txq/rxq=256, burst=64, disable-hw-vlan, disable-rss, driver usr/lib/librte_pmd_virtio.so, and fwd mode mac (MAC rewrite).

Arguments: - dut_node - DUT node to start guest VM on. Type: dictionary - sock1 - Socket path for first Vhost-User interface. Type: string - sock2 - Socket path for second Vhost-User interface. Type: string - vm_name - QemuUtil instance name. Type: string - eth0_mac - MAC address of first Vhost interface. Type: string - eth1_mac - MAC address of second Vhost interface. Type: string - skip - number of cpus which will be skipped. Type: int - count - number of cpus which will be allocated for qemu. Type: int

Example:

| Guest VM with dpdk-testpmd for Vhost L2BD forwarding is setup | ${nodes[‘DUT1’]} | /tmp/sock1 | /tmp/sock2 | DUT1_VM | 00:00:00:00:00:01 | 00:00:00:00:00:02 | ${5} | ${5} |


Import Library  resources.libraries.python.QemuUtils  WITH NAME  ${vm_name}
${dut_numa}=  Get interfaces numa node  ${dut_node}  ${dut1_if1}  ${dut1_if2}
${cpus}=  Cpu list per node  ${dut_node}  ${dut_numa}
${end_idx}=  Evaluate  ${skip} + ${count}
${qemu_cpus}=  Get Slice From List  ${cpus}  ${skip}  ${end_idx}
Run keyword  ${vm_name}.Qemu Add Vhost User If  ${sock1}
Run keyword  ${vm_name}.Qemu Add Vhost User If  ${sock2}
Run keyword  ${vm_name}.Qemu Set Node  ${dut_node}
Run keyword  ${vm_name}.Qemu Set Smp  ${count}  ${count}  1  1
Run keyword  ${vm_name}.Qemu Set Mem Size  2048
Run keyword  ${vm_name}.Qemu Set Disk Image  ${glob_vm_image}
${vm}=  Run keyword  ${vm_name}.Qemu Start
Run keyword  ${vm_name}.Qemu Set Affinity  @{qemu_cpus}
Run keyword  ${vm_name}.Qemu Set Scheduler Policy
Dpdk Testpmd Start  ${vm}  eal_coremask=0x1f  eal_mem_channels=4  pmd_fwd_mode=mac  pmd_eth_peer_0=0,${eth0_mac}  pmd_eth_peer_1=1,${eth1_mac}  pmd_disable_hw_vlan=${True}
Return From Keyword  ${vm}

Guest VM with Linux Bridge connected via vhost-user is setup

Start a QEMU guest with two vhost-user interfaces interconnected by a Linux bridge. The QEMU guest uses 3 cores pinned to physical cores 5, 6, 7 and 2048 MB of memory.

Arguments: - dut_node - DUT node to start guest VM on. Type: dictionary - sock1 - Socket path for first Vhost-User interface. Type: string - sock2 - Socket path for second Vhost-User interface. Type: string - vm_name - QemuUtil instance name. Type: string - skip - number of cpus which will be skipped. Type: int - count - number of cpus which will be allocated for qemu. Type: int

Example:

| Guest VM with Linux Bridge connected via vhost-user is setup | ${nodes[‘DUT1’]} | /tmp/sock1 | /tmp/sock2 | DUT1_VM | ${5} | ${5} |


Import Library  resources.libraries.python.QemuUtils  WITH NAME  ${vm_name}
${dut_numa}=  Get interfaces numa node  ${dut_node}  ${dut1_if1}  ${dut1_if2}
${cpus}=  Cpu list per node  ${dut_node}  ${dut_numa}
${end_idx}=  Evaluate  ${skip} + ${count}
${qemu_cpus}=  Get Slice From List  ${cpus}  ${skip}  ${end_idx}
Run keyword  ${vm_name}.Qemu Add Vhost User If  ${sock1}
Run keyword  ${vm_name}.Qemu Add Vhost User If  ${sock2}
Run keyword  ${vm_name}.Qemu Set Node  ${dut_node}
Run keyword  ${vm_name}.Qemu Set Smp  ${count}  ${count}  1  1
Run keyword  ${vm_name}.Qemu Set Mem Size  2048
Run keyword  ${vm_name}.Qemu Set Disk Image  ${glob_vm_image}
${vm}=  Run keyword  ${vm_name}.Qemu Start
Run keyword  ${vm_name}.Qemu Set Affinity  @{qemu_cpus}
Run keyword  ${vm_name}.Qemu Set Scheduler Policy
${br}=  Set Variable  br0
${vhost1}=  Get Vhost User If Name By Sock  ${vm}  ${sock1}
${vhost2}=  Get Vhost User If Name By Sock  ${vm}  ${sock2}
Linux Add Bridge  ${vm}  ${br}  ${vhost1}  ${vhost2}
Set Interface State  ${vm}  ${vhost1}  up  if_type=name
Set Interface State  ${vm}  ${vhost2}  up  if_type=name
Set Interface State  ${vm}  ${br}  up  if_type=name
Return From Keyword  ${vm}
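Inside the guest, the Linux Add Bridge and Set Interface State steps above amount to roughly the following shell configuration. A sketch only: the interface names eth0/eth1 are assumptions, since the real names are resolved per socket by Get Vhost User If Name By Sock.

```shell
# Create a bridge and enslave both vhost-backed guest NICs
# (eth0/eth1 are assumed names for illustration).
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
# Bring the member ports and the bridge itself up.
ip link set eth0 up
ip link set eth1 up
ip link set br0 up
```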

Guest VM with dpdk-testpmd Teardown

Stop all QEMU processes with dpdk-testpmd running on ${dut_node}. The argument is a dictionary of all running QEMU instances keyed by their names. dpdk-testpmd is stopped gracefully, printing its stats.

Arguments: - dut_node - Node where to clean qemu. Type: dictionary - dut_vm_refs - VM references on node. Type: dictionary

Example:

| Guest VM with dpdk-testpmd Teardown | ${node[‘DUT1’]} | ${dut_vm_refs} |


: FOR  ${vm_name}  IN  @{dut_vm_refs}
\    ${vm}=  Get From Dictionary  ${dut_vm_refs}  ${vm_name}
\    Dpdk Testpmd Stop  ${vm}
\    Run Keyword  ${vm_name}.Qemu Set Node  ${dut_node}
\    Run Keyword  ${vm_name}.Qemu Kill
\    Run Keyword  ${vm_name}.Qemu Clear Socks

Guest VM Teardown

Stop all QEMU processes running on ${dut_node}. The argument is a dictionary of all running QEMU instances keyed by their names.

Arguments: - dut_node - Node where to clean qemu. Type: dictionary - dut_vm_refs - VM references on node. Type: dictionary

Example:

| Guest VM Teardown | ${node[‘DUT1’]} | ${dut_vm_refs} |


: FOR  ${vm_name}  IN  @{dut_vm_refs}
\    ${vm}=  Get From Dictionary  ${dut_vm_refs}  ${vm_name}
\    Run Keyword  ${vm_name}.Qemu Set Node  ${dut_node}
\    Run Keyword  ${vm_name}.Qemu Kill
\    Run Keyword  ${vm_name}.Qemu Clear Socks

Lisp IPv4 forwarding initialized in a 3-node circular topology

Custom setup of IPv4 addresses on all DUT nodes and TG. Doesn't set routes.

Arguments: - ${dut1_dut2_address} - IP address from DUT1 to DUT2. Type: string - ${dut1_tg_address} - IP address from DUT1 to TG. Type: string - ${dut2_dut1_address} - IP address from DUT2 to DUT1. Type: string - ${dut2_tg_address} - IP address from DUT2 to TG. Type: string - ${duts_prefix} - IP prefix. Type: int

Return: - No value returned

Example:

| Lisp IPv4 forwarding initialized in a 3-node circular topology | ${dut1_dut2_address} | ${dut1_tg_address} | ${dut2_dut1_address} | ${dut2_tg_address} | ${duts_prefix} |


Set Interface State  ${dut1}  ${dut1_if1}  up
Set Interface State  ${dut1}  ${dut1_if2}  up
Set Interface State  ${dut2}  ${dut2_if1}  up
Set Interface State  ${dut2}  ${dut2_if2}  up
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
dut1_v4.set_arp  ${dut1_if1}  10.10.10.2  ${tg1_if1_mac}
dut1_v4.set_arp  ${dut1_if2}  ${dut2_dut1_address}  ${dut2_if1_mac}
dut2_v4.set_arp  ${dut2_if1}  ${dut1_dut2_address}  ${dut1_if2_mac}
dut2_v4.set_arp  ${dut2_if2}  20.20.20.2  ${tg1_if2_mac}
dut1_v4.set_ip  ${dut1_if1}  ${dut1_tg_address}  ${duts_prefix}
dut1_v4.set_ip  ${dut1_if2}  ${dut1_dut2_address}  ${duts_prefix}
dut2_v4.set_ip  ${dut2_if1}  ${dut2_dut1_address}  ${duts_prefix}
dut2_v4.set_ip  ${dut2_if2}  ${dut2_tg_address}  ${duts_prefix}
All Vpp Interfaces Ready Wait  ${nodes}

Lisp IPv6 forwarding initialized in a 3-node circular topology

Custom setup of IPv6 topology on all DUT nodes. Doesn't set routes.

Arguments: - ${dut1_dut2_address} - IP address from DUT1 to DUT2. Type: string - ${dut1_tg_address} - IP address from DUT1 to TG. Type: string - ${dut2_dut1_address} - IP address from DUT2 to DUT1. Type: string - ${dut2_tg_address} - IP address from DUT2 to TG. Type: string - ${duts_prefix} - IP prefix. Type: int

Return: - No value returned

Example:

| Lisp IPv6 forwarding initialized in a 3-node circular topology | ${dut1_dut2_address} | ${dut1_tg_address} | ${dut2_dut1_address} | ${dut2_tg_address} | ${duts_prefix} |


${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if1}  ${dut1_tg_address}  ${prefix}
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if2}  ${dut1_dut2_address}  ${prefix}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if1}  ${dut2_dut1_address}  ${prefix}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if2}  ${dut2_tg_address}  ${prefix}
Vpp nodes ra suppress link layer  ${nodes}
Add Ip Neighbor  ${dut1}  ${dut1_if1}  2001:1::2  ${tg1_if1_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if2}  2001:2::2  ${tg1_if2_mac}
Add Ip Neighbor  ${dut1}  ${dut1_if2}  ${dut2_dut1_address}  ${dut2_if1_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if1}  ${dut1_dut2_address}  ${dut1_if2_mac}

Lisp IPv4 over IPv6 forwarding initialized in a 3-node circular topology

Custom setup of IPv4-over-IPv6 topology on all DUT nodes. Doesn't set routes.

Arguments: - ${dut1_dut2_ip6_address} - IPv6 address from DUT1 to DUT2. Type: string - ${dut1_tg_ip4_address} - IPv4 address from DUT1 to TG. Type: string - ${dut2_dut1_ip6_address} - IPv6 address from DUT2 to DUT1. Type: string - ${dut2_tg_ip4_address} - IPv4 address from DUT2 to TG. Type: string - ${prefix4} - IPv4 prefix. Type: int - ${prefix6} - IPv6 prefix. Type: int

Return: - No value returned

Example:

| Lisp IPv4 over IPv6 forwarding initialized in a 3-node circular topology | ${dut1_dut2_ip6_address} | ${dut1_tg_ip4_address} | ${dut2_dut1_ip6_address} | ${dut2_tg_ip4_address} | ${prefix4} | ${prefix6} |


Set Interface State  ${dut1}  ${dut1_if1}  up
Set Interface State  ${dut1}  ${dut1_if2}  up
Set Interface State  ${dut2}  ${dut2_if1}  up
Set Interface State  ${dut2}  ${dut2_if2}  up
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
dut1_v4.set_ip  ${dut1_if1}  ${dut1_tg_ip4_address}  ${prefix4}
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if2}  ${dut1_dut2_ip6_address}  ${prefix6}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if1}  ${dut2_dut1_ip6_address}  ${prefix6}
dut2_v4.set_ip  ${dut2_if2}  ${dut2_tg_ip4_address}  ${prefix4}
Vpp nodes ra suppress link layer  ${nodes}
dut1_v4.set_arp  ${dut1_if1}  10.10.10.2  ${tg1_if1_mac}
dut2_v4.set_arp  ${dut2_if2}  20.20.20.2  ${tg1_if2_mac}
Add Ip Neighbor  ${dut1}  ${dut1_if2}  ${dut2_dut1_ip6_address}  ${dut2_if1_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if1}  ${dut1_dut2_ip6_address}  ${dut1_if2_mac}

Lisp IPv6 over IPv4 forwarding initialized in a 3-node circular topology

Custom setup of IPv6-over-IPv4 topology on all DUT nodes. Doesn't set routes.

Arguments: - ${dut1_dut2_ip4_address} - IPv4 address from DUT1 to DUT2. Type: string - ${dut1_tg_ip6_address} - IPv6 address from DUT1 to TG. Type: string - ${dut2_dut1_ip4_address} - IPv4 address from DUT2 to DUT1. Type: string - ${dut2_tg_ip6_address} - IPv6 address from DUT2 to TG. Type: string - ${prefix4} - IPv4 prefix. Type: int - ${prefix6} - IPv6 prefix. Type: int

Return: - No value returned

Example: | Lisp IPv6 over IPv4 forwarding initialized in a 3-node circular topology | ${dut1_dut2_ip4_address} | ${dut1_tg_ip6_address} | ${dut2_dut1_ip4_address} | ${dut2_tg_ip6_address} | ${prefix4} | ${prefix6} |


Set Interface State  ${dut1}  ${dut1_if1}  up
Set Interface State  ${dut1}  ${dut1_if2}  up
Set Interface State  ${dut2}  ${dut2_if1}  up
Set Interface State  ${dut2}  ${dut2_if2}  up
${tg1_if1_mac}=  Get Interface MAC  ${tg}  ${tg_if1}
${tg1_if2_mac}=  Get Interface MAC  ${tg}  ${tg_if2}
${dut1_if2_mac}=  Get Interface MAC  ${dut1}  ${dut1_if2}
${dut2_if1_mac}=  Get Interface MAC  ${dut2}  ${dut2_if1}
VPP Set If IPv6 Addr  ${dut1}  ${dut1_if1}  ${dut1_tg_ip6_address}  ${prefix6}
dut1_v4.set_ip  ${dut1_if2}  ${dut1_dut2_ip4_address}  ${prefix4}
dut2_v4.set_ip  ${dut2_if1}  ${dut2_dut1_ip4_address}  ${prefix4}
VPP Set If IPv6 Addr  ${dut2}  ${dut2_if2}  ${dut2_tg_ip6_address}  ${prefix6}
Vpp nodes ra suppress link layer  ${nodes}
Add Ip Neighbor  ${dut1}  ${dut1_if1}  2001:1::2  ${tg1_if1_mac}
Add Ip Neighbor  ${dut2}  ${dut2_if2}  2001:2::2  ${tg1_if2_mac}
dut1_v4.set_arp  ${dut1_if2}  ${dut2_dut1_ip4_address}  ${dut2_if1_mac}
dut2_v4.set_arp  ${dut2_if1}  ${dut1_dut2_ip4_address}  ${dut1_if2_mac}

DPDK 2-node Performance Suite Setup with DUT’s NIC model

Updates interfaces on all nodes and sets up global variables used in test cases, based on the interface model provided as an argument. Initializes traffic generator. Initializes DPDK test environment.

Arguments: - topology_type - Topology type. Type: string - nic_model - Interface model. Type: string

Example:

| DPDK 2-node Performance Suite Setup with DUT’s NIC model | L2 | Intel-X520-DA2 |


Setup performance global Variables
2-node circular Topology Variables Setup with DUT interface model  ${nic_model}
Initialize traffic generator  ${tg}  ${tg_if1}  ${tg_if2}  ${dut1}  ${dut1_if1}  ${dut1}  ${dut1_if2}  ${topology_type}
Initialize DPDK Environment  ${dut1}  ${dut1_if1}  ${dut1_if2}

DPDK 3-node Performance Suite Setup with DUT’s NIC model

Updates interfaces on all nodes and sets up global variables used in test cases, based on the interface model provided as an argument. Initializes traffic generator. Initializes DPDK test environment.

Arguments: - topology_type - Topology type. Type: string - nic_model - Interface model. Type: string

Example:

| DPDK 3-node Performance Suite Setup with DUT’s NIC model | L2 | Intel-X520-DA2 |


Setup performance global Variables
3-node circular Topology Variables Setup with DUT interface model  ${nic_model}
Initialize traffic generator  ${tg}  ${tg_if1}  ${tg_if2}  ${dut1}  ${dut1_if1}  ${dut2}  ${dut2_if2}  ${topology_type}
Initialize DPDK Environment  ${dut1}  ${dut1_if1}  ${dut1_if2}
Initialize DPDK Environment  ${dut2}  ${dut2_if1}  ${dut2_if2}

DPDK 3-node Performance Suite Teardown

Suite teardown phase with traffic generator teardown. Cleanup DPDK test environment.


Teardown traffic generator  ${tg}
Cleanup DPDK Environment  ${dut1}  ${dut1_if1}  ${dut1_if2}
Cleanup DPDK Environment  ${dut2}  ${dut2_if1}  ${dut2_if2}

DPDK 2-node Performance Suite Teardown

Suite teardown phase with traffic generator teardown. Cleanup DPDK test environment.


Teardown traffic generator  ${tg}
Cleanup DPDK Environment  ${dut1}  ${dut1_if1}  ${dut1_if2}

For DPDK Performance Test

Return TRUE if the variable DPDK_TEST exists, otherwise FALSE.


${ret}  ${tmp}=  Run Keyword And Ignore Error  Variable Should Exist  ${DPDK_TEST}
Return From Keyword If  "${ret}" == "PASS"  ${TRUE}
Return From Keyword  ${FALSE}

policer module

Setup Topology for IPv4 policer testing

Setup topology for IPv4 policer testing.

_NOTE:_ This KW sets following test case variables: - dut_to_tg_if1_ip - DUT first interface IP address. Type: string - dut_to_tg_if2_ip - DUT second interface IP address. Type: string - tg_to_dut_if1_ip - TG first interface IP address. Type: string - tg_to_dut_if2_ip - TG second interface IP address. Type: string


Path for 2-node testing is set  ${nodes['TG']}  ${nodes['DUT1']}  ${nodes['TG']}
Interfaces in 2-node path are up
Set Interface Address  ${dut_node}  ${dut_to_tg_if1}  ${dut_to_tg_if1_ip4}  ${ip4_plen}
Set Interface Address  ${dut_node}  ${dut_to_tg_if2}  ${dut_to_tg_if2_ip4}  ${ip4_plen}
dut1_v4.Set ARP  ${dut_to_tg_if2}  ${tg_to_dut_if2_ip4}  ${tg_to_dut_if2_mac}
Set Test Variable  ${dut_to_tg_if1_ip}  ${dut_to_tg_if1_ip4}
Set Test Variable  ${dut_to_tg_if2_ip}  ${dut_to_tg_if2_ip4}
Set Test Variable  ${tg_to_dut_if1_ip}  ${tg_to_dut_if1_ip4}
Set Test Variable  ${tg_to_dut_if2_ip}  ${tg_to_dut_if2_ip4}

Setup Topology for IPv6 policer testing

Setup topology for IPv6 policer testing.

_NOTE:_ This KW sets following test case variables: - dut_to_tg_if1_ip - DUT first interface IP address. Type: string - dut_to_tg_if2_ip - DUT second interface IP address. Type: string - tg_to_dut_if1_ip - TG first interface IP address. Type: string - tg_to_dut_if2_ip - TG second interface IP address. Type: string


Path for 2-node testing is set  ${nodes['TG']}  ${nodes['DUT1']}  ${nodes['TG']}
Interfaces in 2-node path are up
Vpp Set If IPv6 Addr  ${dut_node}  ${dut_to_tg_if1}  ${dut_to_tg_if1_ip6}  ${ip6_plen}
Vpp Set If IPv6 Addr  ${dut_node}  ${dut_to_tg_if2}  ${dut_to_tg_if2_ip6}  ${ip6_plen}
Add IP Neighbor  ${dut_node}  ${dut_to_tg_if2}  ${tg_to_dut_if2_ip6}  ${tg_to_dut_if2_mac}
Vpp All RA Suppress Link Layer  ${nodes}
Set Test Variable  ${dut_to_tg_if1_ip}  ${dut_to_tg_if1_ip6}
Set Test Variable  ${dut_to_tg_if2_ip}  ${dut_to_tg_if2_ip6}
Set Test Variable  ${tg_to_dut_if1_ip}  ${tg_to_dut_if1_ip6}
Set Test Variable  ${tg_to_dut_if2_ip}  ${tg_to_dut_if2_ip6}

Send Packet and Verify Marking

Send packet and verify DSCP of the received packet.

Arguments: - node - TG node. Type: dictionary - tx_if - TG transmit interface. Type: string - rx_if - TG receive interface. Type: string - src_mac - Packet source MAC. Type: string - dst_mac - Packet destination MAC. Type: string - src_ip - Packet source IP address. Type: string - dst_ip - Packet destination IP address. Type: string - dscp - DSCP value to verify. Type: enum

Example: | ${dscp}= | DSCP AF22 | | Send Packet and Verify Marking | ${nodes['TG']} | eth1 | eth2 | 08:00:27:87:4d:f7 | 52:54:00:d4:d8:22 | 192.168.122.2 | 192.168.122.1 | ${dscp} |


${tx_if_name}=  Get Interface Name  ${node}  ${tx_if}
${rx_if_name}=  Get Interface Name  ${node}  ${rx_if}
${args}=  Traffic Script Gen Arg  ${rx_if_name}  ${tx_if_name}  ${src_mac}  ${dst_mac}  ${src_ip}  ${dst_ip}
${dscp_num}=  Get DSCP Num Value  ${dscp}
${args}=  Set Variable  ${args} --dscp ${dscp_num}
Run Traffic Script On Node  policer.py  ${node}  ${args}
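
The marking check above relies on translating a DSCP code-point name such as AF22 into its 6-bit numeric value. For Assured Forwarding points the value follows directly from the name: AFxy encodes class x and drop precedence y as 8x + 2y, so AF22 maps to 20. A minimal Python sketch of such a lookup (the helper name is illustrative, not the actual `Get DSCP Num Value` implementation):

```python
# Illustrative lookup (the real "Get DSCP Num Value" keyword may differ).
# Assured Forwarding code points AFxy encode class x (1-4) and drop
# precedence y (1-3) as the 6-bit DSCP value 8*x + 2*y.
DSCP_VALUES = {
    "CS0": 0, "CS1": 8, "CS2": 16, "CS3": 24,
    "CS4": 32, "CS5": 40, "CS6": 48, "CS7": 56,
    "EF": 46,  # Expedited Forwarding
}
for af_class in range(1, 5):
    for drop in range(1, 4):
        DSCP_VALUES["AF%d%d" % (af_class, drop)] = 8 * af_class + 2 * drop

def dscp_num_value(name):
    """Return the numeric 6-bit DSCP value for a code-point name."""
    return DSCP_VALUES[name.upper()]

print(dscp_num_value("AF22"))  # -> 20, the AF22 marking used in the example
```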

qemu module

Exist QEMU Build List

Return TRUE if the variable QEMU_BUILD exists, otherwise FALSE.


${ret}  ${tmp}=  Run Keyword And Ignore Error  Variable Should Exist  @{QEMU_BUILD}
Return From Keyword If  "${ret}" == "PASS"  ${TRUE}
Return From Keyword  ${FALSE}

Is QEMU Ready on Node

Check if QEMU has already been built on the node.


${ret}=  Exist QEMU Build List
Return From Keyword If  ${ret} == ${FALSE}  ${FALSE}
${ret}  ${tmp}=  Run Keyword And Ignore Error  Should Contain  ${QEMU_BUILD}  ${node['host']}
Return From Keyword If  "${ret}" == "PASS"  ${TRUE}
Return From Keyword  ${FALSE}

Add Node to QEMU Build List

Add node to the list of nodes with built QEMU (global variable QEMU_BUILD).


${ret}=  Exist QEMU Build List
Run Keyword If  ${ret} == ${TRUE}  Append To List  ${QEMU_BUILD}  ${node['host']}  ELSE  Set Global Variable  @{QEMU_BUILD}  ${node['host']}

Build QEMU on Node

Build QEMU from sources on the Node. Nodes with successful QEMU build are stored in global variable list QEMU_BUILD


${ready}=  Is QEMU Ready on Node  ${node}
Return From Keyword If  ${ready} == ${TRUE}
Build QEMU  ${node}
Add Node to QEMU Build List  ${node}

Build QEMU on all DUTs

Build QEMU from sources on all DUTs. Nodes with successful QEMU build are stored in global variable list QEMU_BUILD


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Build QEMU on Node  ${nodes['${dut}']}

Stop and Clear QEMU

Stop QEMU on ${dut}, clear used sockets, and close the SSH connection. ${vm} is the VM node info dictionary returned by qemu_start, or None.


Qemu Set Node  ${dut}
Qemu Kill
Qemu Clear Socks
Run Keyword If  ${vm} is not None  Disconnect  ${vm}

Kill Qemu on all DUTs

Kill QEMU processes on all DUTs.


${duts}=  Get Matches  ${nodes}  DUT*
: FOR  ${dut}  IN  @{duts}
\    Qemu Set Node  ${nodes['${dut}']}
\    Qemu Kill

tagging module

VLAN subinterfaces initialized on 3-node topology

Create two subinterfaces on DUTs.

Arguments: - DUT1 - Node to add sub-interface. - INT1 - Interface on which to create the sub-interface. - DUT2 - Node to add sub-interface. - INT2 - Interface on which to create the sub-interface. - SUB_ID - ID of the sub-interface to be created. - OUTER_VLAN_ID - Outer VLAN ID. - INNER_VLAN_ID - Inner VLAN ID. - TYPE_SUBIF - Type of sub-interface.

_Set testcase variables with name and index of created interfaces:_ - subif_name_1 - subif_index_1 - subif_name_2 - subif_index_2


${INT1_name}=  Get interface name  ${DUT1}  ${INT1}
${subif_name_1}  ${subif_index_1}=  Create subinterface  ${DUT1}  ${INT1_name}  ${SUB_ID}  ${OUTER_VLAN_ID}  ${INNER_VLAN_ID}  ${TYPE_SUBIF}
${INT2_name}=  Get interface name  ${DUT2}  ${INT2}
${subif_name_2}  ${subif_index_2}=  Create subinterface  ${DUT2}  ${INT2_name}  ${SUB_ID}  ${OUTER_VLAN_ID}  ${INNER_VLAN_ID}  ${TYPE_SUBIF}
Set Interface State  ${DUT1}  ${subif_index_1}  up
Set Interface State  ${DUT2}  ${subif_index_2}  up
Set Test Variable  ${subif_name_1}
Set Test Variable  ${subif_index_1}
Set Test Variable  ${subif_name_2}
Set Test Variable  ${subif_index_2}

VLAN dot1q subinterfaces initialized on 3-node topology

Create two dot1q subinterfaces on DUTs.

Arguments: - DUT1 - Node to add sub-interface. - INT1 - Interface on which to create the VLAN sub-interface. - DUT2 - Node to add sub-interface. - INT2 - Interface on which to create the VLAN sub-interface. - SUB_ID - ID of the sub-interface to be created.

_Set testcase variables with name and index of created interfaces:_ - subif_name_1 - subif_index_1 - subif_name_2 - subif_index_2

Example:

| VLAN dot1q subinterfaces initialized on 3-node topology | ${nodes['DUT1']} | ${dut1_if2} | ${nodes['DUT2']} | ${dut2_if1} | 10 |


${INT1_NAME}=  Get interface name  ${DUT1}  ${INT1}
${INT2_NAME}=  Get interface name  ${DUT2}  ${INT2}
${subif_name_1}  ${subif_index_1}=  Create Vlan Subinterface  ${DUT1}  ${INT1_NAME}  ${SUB_ID}
${subif_name_2}  ${subif_index_2}=  Create Vlan Subinterface  ${DUT2}  ${INT2_NAME}  ${SUB_ID}
Set Interface State  ${DUT1}  ${subif_index_1}  up
Set Interface State  ${DUT2}  ${subif_index_2}  up
Set Test Variable  ${subif_name_1}
Set Test Variable  ${subif_index_1}
Set Test Variable  ${subif_name_2}
Set Test Variable  ${subif_index_2}

L2 tag rewrite method setup on interfaces

Setup tag rewrite on sub-interfaces on DUTs.

Arguments: - DUT1 - Node to rewrite tags. - SUB_INT1 - Interface on which rewrite tags. - DUT2 - Node to rewrite tags. - SUB_INT2 - Interface on which rewrite tags. - TAG_REWRITE_METHOD - Method of tag rewrite.


L2 Vlan tag rewrite  ${DUT1}  ${SUB_INT1}  ${TAG_REWRITE_METHOD}
L2 Vlan tag rewrite  ${DUT2}  ${SUB_INT2}  ${TAG_REWRITE_METHOD}

Interfaces and VLAN sub-interfaces inter-connected using L2-xconnect

Add interface and subinterface to bidirectional L2-xconnect on DUTs.

Arguments: - DUT1 - Node to add bidirectional cross-connect. - INT1 - Interface to add to the cross-connect. - SUB_INT1 - Sub-interface to add to the cross-connect. - DUT2 - Node to add bidirectional cross-connect. - INT2 - Interface to add to the cross-connect. - SUB_INT2 - Sub-interface to add to the cross-connect.


L2 setup xconnect on DUT  ${DUT1}  ${INT1}  ${SUB_INT1}
L2 setup xconnect on DUT  ${DUT2}  ${INT2}  ${SUB_INT2}

Vlan Subinterface Created

Create VLAN sub-interface on DUT.

Arguments: - dut_node - Node to add VLAN sub-interface. Type: dictionary - interface - Interface to create VLAN sub-interface. Type: string - vlan_id - VLAN ID. Type: integer

Return: - vlan_name - VLAN sub-interface name. Type: string - vlan_index - VLAN sub-interface SW index. Type: integer

Example:

| Vlan Subinterface Created | ${nodes['DUT1']} | port3 | 100 |


${interface_name}=  Get interface name  ${dut_node}  ${interface}
${vlan_name}  ${vlan_index}=  Create Vlan Subinterface  ${dut_node}  ${interface_name}  ${vlan_id}

Tagged Subinterface Created

Create tagged sub-interface on DUT. Type of tagged sub-interface depends on type_subif value: - one_tag -> VLAN - two_tags -> QinQ VLAN - two_tags dot1ad -> DOT1AD

Arguments: - dut_node - Node to add VLAN sub-interface. Type: dictionary - interface - Interface to create tagged sub-interface. Type: string - subif_id - Sub-interface ID. Type: integer - outer_vlan_id - VLAN (outer) ID (Optional). Type: integer - inner_vlan_id - VLAN inner ID (Optional). Type: integer - type_subif - Sub-interface type (Optional). Type: string

Return: - subif_name - Sub-interface name. Type: string - subif_index - Sub-interface SW index. Type: integer

Example:

| Tagged Subinterface Created | ${nodes['DUT1']} | port1 | 10 | outer_vlan_id=100 | inner_vlan_id=200 | type_subif=two_tags dot1ad |


${interface_name}=  Get interface name  ${dut_node}  ${interface}
${subif_name}  ${subif_index}=  Create Subinterface  ${dut_node}  ${interface_name}  ${subif_id}  outer_vlan_id=${outer_vlan_id}  inner_vlan_id=${inner_vlan_id}  type_subif=${type_subif}
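
The three type_subif variants differ in the tag stack the sub-interface matches on the wire: one_tag is a single 802.1Q tag, two_tags is QinQ with two 0x8100 tags, and two_tags dot1ad uses the 802.1ad TPID 0x88A8 for the outer tag. A small illustrative sketch of those tag stacks (not CSIT or VPP code; PCP/DEI bits are assumed zero, so the TCI equals the VLAN ID):

```python
import struct

# Illustrative sketch: how the three sub-interface types map to VLAN tag
# stacks inserted after the source MAC in the Ethernet header.
TPID_DOT1Q = 0x8100   # 802.1Q
TPID_DOT1AD = 0x88A8  # 802.1ad (provider bridging)

def vlan_tags(type_subif, outer_vlan_id, inner_vlan_id=None):
    """Return the raw tag bytes for the given sub-interface type."""
    if type_subif == "one_tag":
        return struct.pack("!HH", TPID_DOT1Q, outer_vlan_id)
    if type_subif == "two_tags":          # QinQ: both tags use 0x8100
        return (struct.pack("!HH", TPID_DOT1Q, outer_vlan_id)
                + struct.pack("!HH", TPID_DOT1Q, inner_vlan_id))
    if type_subif == "two_tags dot1ad":   # DOT1AD: outer tag uses 0x88A8
        return (struct.pack("!HH", TPID_DOT1AD, outer_vlan_id)
                + struct.pack("!HH", TPID_DOT1Q, inner_vlan_id))
    raise ValueError(type_subif)

print(vlan_tags("two_tags dot1ad", 100, 200).hex())  # 88a80064810000c8
```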

L2 Tag Rewrite Method Is Set On Interface

Set L2 tag rewrite on (sub-)interface on DUT

Arguments: - dut_node - Node to set L2 tag rewrite method. Type: dictionary - interface - (Sub-)interface name or SW index to set L2 tag rewrite method. Type: string or integer - tag_rewrite_method - Tag rewrite method. Type: string - push_dot1q - True to push tags as Dot1q, False to push tags as Dot1ad (Optional). Type: boolean - tag1_id - VLAN tag1 ID (Optional). Type: integer - tag2_id - VLAN tag2 ID (Optional). Type: integer

Return:

  • No value returned

Example:

| L2 Tag Rewrite Method Is Set On Interface | ${nodes['DUT1']} | 9 | pop-1 | | L2 Tag Rewrite Method Is Set On Interface | ${nodes['DUT2']} | 10 | translate-1-2 | push_dot1q=${False} | tag1_id=10 | tag2_id=20 |


${result}=  Evaluate  isinstance($interface, int)
${interface_name}=  Run Keyword If  ${result}  Set Variable  ${interface}  ELSE  Get interface name  ${dut_node}  ${interface}
L2 Vlan Tag Rewrite  ${dut_node}  ${interface_name}  ${tag_rewrite_method}  push_dot1q=${push_dot1q}  tag1_id=${tag1_id}  tag2_id=${tag2_id}
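
The tag rewrite methods (pop-N, push-N, translate-N-M) can be read as stack operations on the frame's VLAN tags: translate-1-2, for instance, pops one tag and pushes tag1_id and tag2_id. A toy Python model of that behavior, using the method names from the example above (an illustration only, not VPP's implementation):

```python
# Toy model of L2 tag rewrite: the VLAN tag stack is a list of VLAN IDs,
# outermost first. Not VPP code; method names follow the keyword's docs.
def rewrite(tags, method, tag1_id=None, tag2_id=None):
    if method == "disable":
        return list(tags)
    op, rest = method.split("-", 1)
    if op == "pop":                       # pop-1 / pop-2: strip outer tags
        return tags[int(rest):]
    if op == "push":                      # push-1 / push-2: add outer tags
        new = [tag1_id] if rest == "1" else [tag1_id, tag2_id]
        return new + tags
    if op == "translate":                 # translate-x-y: pop x, push y
        pop_n, push_n = (int(n) for n in rest.split("-"))
        new = [tag1_id] if push_n == 1 else [tag1_id, tag2_id]
        return new + tags[pop_n:]
    raise ValueError(method)

# translate-1-2 on a single-tagged frame, as in the example above:
print(rewrite([5], "translate-1-2", tag1_id=10, tag2_id=20))  # [10, 20]
```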

testing_path module

Path for 2-node testing is set

Compute path for testing on two given nodes in circular topology and set corresponding test case variables.

Arguments: - ${tg_node} - TG node. Type: dictionary - ${dut_node} - DUT node. Type: dictionary - ${tg2_node} - Node where the path ends. Must be the same as TG node parameter in circular topology. Type: dictionary

Return: - No value returned

_NOTE:_ This KW sets following test case variables: - ${tg_node} - TG node. - ${tg_to_dut_if1} - 1st TG interface towards DUT. - ${tg_to_dut_if2} - 2nd TG interface towards DUT. - ${dut_node} - DUT node. - ${dut_to_tg_if1} - 1st DUT interface towards TG. - ${dut_to_tg_if2} - 2nd DUT interface towards TG. - ${tg_to_dut_if1_mac} - ${tg_to_dut_if2_mac} - ${dut_to_tg_if1_mac} - ${dut_to_tg_if2_mac}

Example:

| Given Path for 2-node testing is set | ${nodes['TG']} | ${nodes['DUT1']} | ${nodes['TG']} |


Should Be Equal  ${tg_node}  ${tg2_node}
Append Nodes  ${tg_node}  ${dut_node}  ${tg_node}
Compute Path  always_same_link=${FALSE}
${tg_to_dut_if1}  ${tmp}=  First Interface
${tg_to_dut_if2}  ${tmp}=  Last Interface
${dut_to_tg_if1}  ${tmp}=  First Ingress Interface
${dut_to_tg_if2}  ${tmp}=  Last Egress Interface
${tg_to_dut_if1_mac}=  Get interface mac  ${tg_node}  ${tg_to_dut_if1}
${tg_to_dut_if2_mac}=  Get interface mac  ${tg_node}  ${tg_to_dut_if2}
${dut_to_tg_if1_mac}=  Get interface mac  ${dut_node}  ${dut_to_tg_if1}
${dut_to_tg_if2_mac}=  Get interface mac  ${dut_node}  ${dut_to_tg_if2}
Set Test Variable  ${tg_to_dut_if1}
Set Test Variable  ${tg_to_dut_if2}
Set Test Variable  ${dut_to_tg_if1}
Set Test Variable  ${dut_to_tg_if2}
Set Test Variable  ${tg_to_dut_if1_mac}
Set Test Variable  ${tg_to_dut_if2_mac}
Set Test Variable  ${dut_to_tg_if1_mac}
Set Test Variable  ${dut_to_tg_if2_mac}
Set Test Variable  ${tg_node}
Set Test Variable  ${dut_node}

Interfaces in 2-node path are up

Set UP state on interfaces in 2-node path on nodes and wait until all interfaces are ready. Requires more than one link between nodes.

Arguments: - No arguments.

Return: - No value returned.

_NOTE:_ This KW uses test variables set in “Path for 2-node testing is set” KW.

Example:

| Given Path for 2-node testing is set | ${nodes['TG']} | ${nodes['DUT1']} | ${nodes['TG']} | | And Interfaces in 2-node path are up |


Set Interface State  ${tg_node}  ${tg_to_dut_if1}  up
Set Interface State  ${tg_node}  ${tg_to_dut_if2}  up
Set Interface State  ${dut_node}  ${dut_to_tg_if1}  up
Set Interface State  ${dut_node}  ${dut_to_tg_if2}  up
Vpp Node Interfaces Ready Wait  ${dut_node}

Path for 3-node testing is set

Compute path for testing on three given nodes in circular topology and set corresponding test case variables.

Arguments: - ${tg_node} - TG node. Type: dictionary - ${dut1_node} - DUT1 node. Type: dictionary - ${dut2_node} - DUT2 node. Type: dictionary - ${tg2_node} - Node where the path ends. Must be the same as TG node parameter in circular topology. Type: dictionary

Return: - No value returned

_NOTE:_ This KW sets following test case variables: - ${tg_node} - TG node. - ${tg_to_dut1} - TG interface towards DUT1. - ${tg_to_dut2} - TG interface towards DUT2. - ${dut1_node} - DUT1 node. - ${dut1_to_tg} - DUT1 interface towards TG. - ${dut1_to_dut2} - DUT1 interface towards DUT2. - ${dut2_node} - DUT2 node. - ${dut2_to_tg} - DUT2 interface towards TG. - ${dut2_to_dut1} - DUT2 interface towards DUT1. - ${tg_to_dut1_mac} - ${tg_to_dut2_mac} - ${dut1_to_tg_mac} - ${dut1_to_dut2_mac} - ${dut2_to_tg_mac} - ${dut2_to_dut1_mac}

Example:

| Given Path for 3-node testing is set | ${nodes['TG']} | ${nodes['DUT1']} | ${nodes['DUT2']} | ${nodes['TG']} |


Should Be Equal  ${tg_node}  ${tg2_node}
Append Nodes  ${tg_node}  ${dut1_node}  ${dut2_node}  ${tg_node}
Compute Path
${tg_to_dut1}  ${tmp}=  Next Interface
${dut1_to_tg}  ${tmp}=  Next Interface
${dut1_to_dut2}  ${tmp}=  Next Interface
${dut2_to_dut1}  ${tmp}=  Next Interface
${dut2_to_tg}  ${tmp}=  Next Interface
${tg_to_dut2}  ${tmp}=  Next Interface
${tg_to_dut1_mac}=  Get interface mac  ${tg_node}  ${tg_to_dut1}
${tg_to_dut2_mac}=  Get interface mac  ${tg_node}  ${tg_to_dut2}
${dut1_to_tg_mac}=  Get interface mac  ${dut1_node}  ${dut1_to_tg}
${dut1_to_dut2_mac}=  Get interface mac  ${dut1_node}  ${dut1_to_dut2}
${dut2_to_tg_mac}=  Get interface mac  ${dut2_node}  ${dut2_to_tg}
${dut2_to_dut1_mac}=  Get interface mac  ${dut2_node}  ${dut2_to_dut1}
Set Test Variable  ${tg_to_dut1}
Set Test Variable  ${dut1_to_tg}
Set Test Variable  ${tg_to_dut2}
Set Test Variable  ${dut2_to_tg}
Set Test Variable  ${dut1_to_dut2}
Set Test Variable  ${dut2_to_dut1}
Set Test Variable  ${tg_to_dut1_mac}
Set Test Variable  ${tg_to_dut2_mac}
Set Test Variable  ${dut1_to_tg_mac}
Set Test Variable  ${dut1_to_dut2_mac}
Set Test Variable  ${dut2_to_tg_mac}
Set Test Variable  ${dut2_to_dut1_mac}
Set Test Variable  ${tg_node}
Set Test Variable  ${dut1_node}
Set Test Variable  ${dut2_node}

Interfaces in 3-node path are up

Set UP state on interfaces in 3-node path on nodes and wait until all interfaces are ready.

Arguments: - No arguments.

Return: - No value returned.

_NOTE:_ This KW uses test variables set in “Path for 3-node testing is set” KW.

Example:

| Given Path for 3-node testing is set | ${nodes['TG']} | ${nodes['DUT1']} | ${nodes['DUT2']} | ${nodes['TG']} | | And Interfaces in 3-node path are up |


Set Interface State  ${tg_node}  ${tg_to_dut1}  up
Set Interface State  ${tg_node}  ${tg_to_dut2}  up
Set Interface State  ${dut1_node}  ${dut1_to_tg}  up
Set Interface State  ${dut1_node}  ${dut1_to_dut2}  up
Set Interface State  ${dut2_node}  ${dut2_to_tg}  up
Set Interface State  ${dut2_node}  ${dut2_to_dut1}  up
Vpp Node Interfaces Ready Wait  ${dut1_node}
Vpp Node Interfaces Ready Wait  ${dut2_node}

traffic module

Send Packet And Check Headers

Sends packet from IP (with source MAC) to IP (with destination MAC). Four MAC addresses are needed when using 2 nodes + xconnect (one for each Ethernet interface).

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2)

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • src_ip - IP of source interface (TG-if1). Type: string
  • dst_ip - IP of destination interface (TG-if2). Type: string
  • tx_src_port - Interface of TG-if1. Type: string
  • tx_src_mac - MAC address of TG-if1. Type: string
  • tx_dst_mac - MAC address of DUT-if1. Type: string
  • rx_port - Interface of TG-if2. Type: string
  • rx_src_mac - MAC address of DUT1-if2. Type: string
  • rx_dst_mac - MAC address of TG-if2. Type: string

Return: - No value returned

Example:

| Send Packet And Check Headers | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:ee:fd:b3 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:7d:fd:10 |


${tx_port_name}=  Get interface name  ${tg_node}  ${tx_src_port}
${rx_port_name}=  Get interface name  ${tg_node}  ${rx_port}
${args}=  Catenate  --tg_src_mac  ${tx_src_mac}  --tg_dst_mac  ${rx_dst_mac}  --dut_if1_mac  ${tx_dst_mac}  --dut_if2_mac  ${rx_src_mac}  --src_ip  ${src_ip}  --dst_ip  ${dst_ip}  --tx_if  ${tx_port_name}  --rx_if  ${rx_port_name}
Run Traffic Script On Node  send_icmp_check_headers.py  ${tg_node}  ${args}

Send packet from Port to Port should failed

Sends packet from IP (with specified MAC) to IP (with destination MAC) using keyword “Send Packet And Check Headers” and expects the packet not to be received (ICMP echo Rx timeout).

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2)

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • src_ip - IP of source interface (TG-if1). Type: string
  • dst_ip - IP of destination interface (TG-if2). Type: string
  • tx_src_port - Interface of TG-if1. Type: string
  • tx_src_mac - MAC address of TG-if1. Type: string
  • tx_dst_mac - MAC address of DUT-if1. Type: string
  • rx_port - Interface of TG-if2. Type: string
  • rx_src_mac - MAC address of DUT1-if2. Type: string
  • rx_dst_mac - MAC address of TG-if2. Type: string

Return: - No value returned

Example:

| Send packet from Port to Port should failed | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:ee:fd:b3 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:7d:fd:10 |


${tx_port_name}=  Get interface name  ${tg_node}  ${tx_src_port}
${rx_port_name}=  Get interface name  ${tg_node}  ${rx_port}
${args}=  Catenate  --tg_src_mac  ${tx_src_mac}  --tg_dst_mac  ${rx_dst_mac}  --dut_if1_mac  ${tx_dst_mac}  --dut_if2_mac  ${rx_src_mac}  --src_ip  ${src_ip}  --dst_ip  ${dst_ip}  --tx_if  ${tx_port_name}  --rx_if  ${rx_port_name}
Run Keyword And Expect Error  ICMP echo Rx timeout  Run Traffic Script On Node  send_icmp_check_headers.py  ${tg_node}  ${args}

Send Packet And Check ARP Request

Send IP packet from tx_port and check if an ARP Request packet is received on rx_port.

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2)

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • tx_src_ip - Source IP address of transferred packet (TG-if1). Type: string
  • tx_dst_ip - Destination IP address of transferred packet (TG-if2). Type: string
  • tx_port - Interface from which the IP packet is sent (TG-if1). Type: string
  • tx_dst_mac - Destination MAC address of IP packet (DUT-if1). Type: string
  • rx_port - Interface where the IP packet is received (TG-if2). Type: string
  • rx_src_mac - Source MAC address of ARP packet (DUT-if2). Type: string
  • rx_arp_src_ip - Source IP address of ARP packet (DUT-if2). Type: string
  • rx_arp_dst_ip - Destination IP address of ARP packet. Type: string

Return: - No value returned

Example:

| Send Packet And Check ARP Request | ${nodes['TG']} | 16.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:cc:4f:54 | eth4 | 08:00:27:5b:49:dd | 192.168.2.1 | 192.168.2.2 |


${tx_port_name}=  Get interface name  ${tg_node}  ${tx_port}
${rx_port_name}=  Get interface name  ${tg_node}  ${rx_port}
${args}=  Catenate  --tx_dst_mac  ${tx_dst_mac}  --rx_src_mac  ${rx_src_mac}  --tx_src_ip  ${tx_src_ip}  --tx_dst_ip  ${tx_dst_ip}  --tx_if  ${tx_port_name}  --rx_if  ${rx_port_name}  --rx_arp_src_ip ${rx_arp_src_ip}  --rx_arp_dst_ip ${rx_arp_dst_ip}
Run Traffic Script On Node  send_icmp_check_arp.py  ${tg_node}  ${args}

Send TCP or UDP packet

Sends TCP or UDP packet with specified source and destination port.

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2)

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • src_ip - IP of source interface (TG-if1). Type: string
  • dst_ip - IP of destination interface (TG-if2). Type: string
  • tx_port - Source interface (TG-if1). Type: string
  • tx_mac - MAC address of source interface (TG-if1). Type: string
  • rx_port - Destination interface (TG-if2). Type: string
  • rx_mac - MAC address of destination interface (TG-if2). Type: string
  • protocol - Type of protocol. Type: string
  • source_port - Source TCP/UDP port. Type: string or integer
  • destination_port - Destination TCP/UDP port. Type: string or integer

Return: - No value returned

Example:

| Send TCP or UDP packet | ${nodes['TG']} | 16.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:cc:4f:54 | eth4 | 08:00:27:c9:6a:d5 | TCP | 20 | 80 |


${tx_port_name}=  Get interface name  ${tg_node}  ${tx_port}
${rx_port_name}=  Get interface name  ${tg_node}  ${rx_port}
${args}=  Catenate  --tx_mac  ${tx_mac}  --rx_mac  ${rx_mac}  --src_ip  ${src_ip}  --dst_ip  ${dst_ip}  --tx_if  ${tx_port_name}  --rx_if  ${rx_port_name}  --protocol  ${protocol}  --source_port  ${source_port}  --destination_port  ${destination_port}
Run Traffic Script On Node  send_tcp_udp.py  ${tg_node}  ${args}

Send TCP or UDP packet should failed

Sends TCP or UDP packet with specified source and destination port and expects the packet not to be received (TCP/UDP Rx timeout).

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2)

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • src_ip - IP of source interface (TG-if1). Type: string
  • dst_ip - IP of destination interface (TG-if2). Type: string
  • tx_port - Source interface (TG-if1). Type: string
  • tx_mac - MAC address of source interface (TG-if1). Type: string
  • rx_port - Destination interface (TG-if2). Type: string
  • rx_mac - MAC address of destination interface (TG-if2). Type: string
  • protocol - Type of protocol. Type: string
  • source_port - Source TCP/UDP port. Type: string or integer
  • destination_port - Destination TCP/UDP port. Type: string or integer

Return: - No value returned

Example:

| Send TCP or UDP packet should failed | ${nodes['TG']} | 16.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:cc:4f:54 | eth4 | 08:00:27:c9:6a:d5 | TCP | 20 | 80 |


${tx_port_name}=  Get interface name  ${tg_node}  ${tx_port}
${rx_port_name}=  Get interface name  ${tg_node}  ${rx_port}
${args}=  Catenate  --tx_mac  ${tx_mac}  --rx_mac  ${rx_mac}  --src_ip  ${src_ip}  --dst_ip  ${dst_ip}  --tx_if  ${tx_port_name}  --rx_if  ${rx_port_name}  --protocol  ${protocol}  --source_port  ${source_port}  --destination_port  ${destination_port}
Run Keyword And Expect Error  TCP/UDP Rx timeout  Run Traffic Script On Node  send_tcp_udp.py  ${tg_node}  ${args}

Receive And Check Router Advertisement Packet

Wait until an RA packet is received and then verify specific fields of the received RA packet.

Arguments:

  • node - Node where to check for RA packet. Type: dictionary
  • rx_port - Interface where the packet is received. Type: string
  • src_mac - MAC address of source interface from which the link-local IPv6 address is constructed and checked. Type: string
  • interval - Configured retransmit interval. Optional. Type: integer

Return: - No value returned

Example:

| Receive And Check Router Advertisement Packet | ${nodes['DUT1']} | eth2 | 08:00:27:cc:4f:54 |


${rx_port_name}=  Get interface name  ${node}  ${rx_port}
${args}=  Catenate  --rx_if ${rx_port_name}  --src_mac ${src_mac}  --interval ${interval}
Run Traffic Script On Node  check_ra_packet.py  ${node}  ${args}
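
The check derives the expected RA source address from src_mac via the modified EUI-64 procedure (RFC 4291): flip the universal/local bit of the first MAC byte and insert ff:fe between the two halves, under the fe80::/64 link-local prefix. A sketch of that derivation (illustrative helper, not the actual traffic-script code):

```python
# Sketch of the modified EUI-64 derivation the RA check relies on.
def mac_to_link_local(mac):
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                            # flip the U/L bit (RFC 4291)
    eui64 = b[:3] + [0xFF, 0xFE] + b[3:]    # insert ff:fe in the middle
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    # Leading-zero trimming / zero compression is skipped here; normalize
    # with ipaddress.IPv6Address(...).compressed for a canonical form.
    return "fe80::" + ":".join(groups)

print(mac_to_link_local("08:00:27:cc:4f:54"))
```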

Send Router Solicitation and check response

Send an RS packet, wait for a response, and then verify specific fields of the received RA packet.

Arguments:

  • tg_node - TG node to send RS packet from. Type: dictionary
  • dut_node - DUT node to send RS packet to. Type: dictionary
  • rx_port - DUT interface on which the packet is received. Type: string
  • tx_port - TG interface from which the packet is sent. Type: string
  • src_ip - Source IP address of RS packet. Optional. If not provided,link local address will be used. Type: string

Return: - No value returned

Example:

| Send Router Solicitation and check response | ${nodes['TG']} | ${nodes['DUT1']} | eth2 | GigabitEthernet0/8/0 | 10::10 |


${src_mac}=  Get Interface Mac  ${tg_node}  ${tx_port}
${dst_mac}=  Get Interface Mac  ${dut_node}  ${rx_port}
${src_int_name}=  Get interface name  ${tg_node}  ${tx_port}
${dst_int_name}=  Get interface name  ${dut_node}  ${rx_port}
${args}=  catenate  --rx_if ${dst_int_name} --tx_if ${src_int_name}  --src_mac ${src_mac}  --dst_mac ${dst_mac}  --src_ip ${src_ip}
Run Traffic Script On Node  send_rs_check_ra.py  ${tg_node}  ${args}

Send ARP Request

Send ARP Request and check if the ARP Response is received.

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)<->(if1)DUT

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • tx_port - Interface from which the ARP packet is sent (TG-if1). Type: string
  • src_mac - Source MAC address of ARP packet (TG-if1). Type: string
  • tgt_mac - Target MAC address which is expected in the response (DUT-if1). Type: string
  • src_ip - Source IP address of ARP packet (TG-if1). Type: string
  • tgt_ip - Target IP address of ARP packet (DUT-if1). Type: string

Return: - No value returned

Example:

| Send ARP Request | ${nodes['TG']} | eth3 | 08:00:27:cc:4f:54 | 08:00:27:c9:6a:d5 | 10.0.0.100 | 192.168.1.5 |


${args}=  Catenate  --tx_if  ${tx_port}  --src_mac  ${src_mac}  --dst_mac  ${tgt_mac}  --src_ip  ${src_ip}  --dst_ip  ${tgt_ip}
Run Traffic Script On Node  arp_request.py  ${tg_node}  ${args}
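
For reference, the ARP Request the traffic script is expected to emit follows the RFC 826 layout: a broadcast Ethernet frame (EtherType 0x0806) carrying a 28-byte ARP payload with opcode 1 and an all-zero target hardware address. An illustrative construction (helper names are hypothetical, not the arp_request.py implementation):

```python
import struct

# Illustrative ARP Request builder (not CSIT code): RFC 826 field layout.
def build_arp_request(src_mac, src_ip, tgt_ip):
    def mac(m):
        return bytes(int(x, 16) for x in m.split(":"))
    def ip(a):
        return bytes(int(x) for x in a.split("."))
    eth = mac("ff:ff:ff:ff:ff:ff") + mac(src_mac) + struct.pack("!H", 0x0806)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)  # Ethernet/IPv4, opcode 1
    arp += mac(src_mac) + ip(src_ip)                 # sender hw + proto addr
    arp += mac("00:00:00:00:00:00") + ip(tgt_ip)     # target hw unknown
    return eth + arp

pkt = build_arp_request("08:00:27:cc:4f:54", "10.0.0.100", "192.168.1.5")
print(len(pkt))  # 14-byte Ethernet header + 28-byte ARP payload = 42
```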

Send ARP Request should failed

Send an ARP Request and check that no ARP Response is received.

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)<->(if1)DUT

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • tx_port - Interface from which the ARP packet is sent (TG-if1). Type: string
  • src_mac - Source MAC address of ARP packet (TG-if1). Type: string
  • tgt_mac - Target MAC address which is expected in the response (DUT-if1). Type: string
  • src_ip - Source IP address of ARP packet (TG-if1). Type: string
  • tgt_ip - Target IP address of ARP packet (DUT-if1). Type: string

Return: - No value returned

Example:

| Send ARP Request should failed | ${nodes['TG']} | eth3 | 08:00:27:cc:4f:54 | 08:00:27:c9:6a:d5 | 10.0.0.100 | 192.168.1.5 |


${args}=  Catenate  --tx_if  ${tx_port}  --src_mac  ${src_mac}  --dst_mac  ${tgt_mac}  --src_ip  ${src_ip}  --dst_ip  ${tgt_ip}
Run Keyword And Expect Error  ARP reply timeout  Run Traffic Script On Node  arp_request.py  ${tg_node}  ${args}

Send Packets And Check Multipath Routing

Send 100 IP ICMP packets and check whether the traffic is divided into two paths.

Arguments:

_NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2)

  • tg_node - Node to execute scripts on (TG). Type: dictionary
  • src_port - Interface of TG-if1. Type: string
  • dst_port - Interface of TG-if2. Type: string
  • src_ip - IP of source interface (TG-if1). Type: string
  • dst_ip - IP of destination interface (TG-if2). Type: string
  • tx_src_mac - MAC address of TG-if1. Type: string
  • tx_dst_mac - MAC address of DUT-if1. Type: string
  • rx_src_mac - MAC address of DUT-if2. Type: string
  • rx_dst_mac_1 - MAC address of interface for path 1. Type: string
  • rx_dst_mac_2 - MAC address of interface for path 2. Type: string

Return: - No value returned

Example:

| Send Packets And Check Multipath Routing | ${nodes['TG']} | eth2 | eth3 | 16.0.0.1 | 32.0.0.1 | 08:00:27:cc:4f:54 | 08:00:27:c9:6a:d5 | 08:00:27:54:59:f9 | 02:00:00:00:00:02 | 02:00:00:00:00:03 |


${src_port_name}=  Get interface name  ${tg_node}  ${src_port}
${dst_port_name}=  Get interface name  ${tg_node}  ${dst_port}
${args}=  Catenate  --tx_if  ${src_port_name}  --rx_if  ${dst_port_name}  --src_ip  ${src_ip}  --dst_ip  ${dst_ip}  --tg_if1_mac  ${tx_src_mac}  --dut_if1_mac  ${tx_dst_mac}  --dut_if2_mac  ${rx_src_mac}  --path_1_mac  ${rx_dst_mac_1}  --path_2_mac  ${rx_dst_mac_2}
Run Traffic Script On Node  send_icmp_check_multipath.py  ${tg_node}  ${args}
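
The split the script verifies comes from VPP hashing each flow onto one of the two next hops. A minimal stdlib model of that hash-based selection (the hash function and flow key are illustrative assumptions, not VPP's actual algorithm):

```python
import hashlib

def pick_path(src_ip, dst_ip, src_port, path_macs):
    """Hash the flow tuple onto one of the candidate next-hop MACs."""
    key = f"{src_ip}-{dst_ip}-{src_port}".encode()
    idx = int(hashlib.sha256(key).hexdigest(), 16) % len(path_macs)
    return path_macs[idx]

paths = ["02:00:00:00:00:02", "02:00:00:00:00:03"]
# 100 flows differing only in source port should land on both paths.
seen = {pick_path("16.0.0.1", "32.0.0.1", port, paths) for port in range(100)}
```

Because the hash is per-flow, packets of one flow stay on one path; only varying the tuple spreads traffic across both.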

vrf module

Setup VRF on DUT

The keyword sets up a FIB table on a DUT, assigns two interfaces to it, adds two ARP entries and a route; see the example.

Arguments:

  • node - DUT node. Type: dictionary
  • table - FIB table ID. Type: integer
  • route_interface - Destination interface to be assigned to FIB. Type: string
  • route_gateway_ip - Route gateway IP address. Type: string
  • route_gateway_mac - Route gateway MAC address. Type: string
  • route_dst_ip - Route destination IP. Type: string
  • vrf_src_if - Source interface to be assigned to FIB. Type: string
  • src_if_ip - IP address of the source interface. Type: string
  • src_if_mac - MAC address of the source interface. Type: string
  • prefix_len - Prefix length. Type: integer

Example:

Three-node topology: TG_if1 - DUT1_if1-DUT1_if2 - DUT2_if1-DUT2_if2 - TG_if2

Create one VRF on each DUT:

| Setup VRF on DUT | ${dut1_node} | ${dut1_fib_table} | ${dut1_to_dut2} | ${dut2_to_dut1_ip4} | ${dut2_to_dut1_mac} | ${tg2_ip4} | ${dut1_to_tg} | ${tg1_ip4} | ${tg_to_dut1_mac} | 24 |
| Setup VRF on DUT | ${dut2_node} | ${dut2_fib_table} | ${dut2_to_dut1} | ${dut1_to_dut2_ip4} | ${dut1_to_dut2_mac} | ${tg1_ip4} | ${dut2_to_tg} | ${tg2_ip4} | ${tg_to_dut2_mac} | 24 |


${route_interface_idx}=  Get Interface SW Index  ${node}  ${route_interface}
Add fib table  ${node}  ${route_dst_ip}  ${prefix_len}  ${table}  via ${route_gateway_ip} sw_if_index ${route_interface_idx} multipath
Assign Interface To Fib Table  ${node}  ${route_interface}  ${table}
Assign Interface To Fib Table  ${node}  ${vrf_src_if}  ${table}
Add IP Neighbor  ${node}  ${vrf_src_if}  ${src_if_ip}  ${src_if_mac}  vrf=${table}
Add IP Neighbor  ${node}  ${route_interface}  ${route_gateway_ip}  ${route_gateway_mac}  vrf=${table}
Vpp Route Add  ${node}  ${route_dst_ip}  ${prefix_len}  ${route_gateway_ip}  ${route_interface}  vrf=${table}
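
The table argument keys every lookup: the same destination can resolve differently in different VRFs, and within one table the most specific prefix wins. A minimal Python model of per-table longest-prefix matching (the table contents here are hypothetical, not the test's addresses):

```python
import ipaddress

# One routing table per VRF: table id -> {network: next hop}.
fibs = {
    1: {ipaddress.ip_network("0.0.0.0/0"): "172.16.0.2",
        ipaddress.ip_network("32.0.0.0/24"): "192.168.2.2"},
}

def lookup(table, dst):
    """Longest-prefix match restricted to a single VRF table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fibs[table] if addr in net]
    best = max(matches, key=lambda n: n.prefixlen)  # most specific prefix wins
    return fibs[table][best]
```

A destination inside 32.0.0.0/24 resolves via the /24 entry; anything else falls through to the default route of that same table.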

vxlan module

IP addresses are set on interfaces

Set IPv4 addresses on DUT interfaces. If an interface index is None, it is determined with Get Interface Sw Index; in that case the interface must be present in the topology dictionary. The keyword also adds static IP neighbor entries on both DUTs, mapping each peer's IP address to its MAC address.

_Set testcase variables with IP addresses and prefix length:_ - ${dut1s_ip_address} - ${dut2s_ip_address} - ${duts_ip_address_prefix}


Set Test Variable  ${dut1s_ip_address}  172.16.0.1
Set Test Variable  ${dut2s_ip_address}  172.16.0.2
Set Test Variable  ${duts_ip_address_prefix}  24
${DUT1_INT_KEY}=  Run Keyword If  ${DUT1_INT_INDEX} is None  Get Interface by name  ${DUT1}  ${DUT1_INT_NAME}
${DUT2_INT_KEY}=  Run Keyword If  ${DUT2_INT_INDEX} is None  Get Interface by name  ${DUT2}  ${DUT2_INT_NAME}
${DUT1_INT_INDEX}=  Run Keyword If  ${DUT1_INT_INDEX} is None  Get Interface Sw Index  ${DUT1}  ${DUT1_INT_KEY}  ELSE  Set Variable  ${DUT1_INT_INDEX}
${DUT2_INT_INDEX}=  Run Keyword If  ${DUT2_INT_INDEX} is None  Get Interface Sw Index  ${DUT2}  ${DUT2_INT_KEY}  ELSE  Set Variable  ${DUT2_INT_INDEX}
${DUT1_INT_MAC}=  Vpp Get Interface Mac  ${DUT1}  ${DUT1_INT_INDEX}
${DUT2_INT_MAC}=  Vpp Get Interface Mac  ${DUT2}  ${DUT2_INT_INDEX}
Set Interface Address  ${DUT1}  ${DUT1_INT_INDEX}  ${dut1s_ip_address}  ${duts_ip_address_prefix}
Set Interface Address  ${DUT2}  ${DUT2_INT_INDEX}  ${dut2s_ip_address}  ${duts_ip_address_prefix}
Add IP Neighbor  ${DUT1}  ${DUT1_INT_INDEX}  ${dut2s_ip_address}  ${DUT2_INT_MAC}
Add IP Neighbor  ${DUT2}  ${DUT2_INT_INDEX}  ${dut1s_ip_address}  ${DUT1_INT_MAC}
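
The two addresses are chosen so the DUT-DUT link is directly connected, i.e. both ends fall inside one /24. A quick stdlib check of that assumption:

```python
import ipaddress

# The addresses and prefix length set by the keyword above.
dut1 = ipaddress.ip_interface("172.16.0.1/24")
dut2 = ipaddress.ip_interface("172.16.0.2/24")

# A directly connected link requires both endpoints in the same subnet.
same_subnet = dut1.network == dut2.network
```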

VXLAN interface is created


Create VXLAN interface  ${DUT}  ${VNI}  ${SRC_IP}  ${DST_IP}
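
Create VXLAN interface passes the VNI and the tunnel source/destination IPs to VPP. On the wire the VNI ends up in an 8-byte VXLAN header following the outer UDP header, per RFC 7348; a stdlib sketch of that encoding:

```python
import struct

def vxlan_header(vni):
    """8-byte VXLAN header: flags byte with the I bit (0x08) set,
    then a 24-bit VNI, with the remaining bits reserved as zero."""
    return struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)

hdr = vxlan_header(24)
```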

Interfaces are added to BD


Vpp Add L2 Bridge Domain  ${DUT}  ${BID}  ${INTERFACE_1}  ${INTERFACE_2}

Interfaces are added to xconnect


L2 setup xconnect on DUT  ${DUT}  ${INTERFACE_1}  ${INTERFACE_2}

Vlan interfaces for VXLAN are created

Create VLAN subinterface on interfaces on DUTs with given VLAN ID.

_Set testcase variables with name and index of created interfaces:_ - ${dut1s_vlan_name} - ${dut1s_vlan_index} - ${dut2s_vlan_name} - ${dut2s_vlan_index}


${INT1_NAME}=  Get interface name  ${DUT1}  ${INT1}
${INT2_NAME}=  Get interface name  ${DUT2}  ${INT2}
${dut1s_vlan_name}  ${dut1s_vlan_index}=  Create Vlan Subinterface  ${DUT1}  ${INT1_NAME}  ${VLAN}
${dut2s_vlan_name}  ${dut2s_vlan_index}=  Create Vlan Subinterface  ${DUT2}  ${INT2_NAME}  ${VLAN}
Set Interface State  ${DUT1}  ${dut1s_vlan_index}  up
Set Interface State  ${DUT2}  ${dut2s_vlan_index}  up
Set Test Variable  ${dut1s_vlan_name}
Set Test Variable  ${dut1s_vlan_index}
Set Test Variable  ${dut2s_vlan_name}
Set Test Variable  ${dut2s_vlan_index}
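
The VLAN subinterfaces created here terminate 802.1Q-tagged traffic before it enters the VXLAN tunnel. The tag itself is just four bytes spliced in after the MAC addresses; a stdlib sketch of the insertion:

```python
import struct

def add_vlan_tag(frame, vlan_id, pcp=0):
    """Insert an 802.1Q tag (TPID 0x8100 + TCI) after the dst/src MACs."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)  # priority + 12-bit VLAN ID
    return frame[:12] + struct.pack("!HH", 0x8100, tci) + frame[12:]

# Untagged IPv4 frame: 12 MAC bytes, EtherType 0x0800, dummy payload.
frame = bytes(12) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, 100)
```

The subinterface matches on that 12-bit VLAN ID, so frames tagged with a different ID never reach it.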