== Test Suite ==

This page is an overview of libreswan's testsuite.

[[File:testnet.png]]

== Test Frameworks ==

The testsuite can be run using several different frameworks:

* [[Test Suite - Namespace]]<br/>Fast but Linux-centric; uses the host kernel.
* [[Test Suite - KVM]]<br/>Generic and supports multiple guest OSes, but slower.
* [[Test Suite - Docker]]<br/>Linux-centric, using the host kernel. Ideal for build tests: it can build using various Linux distributions (CentOS 6, 7 and 8, Fedora 28 through rawhide, Debian, Ubuntu) and can also run tests using systemd.

The recommended framework is [[Test Suite - Docker]].
== Travis continuous integration ==

Instead of using virtual machines, it is possible to use [[Test Suite - Docker]]:

* [https://travis-ci.org/github/libreswan Main branch on Fedora]
* [https://travis-ci.org/github/antonyantony/libreswan/branches More branches]


== Coverity static analysis - manually updated ==

* [https://scan.coverity.com/projects/antonyantony-libreswan/view_defects Coverity]

== Run an individual test (or tests) ==

{{ ambox | nocat=true | type=important | text = The instructions below use the KVM framework: libvirt 0.9.11 and qemu 1.0 or better are required. RHEL does not support a writable 9p filesystem, so the recommended host/guest OS is Fedora 22. }}
All the test cases involving VMs are located in the libreswan source tree under testing/pluto/. The most basic test case is called basic-pluto-01. Each test case consists of a few files:
 
* description.txt explains what this test case actually tests
* ipsec.conf files - the one for host west is called west.conf. These can also include configuration files for strongswan or racoon2 for interop testing
* ipsec.secrets files - if non-default secrets are used. These also use the per-host naming, eg west.secrets, east.secrets
* An init.sh file for each VM that needs to start (eg westinit.sh, eastinit.sh, etc)
* One run.sh file for the host that is the initiator (eg westrun.sh)
* Known-good (sanitized) output for each VM (eg west.console.txt, east.console.txt)
* testparams.sh if there are any non-default test parameters
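
Putting that together, the directory for basic-pluto-01 looks roughly like this (an illustrative listing based on the file roles above, not verbatim output):

<pre>
$ ls testing/pluto/basic-pluto-01/
description.txt  east.conf  eastinit.sh  east.console.txt
west.conf  westinit.sh  westrun.sh  west.console.txt
</pre>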
 
You can run this test case by issuing the following command on the host:
 
Either:
 
<pre>
make kvm-test KVM_TESTS+=testing/pluto/basic-pluto-01/
</pre>
 
or:
 
<pre>
./testing/utils/kvmtest.py testing/pluto/basic-pluto-01
</pre>
 
Multiple tests can be selected with:
 
<pre>
make kvm-test KVM_TESTS+=testing/pluto/basic-pluto-*
</pre>
 
or results for one or more tests can be inspected with:
 
<pre>
./testing/utils/kvmresults.py testing/pluto/basic-pluto-*
</pre>
 
Once the test run has completed, you will see an OUTPUT/ directory in the test case directory:
 
<pre>
$ ls OUTPUT/
east.console.diff  east.console.verbose.txt  RESULT       west.console.txt   west.pluto.log
east.console.txt   east.pluto.log            swan12.pcap  west.console.diff  west.console.verbose.txt
</pre>
 
* RESULT is a text file (whose format is sure to change in the next few months) stating whether the test succeeded or failed.
* The diff files show the differences between this test run and the last known good output.
* Each VM's sanitized serial console log (eg west.console.txt)
* Each VM's unsanitized verbose console output (eg west.console.verbose.txt)
* A network capture from the bridge device (eg swan12.pcap)
* Each VM's pluto log, created with plutodebug=all (eg west.pluto.log)
* Any core dumps generated if a pluto daemon crashed
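 
A quick way to triage a finished run is to check RESULT and the diffs directly; for example (using the file names listed above):
 
<pre>
$ cat OUTPUT/RESULT                  # did the test pass or fail?
$ less OUTPUT/west.console.diff      # how the run differed from the known-good output
$ tcpdump -n -r OUTPUT/swan12.pcap   # replay the captured network traffic
</pre>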
 
== Debugging inside the VM ==
 
=== Debugging pluto on east ===
 
Terminal 1 - east: log into east, start pluto, and attach gdb
 
<pre>
make kvmsh-east
east# cd /testing/pluto/basic-pluto-01
east# sh -x ./eastinit.sh
east# gdb /usr/local/libexec/ipsec/pluto $(pidof pluto)
(gdb) c
</pre>
 
Terminal 2 - west: log into west, start pluto and the test
 
<pre>
make kvmsh-west
west# sh -x ./westinit.sh ; sh -x westrun.sh
</pre>
If pluto wasn't running, gdb would complain: ''<code>--p requires an argument</code>''
 
When pluto crashes, gdb will show that and await commands.  For example, the bt command will show a backtrace.
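 
For illustration, a crash session might look roughly like this (the function name shown is hypothetical):
 
<pre>
Program received signal SIGSEGV, Segmentation fault.
0x00007f... in process_packet ()     # hypothetical crash site
(gdb) bt                             # print the backtrace
(gdb) frame 1                        # select a frame of interest
(gdb) info locals                    # inspect its local variables
</pre>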
 
=== Debugging pluto on west ===
 
See above, but also use virt-manager as a terminal.
 
=== /root/.gdbinit ===
 
To get rid of the warning ''warning: File "/testing/pluto/ikev2-dpd-01/.gdbinit" auto-loading has been declined by your `auto-load safe-path''', run:
 
<pre>
echo "set auto-load safe-path /" >> /root/.gdbinit
</pre>
 
=== swan-transmogrify ===
 
When the VMs were installed, an XML configuration file from testing/libvirt/vm/ was used to configure each VM with the right disks, mounts and network cards. Each VM mounts the libreswan directory as /source and the libreswan/testing/ directory as /testing, which makes the /testing/guestbin/ directory available on the VMs. At boot, the VMs run /testing/guestbin/swan-transmogrify. This python script compares the MAC address of eth0 with the list of known MAC addresses from the XML files. By identifying the MAC, it knows which identity (west, east, etc) it should take on. Files are copied from /testing/baseconfigs/ into the VM's /etc directory and the network service is restarted.
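 
The core of that identity lookup can be sketched in shell (illustrative only; the real script is python, and the file locations are as described above):
 
<pre>
mac=$(cat /sys/class/net/eth0/address)
# the vm XML file that contains this MAC names the identity (west, east, ...)
grep -il "$mac" /testing/libvirt/vm/*
</pre>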
 
=== swan-build, swan-install, swan-update ===
 
These commands are used to build, install, or build+install (update) the libreswan userland and kernel code.
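 
For example, after changing the source on the host, one would typically run inside a VM (a sketch; assumes the command is on the guest PATH):
 
<pre>
west# swan-update    # rebuild the code in /source and install the result
</pre>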
 
=== swan-prep ===
 
This command is run as the first command of each test case to set up the host. It copies the required files from /testing/baseconfigs/ and the specific test case files onto the VM test machine. It does not start libreswan; that is done in the "init.sh" script.
 
The swan-prep command takes two options:
* The --x509 option is required to copy in all the required certificates and update the NSS database.
* The --46 / --6 option gives the host IPv4 and/or IPv6 connectivity. By default hosts only get IPv4 connectivity, as this reduces the noise captured with tcpdump.
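 
For example (options as described above):
 
<pre>
west# swan-prep                # basic setup, IPv4 only
west# swan-prep --x509         # also install certificates and update the NSS database
west# swan-prep --46           # give the host both IPv4 and IPv6 connectivity
</pre>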
 
=== fipson and fipsoff ===
 
These are used to fake a kernel into FIPS mode, which is required for some of the tests.
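 
Usage is simply the following (assuming fipson toggles the kernel's FIPS indicator, which can be checked via /proc):
 
<pre>
east# fipson                                 # fake FIPS mode on
east# cat /proc/sys/crypto/fips_enabled      # should now show 1
east# fipsoff                                # revert to normal
</pre>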
 
 
== Various notes ==
 
* Currently, only one test can run at a time.
* You can peek at the guests using virt-manager or you can ssh into the test machines from the host.
* ssh may be slow to prompt for the password. If so, start up the VM "nic".
* On VMs use only one CPU core. Multiple CPUs may cause pexpect to mangle output.
* 2014 Mar: DHR needed to do the following to make things work each time he rebooted the host
<pre>
$ sudo setenforce Permissive
$ ls -ld /var/lib/libvirt/qemu
drwxr-x---. 6 qemu qemu 4096 Mar 14 01:23 /var/lib/libvirt/qemu
$ sudo chmod g+w /var/lib/libvirt/qemu
$ ( cd testing/libvirt/net ; for i in * ; do sudo virsh net-start $i ; done ; )
</pre>
* to make the SELinux enforcement change persist across host reboots, edit /etc/selinux/config
* to remove "169.254.0.0/16 dev eth0  scope link  metric 1002" from "ipsec status" output:
<pre> echo 'NOZEROCONF=1' >> /etc/sysconfig/network </pre>
== To improve ==
* install and remove RPMs using swantest + make rpm support
* add a summarizing script that generates html/json to the git repo
* coredump: it has been a mystery :) systemd or some daemon appears to block coredumps on the Fedora 20 systems.
* when running multiple tests from TESTLIST, shut down the hosts before copying the OUTPUT dir. This way we get leak detection info. However, for single test runs, do not shut down.
 
== IPv6 tests ==
IPv6 test cases seem to work better when IPv6 is disabled on the KVM bridge interfaces the VMs use. The bridges are called swanXX and their config files are /etc/libvirt/qemu/networks/192_0_1.xml and similar. Remove the following line from each, then reboot or restart libvirt:
 
<pre>
<ip family="ipv6" address="2001:db8:0:1::253" prefix="64"/>
</pre>
 
After the restart, ifconfig swan01 should show no IPv6 address at all (no fe80:: link-local or any other v6 address). Then the v6 test cases should work.
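 
One way to apply the change (a sketch, assuming the libvirt network is named 192_0_1 to match its config file):
 
<pre>
$ sudo virsh net-edit 192_0_1        # delete the <ip family="ipv6" .../> line
$ sudo virsh net-destroy 192_0_1
$ sudo virsh net-start 192_0_1
$ ip -6 addr show dev swan01         # should print no addresses
</pre>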
 
Please give feedback if this hack works for you. I shall try to add more info about this.
 
== Sanitizers ==
 
* summarize output from tcpdump
* count established IKE, ESP and AH states (there is a count at the end of "ipsec status", but it is not accurate: it counts instantiated connections as loaded)
* dpd ping sanitizer: DPD tests have unpredictable packet loss for ping
