Test Suite - Namespace

From Libreswan

This is a quick guide to running libreswan tests using namespaces. Note that the commands below install a lot of development packages. It is recommended to use a user that can run sudo without a password prompt. Note that this page currently describes testing on RPM-based systems, such as Fedora, RHEL and CentOS.

Prepare host for libreswan testing

The namespace based tests can be run on a real machine (server, laptop) or in a virtual machine (kvm, libvirt, qemu, etc). Since namespace tests can run in parallel, having more CPUs will allow you to run more tests at the same time.
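To see how many CPUs are available for parallel test runs, the standard coreutils tool can be used:

```shell
# print the number of processing units available to the current process
nproc
```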

sudo without password

To run sudo without a password, your user needs to be in the wheel group. Ensure you have the following line enabled in /etc/sudoers:

# /etc/sudoers
%wheel ALL=(ALL) NOPASSWD: ALL

Check if you can run sudo without password prompt:

sudo bash -c true

If this command asks for a password, re-check /etc/sudoers.

install testrun dependencies

On Fedora, no additional repositories are needed. On CentOS/RHEL, you might need to enable the EPEL repository. Run the following command to install all dependencies:

sudo make install-testing-rpm-dep

testing an rpm package or /usr/local install

First, clone the libreswan repository even if you are going to test a libreswan rpm. We will use this to get the latest testing infrastructure and the latest available tests.

git clone https://github.com/libreswan/libreswan libreswan.git
cd libreswan.git

There are two ways to test a specific libreswan build. One method is to install the libreswan rpm on the host. The other method is to install libreswan in /usr/local. Note that you cannot have both an rpm and a /usr/local install, as these will conflict. The testing infrastructure will detect this and error out.

installing the libreswan rpm to test

If you are testing a libreswan rpm, ensure it is installed. If you want gdb backtrace support in case a test run causes libreswan to crash, also install the source/debuginfo libreswan rpms that belong to the binary rpm.
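For example, assuming the rpm file is in the current directory (the file name below is illustrative), something like the following installs the binary package and its debuginfo:

```shell
# install the libreswan rpm under test (file name is an example)
sudo dnf install ./libreswan-*.x86_64.rpm
# install the matching debuginfo so gdb can produce useful backtraces
sudo dnf debuginfo-install libreswan
```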

installing the libreswan from source to test

If you are testing from a libreswan git directory, first ensure no rpm is installed, then install libreswan:

rpm -e libreswan
make clean
make install
# if you are running on a machine with SELinux enabled, also run the following:
restorecon -Rv /usr/local/sbin/ipsec /usr/local/libexec/ipsec

This is convenient when working on code, as it allows quickly installing a new build for a quick test. Note that uncommitted changes to the code are still used when running make install.

Creating custom libreswan rpm for testing

You can get the best of both worlds by using the libreswan.git directory to write code, and then building and installing that code via an rpm. Be sure to commit your changes, as uncommitted changes will not be packaged into the rpm. You can use a single commit and, whenever you make more changes, run "git commit -a --amend". Once your code is committed, run:

make rpm
# use the rpm version created by the above command
rpm -Uhv libreswan-3.31rc827_gc9aa82b8a6_master.x86_64.rpm

Generating X.509 certificates and DNSSEC zones

Some tests use certificates or DNSSEC zones and these must be created once:


general setup of tests


Running a single test

All the tests can be found in the testing/pluto directory. Each test has its own directory, which contains ipsec configuration files, shell scripts to direct the IPsec nodes being tested, and the reference output (the "known good" output). Once a test has run, there will be an OUTPUT directory containing the full pluto logfiles and the console output of each IPsec node (eg east.console.txt). Since some output always changes, such as dates, times and random numbers, each console output is "sanitized". The unsanitized output is available as east.console.verbose.txt

If there was a difference from the reference output, you will also see "diff" files against the reference output so you can quickly see what happened. If there was a crash, core dumps will appear in the OUTPUT directory as well, and a gdb backtrace will appear in the console output text file.

To run a single test, for example the basic-pluto-01 test, issue:

cd testing/pluto/basic-pluto-01
../../utils/nsrun --ns

You will be told if the test passed or failed, and all output files will be in the OUTPUT directory.
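After a run, the results can be inspected like this (the diff file name follows the naming pattern described above and is illustrative; exact names depend on the test and its nodes):

```shell
cd testing/pluto/basic-pluto-01
# logs, console output and any diffs against the reference output
ls OUTPUT/
# difference against the reference output, if the test failed
less OUTPUT/east.console.diff
```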

Note: there seems to be a bug that occasionally prevents the namespace from being created. If you see a Python error about creating a namespace, just ignore it and rerun the test. We hope to fix this issue in the near future.

Doing a partial or full test run

A list of available tests is located in the file testing/pluto/TESTLIST. Tests marked with "good" should pass. Tests marked with "wip" are Work In Progress. To run a (partial/full) test run, issue:

make nsrun
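Each line in TESTLIST names a test type, the test directory, and its expected status; a line looks roughly like the following (the test-type keyword shown here is illustrative and may differ between test kinds):

```
kvmplutotest basic-pluto-01 good
```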

Note: we are working on extending support to all tests. Some of these are not complete yet.

Test results

When doing a full testrun, the test results can be published on a website. This is not yet working for namespace based testing. To see the libvirt/kvm based website results of full test runs, see testing.libreswan.org.

unsupported tests

As of fall 2019, several tests are hard to run in namespaces. Some can possibly be made to run, with varying amounts of effort and motivation (patches welcome!). The KLIPS tests should be ignored, as these use an alternative kernel stack that will be completely removed from libreswan in version 3.31.

  • SELinux testing (possible): needs more attention and work. Tests that require SELinux enabled cannot co-exist with tests that require it disabled on the same host.
  • audit tests (possible): auditd and kernel messages all go to the host audit log, outside the namespace. It will require some work to filter these out of the regular host audit log.
  • FIPS tests and non-FIPS tests at the same time (impossible?)

We do not know yet if we can turn on (or fake) FIPS mode within a namespace to test the userland part of FIPS. Obviously, the kernel can only be in FIPS or non-FIPS mode, and we cannot run tests where the nodes are required to be in different FIPS modes.
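The kernel's FIPS state is global and can be checked via /proc; this is the flag that the whole host, and therefore every namespace on it, shares:

```shell
# 1 = kernel FIPS mode enabled, 0 = disabled; shared by all namespaces
cat /proc/sys/crypto/fips_enabled
```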

  • Ignored tests: "audit", "dnsoe", "fips", "ipseckey", "dnssec", "interop", "klips", "ocsp", "seccomp", "strongswan"

Tests with the above words in their name are ignored by nsrun. Some of these tests could likely run using namespaces. Some of them need extra software, such as strongswan; the default install of the strongswan rpm starts it via systemd. We need tricks to make strongswan, unbound, nsd, ocsp etc. work. Again, patches are welcome :)

  • testing kernel netlink messages or kernel crashes: tests that cause a kernel crash, BUG or WARN will break the whole test run, since all namespaces share the host kernel.
  • xfrmi (possible): there seem to be some issues when the devices are not cleaned up. You would need to reboot after a test.
  • testing libreswan code that involves systemd and/or NetworkManager (hard to impossible): this requires extended namespaces running a lot more than just libreswan, and the effort might not be worth it compared to just using the existing libvirt/kvm/qemu based testing for those tests.

future ideas

  • tracking coredumps.

Currently, when pluto, addconn or whack crashes, there will be a coredump. However, with namespaces these coredumps are mixed up, and it is hard to attribute a core to east, west, or another test case. One idea is to strictly track the PID: when we start whack, we follow its PID and track it.
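One way to make cores attributable (a sketch of the idea, not something the test suite currently does) is to set the kernel core pattern so the core file name embeds the executable name and PID:

```shell
# write cores as e.g. /var/tmp/core.pluto.12345 (%e = executable, %p = pid)
# note: kernel.core_pattern is global and shared by all namespaces on the host
sudo sysctl -w kernel.core_pattern=/var/tmp/core.%e.%p
```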

  • testing with different versions: in theory this should be easy, but it needs more work. One idea is to bind-mount the "libexec/ipsec" directory.
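The bind-mount idea could look roughly like this inside a test namespace (the source path is illustrative; this is not implemented in the test suite):

```shell
# shadow the installed libreswan helpers with another build's directory,
# so this namespace runs the alternative version (example path)
sudo mount --bind /opt/libreswan-other/libexec/ipsec /usr/local/libexec/ipsec
```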

Debugging a test

Sometimes it can be useful to have a shell within the namespace to run some manual commands, such as "ipsec status". You can enter the namespace of a node (east, west, road, etc.) of a specific test (eg basic-pluto-01, ikev2-04-basic-x509) by putting this function in your .bashrc file:

    NSENTER() {
        local ns=$1
        local nsargs="--mount=/run/mountns/${ns} --net=/run/netns/${ns} --uts=/run/utsns/${ns}"
        sudo /usr/bin/nsenter ${nsargs} /bin/bash
    }

To enter the east namespace of the completed test basic-pluto-01, run:

NSENTER east-basic-pluto-01

Details of Namespace testing