Running Unit Tests

How to run s3-tests locally

RGW code can be tested by building Ceph locally from source, starting a vstart cluster, and running the “s3-tests” suite against it.

The following instructions should work on jewel and above.

Step 1 - build Ceph

Refer to Build Ceph.
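The exact procedure is covered in the Build Ceph guide; as a rough sketch, assuming a standard cmake build (commands may vary by release and distribution):

./install-deps.sh   # install build dependencies
./do_cmake.sh       # configure a cmake build tree in build/
cd build && ninja   # or make, depending on the cmake generator
cd ..               # return to the top-level directory for step 2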

You can do step 2 separately while Ceph is building.

Step 2 - vstart

When the build completes, and while you are still in the top-level directory of the git clone where you built Ceph, run the following (for cmake builds):

cd build/
RGW=1 ../src/vstart.sh -n

This will produce a lot of output as the vstart cluster is started up. At the end you should see a message like:

started.  stop.sh to stop.  see out/* (e.g. 'tail -f out/????') for debug output.

This means the cluster is running.
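To sanity-check it, you can query the cluster with the ceph CLI that the build places in build/bin (assuming you are still in the build/ directory):

./bin/ceph -s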

Step 3 - run s3-tests

To run the s3-tests suite, do the following:

$ ../qa/workunits/rgw/run-s3tests.sh
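When you are done, the vstart cluster can be stopped with the stop.sh script mentioned in the startup message; from the build/ directory that is typically:

$ ../src/stop.sh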

Running tests using vstart_runner.py

CephFS and Ceph Manager code can be tested using vstart_runner.py.

Running your first test

The Python tests in the Ceph repository can be executed on your local machine using vstart_runner.py. To do that, you need teuthology installed:

$ virtualenv --python=python3 venv
$ source venv/bin/activate
$ pip install 'setuptools >= 12'
$ pip install git+https://github.com/ceph/teuthology#egg=teuthology[test]
$ deactivate
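Optionally, confirm that teuthology ended up in the virtual environment (pip show prints the package metadata if the install succeeded):

$ venv/bin/pip show teuthology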

The above steps install teuthology in a virtual environment. Before running a test locally, build Ceph successfully from source (refer to Build Ceph) and do:

$ cd build
$ ../src/vstart.sh -n -d -l
$ source ~/path/to/teuthology/venv/bin/activate

To run a specific test, say test_reconnect_timeout from TestClientRecovery in qa/tasks/cephfs/test_client_recovery, you can do:

$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout

The above command runs vstart_runner.py with the test to be executed passed as an argument. In a similar way, you can also run a group of tests:

$ # run all tests in class TestClientRecovery
$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery
$ # run all tests in test_client_recovery.py
$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery

Based on the argument passed, vstart_runner.py collects the tests and executes them just as it would execute a single test.

vstart_runner.py can take the following options (see the example invocations after this list) -

--clear-old-log

deletes old log file before running the test

--create

creates the Ceph cluster before running a test

--create-cluster-only

creates the cluster and quits; tests can be issued later

--interactive

drops a Python shell when a test fails

--log-ps-output

logs ps output; might be useful while debugging

--teardown

tears the Ceph cluster down after the test(s) have finished running

--kclient

uses the kernel CephFS client instead of FUSE

--brxnet=<net/mask>

specifies a new net/mask for the mount clients’ network namespace container (Default: 192.168.0.0/16)
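For example, these options can be combined with any of the test specifications shown above; the invocations below are illustrative, so adjust the test names to whatever you want to run:

$ # create the cluster once, then run tests against it later
$ python ../qa/tasks/vstart_runner.py --create-cluster-only
$ python ../qa/tasks/vstart_runner.py --clear-old-log tasks.cephfs.test_client_recovery.TestClientRecovery
$ # create the cluster, run a test module, and tear the cluster down afterwards
$ python ../qa/tasks/vstart_runner.py --create --teardown tasks.cephfs.test_client_recovery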

Note

If using the FUSE client, ensure that the fuse package is installed and enabled on the system and that user_allow_other is added to /etc/fuse.conf.
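For example, one way to append the required line, if it is not already present:

$ echo user_allow_other | sudo tee -a /etc/fuse.conf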

Note

If using the kernel client, the user must have the ability to run commands with passwordless sudo access. A failure on the kernel client may crash the host, so it’s recommended to use this functionality within a virtual machine.
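For example, to run the earlier test with the kernel client (preferably inside a virtual machine, as noted above):

$ python ../qa/tasks/vstart_runner.py --kclient tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout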

Internal workings of vstart_runner.py -

vstart_runner.py primarily does three things -

  • collects and runs the tests

    vstart_runner.py sets up and tears down the cluster, and collects and runs the tests. This is implemented using the methods scan_tests(), load_tests() and exec_test(). This is also where all the options that vstart_runner.py takes are implemented, along with other features like logging and copying the traceback to the bottom of the log.

  • provides an interface for issuing and testing shell commands

    The tests are written assuming that the cluster exists on remote machines. vstart_runner.py provides an interface to run the same tests against a cluster that exists on the local machine. This is done using the class LocalRemote. The class LocalRemoteProcess manages the processes that execute the commands from LocalRemote, the class LocalDaemon provides an interface to handle Ceph daemons, and the class LocalFuseMount can create and handle FUSE mounts.

  • provides an interface to operate Ceph cluster

    LocalCephManager provides methods to run Ceph cluster commands with and without the admin socket, and LocalCephCluster provides methods to set or clear ceph.conf options.