Running Unit Tests

How to run s3-tests locally

RGW code can be tested by building Ceph locally from source, starting a vstart cluster, and running the “s3-tests” suite against it.

The following instructions should work on jewel and above.

Step 1 - build Ceph

Refer to Build Ceph.

You can do step 2 separately while it is building.
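
For reference, a typical cmake build looks roughly like the following; see the Build Ceph document for the full and authoritative steps:

$ ./install-deps.sh
$ ./do_cmake.sh
$ cd build/
$ ninja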

Step 2 - vstart

When the build completes, while still in the top-level directory of the git clone where you built Ceph, run the following (for cmake builds):

cd build/
RGW=1 ../src/vstart.sh -n

This will produce a lot of output as the vstart cluster is started up. At the end you should see a message like:

started.  stop.sh to stop.  see out/* (e.g. 'tail -f out/????') for debug output.

This means the cluster is running.
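
You can confirm that the vstart cluster is up by querying its status with the ceph binary from the build directory:

$ ./bin/ceph -s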

Step 3 - run s3-tests

To run the s3-tests suite, do the following:

$ ../qa/workunits/rgw/run-s3tests.sh
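
run-s3tests.sh drives the whole suite against the vstart cluster. If you want to iterate on a single test instead, one approach (a sketch only; the configuration file name and test name below are illustrative assumptions) is to run it directly from a checkout of the s3-tests repository:

$ git clone https://github.com/ceph/s3-tests
$ cd s3-tests
$ # s3tests.conf (an assumed name) must point at the vstart RGW endpoint and credentials
$ S3TEST_CONF=s3tests.conf tox -- s3tests_boto3/functional/test_s3.py::test_bucket_list_empty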

Running test using vstart_runner.py

CephFS and Ceph Manager code can be tested using vstart_runner.py.

Running your first test

The Python tests in the Ceph repository can be executed on your local machine using vstart_runner.py. To do that, you need teuthology installed:

$ git clone https://github.com/ceph/teuthology
$ cd teuthology
$ ./bootstrap install

This will create a virtual environment named virtualenv in the root of the teuthology repository and install teuthology into it.
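
To use that environment later, activate it from the root of the teuthology checkout:

$ source ./virtualenv/bin/activate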

You can also install teuthology via pip if you would like to use a custom virtual environment; in that case pip installs teuthology directly from the git repository:

$ virtualenv --python=python3 venv
$ source venv/bin/activate
$ pip install 'setuptools >= 12'
$ pip install teuthology[test]@git+https://github.com/ceph/teuthology
$ deactivate

If for some unforeseen reason the above approaches do not work (perhaps the bootstrap script fails due to a bug, or you cannot download teuthology at the moment), teuthology can be installed manually from a copy of the teuthology repository already present on your machine:

$ cd teuthology
$ virtualenv -p python3 venv
$ source venv/bin/activate
$ pip install -r requirements.txt
$ pip install .
$ deactivate

The above steps install teuthology in a virtual environment. Before running a test locally, build Ceph successfully from source (refer to Build Ceph) and do:

$ cd build
$ ../src/vstart.sh -n -d -l
$ source ~/path/to/teuthology/venv/bin/activate

To run a specific test, say test_reconnect_timeout from TestClientRecovery in qa/tasks/cephfs/test_client_recovery, you can do:

$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout

The above command runs vstart_runner.py and passes the test to be executed as an argument to vstart_runner.py. In a similar way, you can also run a group of tests in the following manner:

$ # run all tests in class TestClientRecovery
$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery
$ # run all tests in test_client_recovery.py
$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery

Based on the argument passed, vstart_runner.py collects the tests and executes them just as it executes a single test.

vstart_runner.py can take the following options (a combined example follows the list) -

--clear-old-log

deletes old log file before running the test

--create

create Ceph cluster before running a test

--create-cluster-only

creates the cluster and quits; tests can be issued later

--interactive

drops a Python shell when a test fails

--log-ps-output

logs ps output; might be useful while debugging

--teardown

tears the Ceph cluster down after the test(s) have finished running

--kclient

use the kernel cephfs client instead of FUSE

--brxnet=<net/mask>

specify a new net/mask for the mount clients’ network namespace container (Default: 192.168.0.0/16)
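
For example, a run that creates a fresh cluster, executes one class of tests, and tears the cluster down afterwards could look like this (run from the build directory with the teuthology virtual environment activated):

$ python ../qa/tasks/vstart_runner.py --create --teardown tasks.cephfs.test_client_recovery.TestClientRecovery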

Note

If using the FUSE client, ensure that the fuse package is installed and enabled on the system and that user_allow_other is added to /etc/fuse.conf.
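
For example, a minimal sketch of that fuse.conf change (assuming the fuse package is already installed) is:

$ # append user_allow_other only if it is not already present
$ grep -q '^user_allow_other' /etc/fuse.conf || echo user_allow_other | sudo tee -a /etc/fuse.conf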

Note

If using the kernel client, the user must have the ability to run commands with passwordless sudo access.
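
As a sketch only, and assuming a placeholder username ceph-dev, passwordless sudo can be granted with a sudoers drop-in such as:

$ # "ceph-dev" is a placeholder; substitute your own username
$ echo 'ceph-dev ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/ceph-dev
$ sudo chmod 0440 /etc/sudoers.d/ceph-dev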

Note

A failure on the kernel client may crash the host, so it’s recommended to use this functionality within a virtual machine.

Internal working of vstart_runner.py -

vstart_runner.py primarily does three things -

  • collects and runs the tests

    vstart_runner.py sets up/tears down the cluster and collects and runs the tests. This is implemented using the methods scan_tests(), load_tests() and exec_test(). This is where all the options that vstart_runner.py takes are implemented, along with other features like logging and copying the traceback to the bottom of the log.

  • provides an interface for issuing and testing shell commands

    The tests are written assuming that the cluster exists on remote machines. vstart_runner.py provides an interface to run the same tests with a cluster that exists on the local machine. This is done using the class LocalRemote. The class LocalRemoteProcess manages the process that executes the commands from LocalRemote, the class LocalDaemon provides an interface to handle Ceph daemons, and the class LocalFuseMount can create and handle FUSE mounts.

  • provides an interface to operate Ceph cluster

    LocalCephManager provides methods to run Ceph cluster commands with and without admin socket and LocalCephCluster provides methods to set or clear ceph.conf.

Note

vstart_runner.py deletes “adjust-ulimits” and “ceph-coverage” from the command arguments unconditionally since they are not applicable when tests are run on a developer’s machine.

Note

“omit_sudo” is reset to False unconditionally for the commands “passwd” and “chown”.

Note

The presence of a binary file named after the first argument is checked for in <ceph-repo-root>/build/bin/. If present, the first argument is replaced with the path to that binary file.

Running Workunits Using vstart_environment.sh

Code can be tested by building Ceph locally from source, starting a vstart cluster, and running any suite against it. Similar to s3-tests, other workunits can be run against the cluster by configuring your environment.

Set up the environment

Configure your environment:

$ . ./build/vstart_environment.sh

Running a test

To run a workunit (e.g. mon/osd.sh), do the following:

$ ./qa/workunits/mon/osd.sh