Running Unit Tests
How to run s3-tests locally
RGW code can be tested by building Ceph locally from source, starting a vstart cluster, and running the “s3-tests” suite against it.
The following instructions should work on jewel and above.
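Before starting the cluster you need a finished build (this is Step 1; see Build Ceph). A minimal sketch of the usual cmake workflow, assuming the do_cmake.sh helper in the top-level directory of the clone:

$ ./do_cmake.sh
$ cd build
$ ninja        # or make, depending on the generator your build uses
$ cd ..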
Step 2 - vstart
When the build completes, and while still in the top-level directory of the git clone where you built Ceph, do the following (for cmake builds):
cd build/
RGW=1 ../src/vstart.sh -n
This will produce a lot of output as the vstart cluster is started up. At the end you should see a message like:
started. stop.sh to stop. see out/* (e.g. 'tail -f out/????') for debug output.
This means the cluster is running.
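You can sanity-check the cluster before moving on; a quick check from the build directory, using the helper binaries that the build places under bin/:

$ ./bin/ceph -s    # should show the vstart cluster's status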
Step 3 - run s3-tests
To run the s3-tests suite, do the following.
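A minimal sketch, assuming the run-s3tests.sh workunit wrapper present in the qa tree of recent Ceph releases (run it from the build directory, with the vstart cluster from Step 2 still up):

$ ../qa/workunits/rgw/run-s3tests.sh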
Running tests using vstart_runner.py
CephFS and Ceph Manager code is tested using vstart_runner.py.
Running your first test
$ virtualenv --python=python3 venv
$ source venv/bin/activate
$ pip install 'setuptools >= 12'
$ pip install git+https://github.com/ceph/teuthology#egg=teuthology[test]
$ deactivate
The above steps install teuthology in a virtual environment. Before running a test locally, build Ceph successfully from source (see Build Ceph) and do:
$ cd build
$ ../src/vstart.sh -n -d -l
$ source ~/path/to/teuthology/venv/bin/activate
$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout
The above command runs vstart_runner.py and passes the test to be executed as an argument. In a similar way, you can also run a group of tests:
$ # run all tests in class TestClientRecovery
$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery
$ # run all tests in test_client_recovery.py
$ python ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery
Based on the argument passed, vstart_runner.py collects the matching tests and executes them just as it would execute a single test.
vstart_runner.py can take the following options (a combined example follows the list):

--clear-old-log: deletes the old log file before running the test
--create: create the Ceph cluster before running a test
--create-cluster-only: creates the cluster and quits; tests can be issued later
--interactive: drops a Python shell when a test fails
--log-ps-output: logs ps output; might be useful while debugging
--teardown: tears the Ceph cluster down after the test(s) have finished running
--kclient: use the kernel cephfs client instead of FUSE
--brxnet=<net/mask>: specify a new net/mask for the mount clients' network namespace container (Default: 192.168.0.0/16)
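For example, to recreate the cluster, run a whole class of tests, and tear everything down afterwards (test name taken from the examples above):

$ python ../qa/tasks/vstart_runner.py --create --teardown --clear-old-log tasks.cephfs.test_client_recovery.TestClientRecovery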
If using the FUSE client, ensure that the fuse package is installed and enabled on the system, and that user_allow_other is added to /etc/fuse.conf.
If using the kernel client, the user must be able to run commands with passwordless sudo. A failure on the kernel client may crash the host, so it is recommended to use this functionality within a virtual machine.
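For instance, to run the earlier test against the kernel client instead of FUSE (with passwordless sudo configured as described above):

$ python ../qa/tasks/vstart_runner.py --kclient tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout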
Internal working of vstart_runner.py
vstart_runner.py primarily does three things -
- collects and runs the tests
vstart_runner.py sets up and tears down the cluster, and collects and runs the tests. This is implemented in methods such as exec_test(), where all the options that vstart_runner.py takes are implemented, along with other features like logging and copying the traceback to the bottom of the log.
- provides an interface for issuing and testing shell commands
The tests are written assuming that the cluster exists on remote machines. vstart_runner.py provides an interface to run the same tests against a cluster that exists on the local machine. This is done using the class LocalRemote. Class LocalRemoteProcess can manage the process that executes the commands from LocalRemote, class LocalDaemon provides an interface to handle Ceph daemons, and class LocalFuseMount can create and handle FUSE mounts.
- provides an interface to operate the Ceph cluster
Class LocalCephManager provides methods to run Ceph cluster commands with and without the admin socket, and class LocalCephCluster provides methods to set or clear ceph.conf.