This document is for a development version of Ceph.

Analyzing and Debugging A Teuthology Job

To learn more about how to schedule an integration test, refer to Scheduling Test Run.

Viewing Test Results

When a teuthology run has completed successfully, use the pulpito dashboard to view the results:

https://pulpito.ceph.com/<job-name>/<job-id>/

or ssh into the teuthology server to view the results of the integration test:

ssh <username>@teuthology.front.sepia.ceph.com

and access teuthology archives, as in this example:

nano /a/teuthology-2021-01-06_07:01:02-rados-master-distro-basic-smithi/


This requires you to have access to the Sepia lab. To learn how to request access to the Sepia lab, see the Sepia lab documentation.

Identifying Failed Jobs

On pulpito, a job shown in red is either a failed job or a dead job. A job is a combination of daemons and configurations defined in the yaml fragments in qa/suites. Teuthology uses these configurations and runs the tasks listed in qa/tasks, which are commands that set up the test environment and test Ceph’s components. These tasks cover a large subset of use cases and help to expose bugs that are not exposed by make check testing.
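To give a feel for what such a fragment looks like, here is a minimal sketch in the style of a qa/suites yaml fragment. The roles, the config override, and the task list below are invented for illustration and are not copied from any real file in qa/suites:

```yaml
# Hypothetical suite fragment (names and values invented for illustration).
# Fragments like this are combined by teuthology into a job's final config.
roles:
- [mon.a, mgr.x, osd.0, osd.1, osd.2, client.0]
overrides:
  ceph:
    conf:
      osd:
        debug osd: 20
tasks:
- install:
- ceph:
```

Teuthology merges fragments like this one along each directory path in the suite, which is how a single suite directory expands into many job permutations.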

A job failure might be caused by one or more of the following reasons:

  • environment setup (testing on varied systems): testing compatibility with stable releases for supported versions.

  • permutation of config values: for instance, qa/suites/rados/thrash ensures that we run thrashing tests against Ceph under stressful workloads so that we can catch corner-case bugs. The final setup config yaml file used for testing can be accessed at:

/a/<job-name>/<job-id>/orig.config.yaml

More details about config.yaml can be found at detailed test config.

Triaging the cause of failure

When a job fails, you will need to read its teuthology log in order to triage the cause of its failure. Use the job’s name and id from pulpito to locate your failed job’s teuthology log:

<job-name>/<job-id>/teuthology.log
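On the teuthology server, the same log sits under the run’s archive directory. As a sketch, the path can be assembled from the run name and job id (the values below are simply the ones from the example in this section):

```shell
# Assemble the archive path of a job's teuthology.log from its run name and
# job id (values taken from the example used in this section).
run="teuthology-2021-01-06_07:01:02-rados-master-distro-basic-smithi"
job_id="5759282"
echo "/a/${run}/${job_id}/teuthology.log"
```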

Open the log file:

nano /a/<job-name>/<job-id>/teuthology.log

For example:

nano /a/teuthology-2021-01-06_07:01:02-rados-master-distro-basic-smithi/5759282/teuthology.log

Every job failure is recorded in the teuthology log as a Traceback and is added to the job summary.

Find the Traceback keyword and search the call stack and the logs for issues that caused the failure. Usually the traceback will include the command that failed.
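One way to jump straight to the traceback is to grep the log with a few lines of trailing context. The miniature log below is fabricated purely to demonstrate the search; real teuthology logs are far longer:

```shell
# Fabricated miniature log standing in for a real teuthology.log,
# created here only so the search below has something to run against.
cat > sample-teuthology.log <<'EOF'
2021-01-06T08:00:00 INFO:teuthology.run_tasks:Running task ceph...
2021-01-06T08:05:00 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "teuthology/run_tasks.py", line 91, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
CommandFailedError: Command failed on smithi061 with status 1
EOF
# Show the traceback plus the lines after it, which usually include
# the command that failed.
grep -n -A 4 "Traceback" sample-teuthology.log
```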


The teuthology logs are deleted from time to time. If you are unable to access the log in this example, just use the log of any other failed run from pulpito.

Reporting the Issue

In short: first check to see if your job failure was caused by a known issue, and if it wasn’t, raise a tracker ticket.

After you have triaged the cause of the failure and you have determined that it wasn’t caused by the changes that you made to the code, this might indicate that you have encountered a known failure in the upstream branch (in the example we’re considering in this section, the upstream branch is “octopus”). In that case, go to the Ceph tracker (https://tracker.ceph.com) and look for tracker issues related to the failure, using keywords spotted in the failure under investigation.

If you find a similar issue on the tracker, leave a comment on that issue explaining the failure as you understand it, and make sure to include a link to your recent test run. If you don’t find a similar issue, create a new tracker ticket for this issue and explain the cause of your job’s failure as thoroughly as you can. If you’re not sure what caused the job’s failure, ask one of the team members for help.

Debugging an issue using interactive-on-error

When you encounter a job failure during testing, you should attempt to reproduce it. This is where --interactive-on-error comes in. This section explains how to use interactive-on-error and what it does.

When you have verified that a job has failed, run the same job again in teuthology but add the interactive-on-error flag:

ideepika@teuthology:~/teuthology$ ./virtualenv/bin/teuthology -v --lock --block <your-config-yaml> --interactive-on-error

Use either a custom config.yaml or the yaml file from the failed job. If you use the yaml file from the failed job, copy orig.config.yaml to your local directory:

ideepika@teuthology:~/teuthology$ cp /a/teuthology-2021-01-06_07:01:02-rados-master-distro-basic-smithi/5759282/orig.config.yaml test.yaml
ideepika@teuthology:~/teuthology$ ./virtualenv/bin/teuthology -v --lock --block test.yaml --interactive-on-error

If a job fails when the interactive-on-error flag is used, teuthology will lock the machines required by config.yaml. Teuthology will halt the testing machines and hold them in the state that they were in at the time of the job failure. You will be put into an interactive python session. From there, you can ssh into the system to investigate the cause of the job failure.

After you have investigated the failure, just terminate the session. Teuthology will then clean up the session and unlock the machines.

Suggested Resources