teuthology.task package

Submodules

teuthology.task.ansible module

class teuthology.task.ansible.Ansible(ctx, config)

Bases: teuthology.task.Task

A task to run ansible playbooks

Required configuration parameters:
  playbook: Required; can either be a list of plays, or a path/URL to a playbook. In the case of a path, it may be relative to the repo's on-disk location (if a repo is provided), or teuthology's working directory.

Optional configuration parameters:
  repo: A path or URL to a repo (defaults to '.'). Given a repo value of 'foo', ANSIBLE_ROLES_PATH is set to 'foo/roles'.
  branch: If pointing to a remote git repo, use this branch. Defaults to 'master'.
  hosts: A list of teuthology roles or partial hostnames (or a combination of the two). ansible-playbook will only be run against hosts that match.
  inventory: A path to be passed to ansible-playbook with the --inventory-file flag; useful for playbooks that also have vars they need access to. If this is not set, we check for /etc/ansible/hosts and use that if it exists. If it does not, we generate a temporary file to use.
  tags: A string including any (comma-separated) tags to be passed directly to ansible-playbook.
  skip_tags: A string of comma-separated tags that will be skipped by passing them to ansible-playbook using --skip-tags.
  vars: A dict of vars to be passed to ansible-playbook via the --extra-vars flag.
  group_vars: A dict with keys matching relevant group names in the playbook, and values to be written in the corresponding inventory's group_vars files. Only applies to inventories generated by this task.
  cleanup: If present, the given or generated playbook will be run again during teardown with a 'cleanup' var set to True. This will allow the playbook to clean up after itself, if the playbook supports this feature.
  reconnect: If set to True (the default), then reconnect to hosts after ansible-playbook completes. This is in case the playbook makes changes to the SSH configuration, or user accounts - we would want to reflect those changes immediately.

Examples:

tasks:
- ansible:
    repo: https://github.com/ceph/ceph-cm-ansible.git
    playbook:
      - roles:
        - some_role
        - another_role
    hosts:
      - client.0
      - host1

tasks:
- ansible:
    repo: /path/to/repo
    inventory: /path/to/inventory
    playbook: /path/to/playbook.yml
    tags: my_tags
    skip_tags: my_skipped_tags
    vars:
        var1: string_value
        var2:
          - list_item
        var3:
          key: value
begin()

Execute the main functionality of the task

execute_playbook(_logfile=None)

Execute ansible-playbook

Parameters: _logfile – Use this file-like object instead of a LoggerFile for testing
failure_log
find_repo()

Locate the repo we’re using; cloning it from a remote repo if necessary

generate_inventory()

Generate a hosts (inventory) file to use. This should not be called if we’re using an existing file.

generate_playbook()

Generate a playbook file to use. This should not be called if we’re using an existing file.

get_inventory()

Determine whether or not we’re using an existing inventory file

get_playbook()

If necessary, fetch and read the playbook file

inventory_group = None
setup()

Perform any setup that is needed by the task before it executes

teardown()

Perform any work needed to restore configuration to a previous state.

Can be skipped by setting ‘skip_teardown’ to True in self.config

class teuthology.task.ansible.CephLab(ctx, config)

Bases: teuthology.task.ansible.Ansible

A very simple subclass of Ansible that applies defaults for the Ceph lab environment.

If a dynamic inventory is used, all hosts will be assigned to the group ‘testnodes’.

begin()

Execute the main functionality of the task

inventory_group = 'testnodes'
name = 'ansible.cephlab'
class teuthology.task.ansible.LoggerFile(logger, level)

Bases: object

A thin wrapper around a logging.Logger instance that provides a file-like interface.

Used by Ansible.execute_playbook() when it calls pexpect.run()

flush()
write(string)
teuthology.task.ansible.cephlab

alias of teuthology.task.ansible.CephLab

teuthology.task.ansible.task

alias of teuthology.task.ansible.Ansible

teuthology.task.args module

These routines only appear to be used by the peering_speed tests.

teuthology.task.args.argify(name, args)

Object used as a decorator for the peering speed tests. See peering_speed_test.py

teuthology.task.args.gen_args(name, args)

Called from argify to generate arguments.

teuthology.task.background_exec module

Background task

teuthology.task.background_exec.task(*args, **kwds)

Run a background task.

Run the given command on a client, similar to exec. However, when we hit the finally block because the subsequent task is ready to exit, kill the child process.

We do not do any error code checking here since we are forcefully killing off the child when we are done.

If the command is a list, we simply join it with ;'s.

Example:

tasks:
- install:
- background_exec:
    client.0: while true ; do date ; sleep 1 ; done
    client.1:
    - while true
    - do id
    - sleep 1
    - done
- exec:
    client.0:
    - sleep 10

teuthology.task.ceph_ansible module

class teuthology.task.ceph_ansible.CephAnsible(ctx, config)

Bases: teuthology.task.Task

A task to setup ceph cluster using ceph-ansible

- ceph-ansible:
    repo: https://github.com/ceph/ceph-ansible.git
    branch: mybranch # defaults to master
    ansible-version: 2.4 # defaults to 2.5
    vars:
      ceph_dev: True (default)
      ceph_conf_overrides:
        global:
          mon pg warn min per osd: 2

It always uses a dynamic inventory.

It will optionally do the following automatically based on vars that are passed in:

  • Set devices for each host if osd_auto_discovery is not True
  • Set monitor_interface for each host if monitor_interface is unset
  • Set public_network for each host if public_network is unset

The machine that ceph-ansible runs on can be specified using the installer.0 role. If installer.0 is not used, the first mon will be the machine on which ceph-ansible runs.
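
As a hypothetical illustration (the host groupings and the other role names are placeholders), a roles section that dedicates one node to running ceph-ansible via installer.0 might look like:

roles:
- [mon.a, mgr.x, osd.0]
- [mon.b, osd.1]
- [mon.c, osd.2, client.0]
- [installer.0]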

begin()

Execute the main functionality of the task

collect_logs()
execute_playbook()

Execute ansible-playbook

Parameters: _logfile – Use this file-like object instead of a LoggerFile for testing
fix_keyring_permission()
generate_hosts_file()
get_host_vars(remote)
groups_to_roles = {'clients': 'client', 'mdss': 'mds', 'mgrs': 'mgr', 'mons': 'mon', 'nfss': 'nfs', 'osds': 'osd', 'rgws': 'rgw'}
name = 'ceph_ansible'
run_playbook()
run_rh_playbook()
setup()

Perform any setup that is needed by the task before it executes

teardown()

Perform any work needed to restore configuration to a previous state.

Can be skipped by setting ‘skip_teardown’ to True in self.config

wait_for_ceph_health()
exception teuthology.task.ceph_ansible.CephAnsibleError

Bases: exceptions.Exception

teuthology.task.ceph_ansible.task

alias of teuthology.task.ceph_ansible.CephAnsible

teuthology.task.cephmetrics module

class teuthology.task.cephmetrics.CephMetrics(ctx, config)

Bases: teuthology.task.ansible.Ansible

begin()

Execute the main functionality of the task

generate_inventory()

Generate a hosts (inventory) file to use. This should not be called if we’re using an existing file.

get_inventory()

Determine whether or not we’re using an existing inventory file

run_tests()
teuthology.task.cephmetrics.task

alias of teuthology.task.cephmetrics.CephMetrics

teuthology.task.clock module

Clock synchronizer

teuthology.task.clock.check(*args, **kwds)

Run ntpq at the start and the end of the task.

Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.clock.task(*args, **kwds)

Sync or skew clock

This will initially sync the clocks. Eventually it should let us also skew by some number of seconds.

example:

tasks:
- clock:
- ceph:
- interactive:

to sync.

Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.common_fs_utils module

Common filesystem related utilities. Originally this code was part of rbd.py. It was broken out so that it could be used by other modules (tgt.py and iscsi.py for instance).

teuthology.task.common_fs_utils.default_image_name(role)

Image name used by rbd and iscsi

teuthology.task.common_fs_utils.generic_mkfs(*args, **kwds)

Create a filesystem (either rbd or tgt, depending on devname_rtn)

Rbd, for example, now makes the following calls:
  • rbd.create_image: [client.0]
  • rbd.modprobe: [client.0]
  • rbd.dev_create: [client.0]
  • common_fs_utils.generic_mkfs: [client.0]
  • common_fs_utils.generic_mount:
    client.0: testimage.client.0
teuthology.task.common_fs_utils.generic_mount(*args, **kwds)

Generically mount an rbd or tgt image.

Rbd, for example, now makes the following calls:
  • rbd.create_image: [client.0]
  • rbd.modprobe: [client.0]
  • rbd.dev_create: [client.0]
  • common_fs_utils.generic_mkfs: [client.0]
  • common_fs_utils.generic_mount:
    client.0: testimage.client.0

teuthology.task.console_log module

class teuthology.task.console_log.ConsoleLog(ctx=None, config=None)

Bases: teuthology.task.Task

begin()

Execute the main functionality of the task

enabled = True
end()

Perform any work needed to stop processes started in begin()

filter_hosts()

Look for a ‘hosts’ list in self.config. Each item in the list may either be a role or a hostname. Builds a new Cluster object containing only those hosts which match one (or more) of the roles or hostnames specified. The filtered Cluster object is stored as self.cluster so that the task may only run against those hosts.
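
As a hypothetical illustration of the hosts filter described above (the hostname is a placeholder), a config like the following would restrict the task to matching machines:

console_log:
  hosts:
  - osd.0
  - smithi001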

logfile_name = '{shortname}.log'
name = 'console_log'
setup()

Perform any setup that is needed by the task before it executes

setup_archive()
start_logging()
stop_logging(force=False)
teardown()

Perform any work needed to restore configuration to a previous state.

Can be skipped by setting ‘skip_teardown’ to True in self.config

teuthology.task.console_log.task

alias of teuthology.task.console_log.ConsoleLog

teuthology.task.dump_ctx module

teuthology.task.dump_ctx.task(ctx, config)

Dump task context and config in teuthology log/output

The intended use case is didactic - to provide an easy way for newbies, who are working on teuthology tasks for the first time, to find out what is inside the ctx and config variables that are passed to each task.
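
For example, the task can be dropped anywhere in a job's task list with no configuration:

tasks:
- dump_ctx: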

teuthology.task.exec module

Execute custom commands

teuthology.task.exec.task(ctx, config)

Execute commands on a given role

tasks:
- ceph:
- kclient: [client.a]
- exec:
    client.a:
      - "echo 'module libceph +p' > /sys/kernel/debug/dynamic_debug/control"
      - "echo 'module ceph +p' > /sys/kernel/debug/dynamic_debug/control"
- interactive:

Execution stops and the task fails at the first command that does not succeed, which means that if the first command fails, the second won't run at all.

To avoid confusion it is recommended to explicitly enclose the commands in double quotes. For instance, if the command is false (without double quotes), it will be interpreted as a boolean by the YAML parser.
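
For example, the following hypothetical config (the commands are placeholders) quotes each entry so the YAML parser treats it as a string:

tasks:
- exec:
    client.0:
      - "true"
      - "echo done"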

Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.full_sequential module

Task sequencer - full

teuthology.task.full_sequential.task(ctx, config)

Run a set of tasks to completion in order. __exit__ is called on a task before __enter__ on the next

example:

- full_sequential:
  - tasktest:
  - tasktest:
Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.full_sequential_finally module

Task sequencer finally

teuthology.task.full_sequential_finally.task(*args, **kwds)

Sequentialize a group of tasks into one executable block, run on cleanup

example:

tasks:
- foo:
- full_sequential_finally:
  - final1:
  - final2:
- bar:
- baz:

The final1 and final2 tasks will run when full_sequential_finally is torn down, after the nested bar and baz tasks have run to completion, and right before the preceding foo task is torn down. This is useful if there are additional steps you want to interject in a job during the shutdown (instead of startup) phase.

Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.hadoop module

teuthology.task.hadoop.configure(ctx, config, hadoops)
teuthology.task.hadoop.dict_to_hadoop_conf(items)
teuthology.task.hadoop.get_core_site_data(ctx, config)
teuthology.task.hadoop.get_hdfs_site_data(ctx)
teuthology.task.hadoop.get_mapred_site_data(ctx)
teuthology.task.hadoop.get_masters_data(ctx)
teuthology.task.hadoop.get_slaves_data(ctx)
teuthology.task.hadoop.get_yarn_site_data(ctx)
teuthology.task.hadoop.install_hadoop(*args, **kwds)
teuthology.task.hadoop.is_hadoop_type(type_)
teuthology.task.hadoop.start_hadoop(*args, **kwds)
teuthology.task.hadoop.task(*args, **kwds)

teuthology.task.interactive module

Drop into a python shell

teuthology.task.interactive.task(ctx, config)

Run an interactive Python shell, with the cluster accessible via the ctx variable.

Hit control-D to continue.

This is also useful to pause the execution of the test between two tasks, either to perform ad hoc operations, or to examine the state of the cluster. You can also use it to easily bring up a Ceph cluster for ad hoc testing.

For example:

tasks:
- ceph:
- interactive:

teuthology.task.iscsi module

Handle iscsi adm commands for tgt connections.

teuthology.task.iscsi.file_io_test(rem, file_from, lnkpath)

dd to the iscsi interface, read it, and compare with original

teuthology.task.iscsi.general_io_test(ctx, rem, image_name)

Do simple I/O tests to the iscsi interface before putting a filesystem on it.

teuthology.task.iscsi.start_iscsi_initiators(*args, **kwds)

This is the sub-task that assigns an rbd to an iscsiadm control and performs a login (thereby creating a /dev/sd device). It performs a logout when finished.

teuthology.task.iscsi.task(*args, **kwds)

Handle iscsi admin login after a tgt connection has been established.

Assume a default host client of client.0 and a sending client of client.0 if not specified otherwise.

Sample tests could be:

iscsi:

This sets up a tgt link from client.0 to client.0

iscsi: [client.1, client.2]

This sets up a tgt link from client.1 to client.0 and a tgt link from client.2 to client.0
iscsi:
    client.0: client.1
    client.1: client.0

This sets up a tgt link from client.0 to client.1 and a tgt link from client.1 to client.0

Note that the iscsi image name is iscsi-image, so this only works for one image being tested at any one time.

teuthology.task.iscsi.tgt_devname_get(ctx, test_image)

Get the name of the newly created device by following the by-path link (which is symbolically linked to the appropriate /dev/sd* file).

teuthology.task.iscsi.tgt_devname_rtn(ctx, test_image)

Wrapper passed to common_fs_util functions.

teuthology.task.kernel module

Kernel installation task

teuthology.task.kernel.download_kernel(ctx, config)
Supply each remote with a kernel package:
  • local kernels are copied over
  • gitbuilder kernels are downloaded
  • nothing is done for distro kernels
Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.enable_disable_kdb(ctx, config)

Enable kdb on remote machines in use. Disable on those that are not in use.

Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.generate_legacy_grub_entry(remote, newversion)

This will likely need to be used for ceph kernels as well, as legacy grub rpm distros don't have an easy way of selecting a kernel just via a command. This generates an entry in legacy grub for a new kernel version using the existing entry as a base.

teuthology.task.kernel.get_image_version(remote, path)

Get kernel image version from (rpm or deb) package.

Parameters: path – (rpm or deb) package path
teuthology.task.kernel.get_latest_image_version_deb(remote, ostype)

Get kernel image version of the newest kernel deb package. Used for distro case.

Round-about way to get the newest kernel uname -r compliant version string from the virtual package which is the newest kernel for debian/ubuntu.

teuthology.task.kernel.get_latest_image_version_rpm(remote)

Get kernel image version of the newest kernel rpm package. Used for distro case.

teuthology.task.kernel.get_sha1_from_pkg_name(path)

Get commit hash (min 12 max 40 chars) from (rpm or deb) package name. Example package names (“make bindeb-pkg” and “make binrpm-pkg”):

linux-image-4.9.0-rc4-ceph-g156db39ecfbd_4.9.0-rc4-ceph-g156db39ecfbd-1_amd64.deb
kernel-4.9.0_rc4_ceph_g156db39ecfbd-2.x86_64.rpm

Parameters: path – (rpm or deb) package path (only basename is used)
teuthology.task.kernel.gitbuilder_pkg_name(remote)
teuthology.task.kernel.grub2_kernel_select_generic(remote, newversion, ostype)

Can be used on DEB and RPM. Sets which entry should be booted by entrynum.

teuthology.task.kernel.install_and_reboot(ctx, config)

Install and reboot the kernel. This mostly performs remote installation operations. The code does check for Arm images and skips grub operations if the kernel is Arm. Otherwise, it extracts kernel titles from submenu entries and makes the appropriate grub calls. The assumptions here are somewhat simplified in that it expects kernel entries to be present under submenu entries.

Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.install_firmware(ctx, config)

Go to the github to get the latest firmware.

Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.install_kernel(remote, path=None, version=None)

A bit of a misnomer perhaps - the actual kernel package is installed elsewhere; this function deals with initrd and grub. Currently the following cases are handled:

  • local, gitbuilder, distro for rpm packages
  • distro for deb packages - see TODO in install_and_reboot()

TODO: reboots should be issued from install_and_reboot()

Parameters:
  • path – package path (for local and gitbuilder cases)
  • version – for RPM distro kernels, pass this to update_grub_rpm
teuthology.task.kernel.install_latest_rh_kernel(ctx, config)

Installs the latest z-stream kernel and reboots for the new kernel to take effect

teuthology.task.kernel.maybe_generate_initrd_rpm(remote, path, version)

Generate initrd with mkinitrd if the hooks that should make it happen on its own aren’t there.

Parameters:
  • path – rpm package path
  • version – kernel version to generate initrd for e.g. 3.18.0-rc6-ceph-00562-g79a9fa5
teuthology.task.kernel.need_to_install(ctx, role, version)

Check to see if we need to install a kernel. Get the version of the currently running kernel, and compare it against the value passed in.

Parameters:
  • ctx – Context
  • role – Role
  • version – value to compare against (used in checking), can be either a utsrelease string (e.g. ‘3.13.0-rc3-ceph-00049-ge2817b3’) or a sha1.
teuthology.task.kernel.need_to_install_distro(remote)

Installing kernels on rpm systems won't set up grub to boot into them. This installs the newest kernel package, checks its version, and compares it against the running kernel (uname -r). A similar check is done for deb.

Returns: False if running the newest distro kernel. Returns the version of the newest if it is not running.
teuthology.task.kernel.normalize_and_apply_overrides(ctx, config, overrides)

kernel task config is hierarchical and needs to be transformed into a normal form, see normalize_config() for details. Applying overrides is also more involved compared to other tasks because of the number of ways a version of the kernel to install can be specified.

Returns a (normalized config, timeout) tuple.

Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.normalize_config(ctx, config)

Returns a config whose keys are all real roles. Generic roles (client, mon, osd, etc.) are replaced with the actual roles (client.0, client.1, etc.). If the config specifies a different version for a specific role, this is unchanged.

For example, with 4 OSDs this:

osd:
  tag: v3.0
  kdb: true
osd.1:
  branch: new_btrfs
  kdb: false
osd.3:
  deb: /path/to/linux-whatever.deb

is transformed into:

osd.0:
  tag: v3.0
  kdb: true
osd.1:
  branch: new_btrfs
  kdb: false
osd.2:
  tag: v3.0
  kdb: true
osd.3:
  deb: /path/to/linux-whatever.deb

If config is None or just specifies a version to use, it is applied to all nodes.

Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.remote_pkg_path(remote)

This is where kernel packages are copied over (in case of local packages) or downloaded to (in case of gitbuilder packages) and then installed from.

teuthology.task.kernel.remove_old_kernels(ctx)
teuthology.task.kernel.task(ctx, config)

Make sure the specified kernel is installed. This can be a branch, tag, or sha1 of ceph-client.git or a local kernel package.

To install ceph-client.git branch (default: master):

kernel:
  branch: testing

To install ceph-client.git tag:

kernel:
  tag: v3.18

To install ceph-client.git sha1:

kernel:
  sha1: 275dd19ea4e84c34f985ba097f9cddb539f54a50

To install from a koji build_id:

kernel:
  koji: 416058

To install from a koji task_id:

kernel:
  koji_task: 9678206

When installing from koji you also need to set the urls for koji hub and the koji root in your teuthology.yaml config file. These are shown below with their default values:

kojihub_url: http://koji.fedoraproject.org/kojihub
kojiroot_url: http://kojipkgs.fedoraproject.org/packages

When installing from a koji task_id you also need to set koji_task_url, which is the base url used to download rpms from koji task results:

koji_task_url: https://kojipkgs.fedoraproject.org/work/

To install local rpm (target should be an rpm system):

kernel:
  rpm: /path/to/appropriately-named.rpm

To install local deb (target should be a deb system):

kernel:
  deb: /path/to/appropriately-named.deb

For rpm: or deb: to work, it should be able to figure out the sha1 from the local kernel package basename; see get_sha1_from_pkg_name(). This means that you can't, for example, install a local tag - a package built with upstream {rpm,deb}-pkg targets won't have a sha1 in its name.

If you want to schedule a run and use a local kernel package, you have to copy the package over to a box teuthology workers are running on and specify a path to the package on that box.

All of the above will install a specified kernel on all targets. You can specify different kernels for each role or for all roles of a certain type (more specific roles override less specific, see normalize_config() for details):

kernel:
  client:
    tag: v3.0
  osd:
    branch: btrfs_fixes
  client.1:
    branch: more_specific
  osd.3:
    branch: master

To wait 3 minutes for hosts to reboot (default: 300):

kernel:
  timeout: 180

To enable kdb:

kernel:
  kdb: true
Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.update_grub_rpm(remote, newversion)

Updates grub file to boot new kernel version on both legacy grub/grub2.

teuthology.task.kernel.update_rh_kernel(remote)
teuthology.task.kernel.validate_config(ctx, config)

Make sure that all kernels in the list of remote kernels refer to the same kernel.

Parameters:
  • ctx – Context
  • config – Configuration
teuthology.task.kernel.wait_for_reboot(ctx, need_install, timeout, distro=False)

Loop reconnecting and checking kernel versions until they’re all correct or the timeout is exceeded.

Parameters:
  • ctx – Context
  • need_install – list of packages that we need to reinstall.
  • timeout – number of seconds before we time out.

teuthology.task.knfsd module

Export/Unexport a nfs server client.

teuthology.task.knfsd.get_nfsd_args(remote, cmd)
teuthology.task.knfsd.task(*args, **kwds)

Export/Unexport a nfs server client.

The config is optional and defaults to exporting on all clients. If a config is given, it is expected to be a list or dict of clients to do this operation on. You must have specified ceph-fuse or kclient on all clients specified for knfsd.

Example that exports all clients:

tasks:
- ceph:
- kclient:
- knfsd:
- interactive:

Example that uses both kclient and ceph-fuse:

tasks:
- ceph:
- ceph-fuse: [client.0]
- kclient: [client.1]
- knfsd: [client.0, client.1]
- interactive:

Example that specifies export options:

tasks:
- ceph:
- kclient: [client.0, client.1]
- knfsd:
    client.0:
      options: [rw,root_squash]
    client.1:
- interactive:

Note that when options aren’t specified, rw,no_root_squash is the default. When you specify options, the defaults are as specified by exports(5).

So if empty options are specified, i.e. options: [] these are the defaults:
ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash, no_subtree_check,secure_locks,acl,anonuid=65534,anongid=65534
Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.localdir module

Localdir

teuthology.task.localdir.task(*args, **kwds)

Create a mount dir ‘client’ that is just the local disk:

Example that “mounts” all clients:

tasks:
- localdir:
- interactive:

Example for a specific client:

tasks:
- localdir: [client.2]
- interactive:
Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.lockfile module

Locking tests

teuthology.task.lockfile.lock_one(op, ctx)

Perform the individual lock

teuthology.task.lockfile.task(ctx, config)

This task is designed to test locking. It runs an executable for each lock attempt you specify, at 0.01 second intervals (to preserve ordering of the locks). You can also introduce longer intervals by setting an entry as a number of seconds, rather than the lock dictionary. The config is a list of dictionaries. For each entry in the list, you must name the "client" to run on, the "file" to lock, and the "holdtime" to hold the lock. Optional entries are the "offset" and "length" of the lock. You can also specify a "maxwait" timeout period which fails if the executable takes longer to complete, and an "expectfail".

An example:

tasks:
- ceph:
- ceph-fuse: [client.0, client.1]
- lockfile:
    [{client:client.0, file:testfile, holdtime:10},
     {client:client.1, file:testfile, holdtime:0, maxwait:0, expectfail:true},
     {client:client.1, file:testfile, holdtime:0, maxwait:15, expectfail:false},
     10,
     {client: client.1, lockfile: testfile, holdtime: 5},
     {client: client.2, lockfile: testfile, holdtime: 5, maxwait: 1, expectfail: True}]

In the past this test would have failed; there was a bug where waitlocks weren’t cleaned up if the process failed. More involved scenarios are also possible.

Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.loop module

Task to loop a list of items

teuthology.task.loop.task(ctx, config)

Loop a sequential group of tasks

example:

- loop:
    count: 10
    body:
      - tasktest:
      - tasktest:
Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.mpi module

Start mpi processes (and allow commands to be run inside process)

teuthology.task.mpi.task(ctx, config)

Setup MPI and execute commands

Example that starts an MPI process on specific clients:

tasks:
- ceph:
- ceph-fuse: [client.0, client.1]
- ssh_keys:
- mpi: 
    nodes: [client.0, client.1]
    exec: ior ...

Example that starts MPI processes on all clients:

tasks:
- ceph:
- ceph-fuse:
- ssh_keys:
- mpi:
    exec: ior ...

Example that starts MPI processes on all roles:

tasks:
- ceph:
- ssh_keys:
- mpi:
    nodes: all
    exec: ...

Example that specifies a working directory for MPI processes:

tasks:
- ceph:
- ceph-fuse:
- pexec:
    clients:
      - ln -s {testdir}/mnt.* {testdir}/gmnt
- ssh_keys:
- mpi:
    exec: fsx-mpi
    workdir: {testdir}/gmnt
- pexec:
    clients:
      - rm -f {testdir}/gmnt
Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.nfs module

Nfs client tester

teuthology.task.nfs.task(*args, **kwds)

Mount nfs client (requires an nfs server export such as knfsd or ganesha)

Example that mounts a single nfs client:

tasks:
- ceph:
- kclient: [client.0]
- knfsd: [client.0]
- nfs:
    client.1:
        server: client.0
- interactive:

Example that mounts multiple nfs clients with options:

tasks:
- ceph:
- kclient: [client.0, client.1]
- knfsd: [client.0, client.1]
- nfs:
    client.2:
        server: client.0
        options: [rw,hard,intr,nfsvers=3]
    client.3:
        server: client.1
        options: [ro]
- workunit:
    clients:
        client.2:
            - suites/dbench.sh
        client.3:
            - suites/blogbench.sh

It is not recommended that the nfs client and nfs server reside on the same node. So in the example above client.0-3 should be on 4 distinct nodes. The client nfs testing would be using only client.2 and client.3.

teuthology.task.nop module

Null task

teuthology.task.nop.task(ctx, config)

This task does nothing.

For example:

tasks:
- nop:

teuthology.task.parallel module

Task to group parallel running tasks

teuthology.task.parallel.task(ctx, config)

Run a group of tasks in parallel.

example:

- parallel:
  - tasktest:
  - tasktest:

You can also define tasks in a top-level section outside of ‘tasks:’, and reference them here.

The referenced section must contain a list of tasks to run sequentially, or a single task as a dict. The latter is only available for backwards compatibility with existing suites:

tasks:
- parallel:
  - tasktest: # task inline
  - foo       # reference to top-level 'foo' section
  - bar       # reference to top-level 'bar' section

foo:
- tasktest1:
- tasktest2:
bar:
  tasktest: # note the list syntax from 'foo' is preferred

That is, if the entry is not a dict, we will look it up in the top-level config.

Sequential tasks and Parallel tasks can be nested.

teuthology.task.parallel_example module

Parallel contextmanager test

teuthology.task.parallel_example.parallel_test(*args, **kwds)

Example contextmanager that executes a command on remote hosts in parallel.

teuthology.task.parallel_example.sequential_test(*args, **kwds)

Example contextmanager that executes a command on remote hosts sequentially.

teuthology.task.parallel_example.task(*args, **kwds)

This is the main body of the task that gets run.

teuthology.task.pcp module

class teuthology.task.pcp.GrafanaGrapher(hosts, time_from, time_until='now', job_id=None)

Bases: teuthology.task.pcp.PCPGrapher

build_graph_url()
class teuthology.task.pcp.GraphiteGrapher(hosts, time_from, time_until='now', dest_dir=None, job_id=None)

Bases: teuthology.task.pcp.PCPGrapher

build_graph_urls()
download_graphs()
generate_html(mode='dynamic')
get_graph_url(metric)
get_target_globs(metric='')
graph_defaults = {'format': 'png', 'height': '300', 'hideLegend': 'false', 'width': '1200'}
metrics = ['kernel.all.load.1 minute', 'mem.util.free', 'mem.util.used', 'network.interface.*.bytes.*', 'disk.all.read_bytes', 'disk.all.write_bytes']
write_html(mode='dynamic')
class teuthology.task.pcp.PCP(ctx, config)

Bases: teuthology.task.Task

Collects performance data using PCP during a job.

Configuration options include:

  graphite: Whether to render PNG graphs using Graphite (default: True)
  grafana: Whether to build (and submit to paddles) a link to a dynamic Grafana dashboard containing graphs of performance data (default: True)
  fetch_archives: Whether to assemble and ship a raw PCP archive containing performance data to the job's output archive (default: False)
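
A hypothetical job snippet using the options above (the values are only illustrative):

tasks:
- pcp:
    grafana: true
    graphite: false
    fetch_archives: true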
begin()

Execute the main functionality of the task

enabled = True
end()

Perform any work needed to stop processes started in begin()

setup()

Perform any setup that is needed by the task before it executes

setup_archive(hosts)
setup_collectors()
setup_grafana(hosts)
setup_graphite(hosts)
class teuthology.task.pcp.PCPArchive(hosts, time_from, time_until='now')

Bases: teuthology.task.pcp.PCPDataSource

archive_base_path = '/var/log/pcp/pmlogger'
archive_file_extensions = ('0', 'index', 'meta')
get_archive_input_dir(host)
get_pmlogextract_cmd(host)
class teuthology.task.pcp.PCPDataSource(hosts, time_from, time_until='now')

Bases: object

class teuthology.task.pcp.PCPGrapher(hosts, time_from, time_until='now')

Bases: teuthology.task.pcp.PCPDataSource

teuthology.task.pcp.task

alias of teuthology.task.pcp.PCP

teuthology.task.pexec module

Handle parallel execution on remote hosts

teuthology.task.pexec.task(ctx, config)

Execute commands on multiple hosts in parallel

tasks:
- ceph:
- ceph-fuse: [client.0, client.1]
- pexec:
    client.0:
      - while true; do echo foo >> bar; done
    client.1:
      - sleep 1
      - tail -f bar
- interactive:

Execute commands on all hosts in the cluster in parallel. This is useful if there are many hosts and you want to run the same command on all:

tasks:
- pexec:
    all:
      - grep FAIL /var/log/ceph/*

Or if you want to run in parallel on all clients:

tasks:
- pexec:
    clients:
      - dd if=/dev/zero of={testdir}/mnt.* count=1024 bs=1024

You can also ensure that parallel commands are synchronized with the special ‘barrier’ statement:

tasks:
- pexec:
    clients:
      - cd {testdir}/mnt.*
      - while true; do
      -    barrier
      -    dd if=/dev/zero of=./foo count=1024 bs=1024
      - done

The above writes to the file foo on all clients over and over, but ensures that all clients perform each write command in sync. If one client takes longer to write, all the other clients will wait.

teuthology.task.print module

Print task

A task that logs whatever is given to it as an argument. Can be used like any other task (under sequential, etc.).

For example, the following would cause the strings “String” and “Another string” to appear in the teuthology.log before and after the chef task runs, respectively.

tasks:
- print: "String"
- chef: null
- print: "Another String"

teuthology.task.print.task(ctx, config)

Print out config argument in teuthology log/output

teuthology.task.proc_thrasher module

Process thrasher

class teuthology.task.proc_thrasher.ProcThrasher(config, remote, *proc_args, **proc_kwargs)

Kills and restarts some number of the specified process on the specified remote

join()

Local join

log(msg)

Local log wrapper

loop()

Thrashing loop – loops at time intervals. Inside that loop, the code loops through the individual procs, creating new procs.

start()

Start thrasher. This also makes sure that the greenlet interface is used.

teuthology.task.selinux module

class teuthology.task.selinux.SELinux(ctx, config)

Bases: teuthology.task.Task

A task to set the SELinux mode during test execution. Note that SELinux must first be enabled and the filesystem must have been labeled.

On teardown, also checks the audit log for any denials. By default a few known denials (listed below) are ignored; the test will fail for any other denials seen in audit.log. To keep the test from failing on other denials, one can add them (with appropriate escapes) via overrides:

overrides:
  selinux:
    whitelist:
    - 'name="cephtest"'
    - 'dmidecode'
    - 'comm="logrotate"'
    - 'comm="idontcare"'

Known denials which are ignored:
  comm="dmidecode"
  chronyd.service
  name="cephtest"

Automatically skips hosts running non-RPM-based OSes.

archive_log()
filter_hosts()

Exclude any non-RPM-based hosts, and any downburst VMs

get_denials()

Look for denials in the audit log

get_modes()

Get the current SELinux mode from each host so that we can restore during teardown

get_new_denials()

Determine if there are any new denials in the audit log

restore_modes()

If necessary, restore previous SELinux modes

rotate_log()
set_mode()

Set the requested SELinux mode

setup()

Perform any setup that is needed by the task before it executes

teardown()

Perform any work needed to restore configuration to a previous state.

Can be skipped by setting ‘skip_teardown’ to True in self.config

teuthology.task.selinux.task

alias of teuthology.task.selinux.SELinux

teuthology.task.sequential module

Task sequencer

teuthology.task.sequential.task(ctx, config)

Sequentialize a group of tasks into one executable block

example:

- sequential:
  - tasktest:
  - tasktest:

You can also reference the job from elsewhere:

foo:
  tasktest:

tasks:
- sequential:
  - tasktest:
  - foo
  - tasktest:

That is, if the entry is not a dict, we will look it up in the top-level config.

Sequential tasks and Parallel tasks can be nested.

Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.sleep module

Sleep task

teuthology.task.sleep.task(ctx, config)

Sleep for some number of seconds.

Example:

tasks:
- install:
- ceph:
- sleep:
    duration: 10
- interactive:
Parameters:
  • ctx – Context
  • config – Configuration

teuthology.task.ssh_keys module

Ssh-key key handlers and associated routines

teuthology.task.ssh_keys.backup_file(remote, path, sudo=False)

Creates a backup of a file on the remote, simply by copying it and adding a timestamp to the name.

teuthology.task.ssh_keys.cleanup_added_key(ctx, key_backup_files, path)

Delete the keys and remove the ~/.ssh/authorized_keys entries we added

teuthology.task.ssh_keys.generate_keys()

Generates a public and private key

teuthology.task.ssh_keys.particular_ssh_key_test(line_to_test, ssh_key)

Check the validity of the ssh_key

teuthology.task.ssh_keys.push_keys_to_host(*args, **kwds)

Push keys to all hosts

teuthology.task.ssh_keys.ssh_keys_user_line_test(line_to_test, username)

Check the validity of the username

teuthology.task.ssh_keys.task(*args, **kwds)

Creates a set of RSA keys, distributes the same key pair to all hosts listed in ctx.cluster, and adds each host to every other host's authorized_keys list.

During cleanup it will delete .ssh/id_rsa, .ssh/id_rsa.pub and remove the entries in .ssh/authorized_keys while leaving pre-existing entries in place.
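
For example, the task is typically listed before tasks that need passwordless SSH between hosts, as in the mpi examples above:

tasks:
- ssh_keys:
- mpi:
    exec: ior ...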

teuthology.task.ssh_keys.timestamp(format_='%Y-%m-%d_%H:%M:%S:%f')

Return a UTC timestamp suitable for use in filenames

teuthology.task.ssh_keys.tweak_ssh_config(*args, **kwds)

Turn off StrictHostKeyChecking

teuthology.task.tasktest module

Parallel and sequential task tester. Not used by any ceph tests, but used to unit test the parallel and sequential tasks

teuthology.task.tasktest.task(*args, **kwds)

Task that just displays information when it is created and when it is destroyed/cleaned up. This task was used to test parallel and sequential task options.

example:

tasks:
- sequential:
  - tasktest:
      id: 'foo'
  - tasktest:
      id: 'bar'
      delay: 5
  - tasktest:

The above yaml will sequentially start a test task named foo and a test task named bar. Bar will take 5 seconds to complete. After foo and bar have finished, an unidentified tasktest task will run.

teuthology.task.timer module

Timer task

teuthology.task.timer.task(*args, **kwds)

Timer

Measure the time that this set of tasks takes and save that value in the summary file. Config is a description of what we are timing.

example:

tasks:
- ceph:
- foo:
- timer: "fsx run"
- fsx:

Module contents

class teuthology.task.Task(ctx=None, config=None)

Bases: object

A base-class for “new-style” teuthology tasks.

Can be used as a drop-in replacement for the old-style task functions with @contextmanager decorators.

Note: While looking up overrides, we use the lowercase name of the class by default. While this works well for the main task in a module, other tasks or 'subtasks' may want to override that name using a class variable called 'name', e.g.:

class MyTask(Task):
    pass

class MySubtask(MyTask):
    name = 'mytask.mysubtask'
apply_overrides()

Look for an ‘overrides’ dict in self.ctx.config; look inside that for a dict with the same name as this task. Override any settings in self.config with those overrides

begin()

Execute the main functionality of the task

end()

Perform any work needed to stop processes started in begin()

filter_hosts()

Look for a ‘hosts’ list in self.config. Each item in the list may either be a role or a hostname. Builds a new Cluster object containing only those hosts which match one (or more) of the roles or hostnames specified. The filtered Cluster object is stored as self.cluster so that the task may only run against those hosts.

setup()

Perform any setup that is needed by the task before it executes

teardown()

Perform any work needed to restore configuration to a previous state.

Can be skipped by setting ‘skip_teardown’ to True in self.config