NVMe/TCP Initiator for VMware ESX

Prerequisites

  • A VMware ESXi host running VMware vSphere Hypervisor (ESXi) version 7.0U3 or later.

  • A deployed Ceph NVMe-oF gateway.

  • A Ceph cluster configured for NVMe-oF.

  • A subsystem defined in the gateway.

Configuration

The following instructions use the default vSphere web client and the esxcli command-line tool. Worked examples with sample values follow the procedure.

  1. Enable NVMe/TCP on a NIC:

    esxcli nvme fabric enable --protocol TCP --device vmnicN
    

    Replace N with the number of the NIC.

  2. Tag a VMkernel NIC to permit NVMe/TCP traffic:

    esxcli network ip interface tag add --interface-name vmkN --tagname NVMeTCP
    

    Replace N with the ID of the VMkernel interface.

  3. Configure the VMware ESXi host for NVMe/TCP:

    1. List the NVMe-oF adapters:

      esxcli nvme adapter list
      
    2. Discover NVMe-oF subsystems:

      esxcli nvme fabric discover -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420
      
    3. Connect to the NVMe-oF gateway subsystem:

      esxcli nvme connect -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420 -s SUBSYSTEM_NQN
      
    4. List the NVMe/TCP controllers:

      esxcli nvme controller list
      
    5. List the NVMe-oF namespaces in the subsystem:

      esxcli nvme namespace list
      
  4. Verify that the initiator has been set up correctly:

    1. From the vSphere client, go to the ESXi host.

    2. On the Storage page, go to the Devices tab.

    3. Verify that the NVMe/TCP disks are listed in the table.
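
As an illustration of steps 1 and 2, the following sketch enables NVMe/TCP on a host whose uplink is assumed to be vmnic0 and whose VMkernel interface is assumed to be vmk1. Both names are placeholders; substitute the values from your own host:

    # Enable the NVMe/TCP transport on the physical NIC (vmnic0 is a placeholder)
    esxcli nvme fabric enable --protocol TCP --device vmnic0

    # Tag the VMkernel interface so it can carry NVMe/TCP traffic (vmk1 is a placeholder)
    esxcli network ip interface tag add --interface-name vmk1 --tagname NVMeTCP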
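
Similarly, a minimal discovery-and-connect sequence for step 3 might look like the following, assuming ESXi created the NVMe/TCP software adapter vmhba65, the gateway listens at 10.0.0.5 on port 4420, and the subsystem NQN is nqn.2016-06.io.spdk:cnode1. All three values are placeholders: use the adapter name reported by esxcli nvme adapter list, your gateway's IP address, and the NQN of the subsystem defined in your gateway.

    # Identify the NVMe/TCP software adapter (assumed here to be vmhba65)
    esxcli nvme adapter list

    # Discover the subsystems exported by the gateway (placeholder IP: 10.0.0.5)
    esxcli nvme fabric discover -a vmhba65 -i 10.0.0.5 -p 4420

    # Connect to the subsystem found during discovery (placeholder NQN)
    esxcli nvme connect -a vmhba65 -i 10.0.0.5 -p 4420 -s nqn.2016-06.io.spdk:cnode1

    # Confirm that the controller and its namespaces are visible
    esxcli nvme controller list
    esxcli nvme namespace list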
