NAME
ceph-bluestore-tool - bluestore administrative tool
SYNOPSIS
ceph-bluestore-tool command [ --dev device ... ] [ --path osd path ] [ --out-dir dir ] [ --log-file | -l filename ] [ --deep ]
ceph-bluestore-tool fsck|repair --path osd path [ --deep ]
ceph-bluestore-tool show-label --dev device ...
ceph-bluestore-tool prime-osd-dir --dev device --path osd path
ceph-bluestore-tool bluefs-export --path osd path --out-dir dir
ceph-bluestore-tool bluefs-bdev-new-wal --path osd path --dev-target new-device
ceph-bluestore-tool bluefs-bdev-new-db --path osd path --dev-target new-device
ceph-bluestore-tool bluefs-bdev-migrate --path osd path --dev-target new-device --devs-source device1 [--devs-source device2]
ceph-bluestore-tool free-dump|free-score --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
ceph-bluestore-tool reshard --path osd path --sharding new sharding [ --resharding-ctrl control string ]
ceph-bluestore-tool show-sharding --path osd path
DESCRIPTION
ceph-bluestore-tool is a utility to perform low-level administrative operations on a BlueStore instance.
COMMANDS
help
    Show help.
fsck [ --deep ]
    Run a consistency check on BlueStore metadata. If --deep is specified, also read all object data and verify checksums.
repair
    Run a consistency check and repair any errors that can be fixed.
bluefs-export
    Export the contents of BlueFS (i.e., RocksDB files) to an output directory.
bluefs-bdev-sizes --path osd path
    Print the device sizes, as understood by BlueFS, to stdout.
bluefs-bdev-expand --path osd path
    Instruct BlueFS to check the size of its block devices and, if they have expanded, make use of the additional space.
bluefs-bdev-new-wal --path osd path --dev-target new-device
    Add a WAL device to BlueFS; fails if a WAL device already exists.
bluefs-bdev-new-db --path osd path --dev-target new-device
    Add a DB device to BlueFS; fails if a DB device already exists.
bluefs-bdev-migrate --path osd path --dev-target new-device --devs-source device1 [--devs-source device2]
    Move BlueFS data from the source device(s) to the target device. Source devices (except the main one) are removed on success. The target device can be either an already attached device or a new one; in the latter case it is added to the OSD, replacing one of the source devices. The following replacement rules apply (in order of precedence, stopping at the first match; see the example after this list):
    - if the source list has a DB volume, the target device replaces it.
    - if the source list has a WAL volume, the target device replaces it.
    - if the source list has only the slow volume, the operation is not permitted and requires explicit allocation via the new-db/new-wal command.
show-label --dev device [...]
    Show device label(s).
free-dump --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
    Dump all free regions in the allocator.
free-score --path osd path [ --allocator block/bluefs-wal/bluefs-db/bluefs-slow ]
    Give a [0-1] number that represents the fragmentation of free space.
reshard --path osd path --sharding new sharding [ --resharding-ctrl control string ]
    Change the sharding of BlueStore's RocksDB. Sharding is built on top of RocksDB column families. This command makes it possible to test the performance of a new sharding scheme without redeploying the OSD. An interrupted reshard prevents the OSD from running but does not corrupt data; it is always possible to continue the previous reshard or to select another sharding scheme, including reverting to the original one.
show-sharding --path osd path
    Show the sharding that is currently applied to BlueStore's RocksDB.
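For example, a minimal sketch of replacing an OSD's DB volume with a new device, assuming illustrative paths (an OSD at /var/lib/ceph/osd/ceph-0 whose current DB volume is the block.db symlink, and a new target device /dev/vg0/new-db):
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/vg0/new-db --devs-source /var/lib/ceph/osd/ceph-0/block.db
Because the source list contains the DB volume, the first replacement rule applies and the target device takes its place.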
OPTIONS
- --dev *device*
- Add device to the list of devices to consider
- --devs-source *device*
- Add device to the list of devices to consider as sources for migrate operation
- --dev-target *device*
- Specify the target device for the migrate operation, or the device to add as a new DB/WAL volume.
- --path *osd path*
- Specify an osd path. In most cases, the device list is inferred from the symlinks present in osd path. This is usually simpler than explicitly specifying the device(s) with --dev.
- --out-dir *dir*
- Output directory for bluefs-export
- -l, --log-file *log file*
- file to log to
- --log-level *num*
- debug log level. Default is 30 (extremely verbose), 20 is very verbose, 10 is verbose, and 1 is not very verbose.
- --deep
- deep scrub/repair (read and validate object data, not just metadata)
- --allocator *name*
- Useful for free-dump and free-score actions. Selects allocator(s).
- --resharding-ctrl *control string*
- Provides control over the resharding process. Specifies how often to refresh the RocksDB iterator, and how large the commit batch should be before committing to RocksDB. The option format is: <iterator_refresh_bytes>/<iterator_refresh_keys>/<batch_commit_bytes>/<batch_commit_keys> Default: 10000000/10000/1000000/1000
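For example, a reshard invocation that passes the default control values explicitly might look like the following; the OSD path is illustrative, and the sharding string shown is only one possible scheme:
ceph-bluestore-tool reshard --path /var/lib/ceph/osd/ceph-0 --sharding "m(3) p(3,0-12) O(3,0-13) L P" --resharding-ctrl 10000000/10000/1000000/1000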
DEVICE LABELS
Every BlueStore block device has a single block label at the beginning of the device. You can dump the contents of the label with:
ceph-bluestore-tool show-label --dev *device*
The main device will have a lot of metadata, including information that used to be stored in small files in the OSD data directory. The auxiliary devices (db and wal) will only have the minimum required fields (OSD UUID, size, device type, birth time).
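For example, the label of a main device might be dumped like this (the device path and all field values are illustrative, and some fields are omitted):
ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block
{
    "/var/lib/ceph/osd/ceph-0/block": {
        "osd_uuid": "<osd uuid>",
        "size": 107374182400,
        "btime": "2024-04-10 16:04:31.221303",
        "description": "main"
    }
}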
OSD DIRECTORY PRIMING
You can generate the content for an OSD data directory that can start up a BlueStore OSD with the prime-osd-dir command:
ceph-bluestore-tool prime-osd-dir --dev *main device* --path /var/lib/ceph/osd/ceph-*id*
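For example, to rebuild the data directory for OSD 0 from its main device (both paths are illustrative):
ceph-bluestore-tool prime-osd-dir --dev /dev/vg0/osd0-block --path /var/lib/ceph/osd/ceph-0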
BLUEFS LOG RESCUE
Some versions of BlueStore were susceptible to the BlueFS log growing extremely large, to the point of making it impossible to boot the OSD. This state is indicated by an OSD boot that takes a very long time and then fails in the _replay function.
This can be fixed by:
ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true
It is advised to first check whether the rescue process will be successful:
ceph-bluestore-tool fsck --path osd path --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true
If the above fsck is successful, the fix procedure can be applied.
AVAILABILITY
ceph-bluestore-tool is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/docs for more information.
SEE ALSO
ceph-osd(8)
COPYRIGHT
2010-2024, Inktank Storage, Inc. and contributors. Licensed under Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)