This is not a ZFS-related bug per se; it's more kernel-related, so
I'm reassigning the bug.
** Package changed: zfs-linux (Ubuntu) => linux (Ubuntu)
--
You received this bug notification because you are subscribed to linux
in Ubuntu.
Matching subscriptions: Bgg, Bmail, Nb
https://bugs.launchpad.net/bugs/1801349
Title:
zpool create -f lxd /dev/vdb fails on cosmic (18.10) -- func27
Status in OpenStack LXD Charm:
New
Status in linux package in Ubuntu:
In Progress
Bug description:
Test: tests/gate-basic-cosmic-rocky
As part of the config, the lxd charm creates a pool device depending
on the config. The test config is:
lxd_config = {
    'block-devices': '/dev/vdb',
    'ephemeral-unmount': '/mnt',
    'storage-type': 'zfs',
    'overwrite': True
}
The config drive is normally mounted on /mnt, and the lxd charm
umounts it as part of the start up. The /etc/fstab on the unit is:
# cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 defaults 0 0
LABEL=UEFI /boot/efi vfat defaults 0 0
/dev/vdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
/dev/vdc none swap sw,comment=cloudconfig 0 0
However, even after umount-ing the /mnt off of /dev/vdb, the zpool create command still fails:
# zpool create -f lxd /dev/vdb
/dev/vdb is in use and contains a unknown filesystem.
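One way to narrow this down (a sketch, not part of the original report) is to confirm whether the kernel still lists the device as mounted before retrying zpool create. On the affected unit you would read /proc/mounts directly; the sample snapshot and the `result` variable below are illustrative only:

```shell
# Sketch: check whether a device still appears as mounted.
# On a live system, replace $mounts_sample with the contents of /proc/mounts.
dev=/dev/vdb
mounts_sample='/dev/vda1 / ext4 rw 0 0
/dev/vdb /mnt ext4 rw 0 0'

if printf '%s\n' "$mounts_sample" | awk -v d="$dev" '$1 == d { found = 1 } END { exit !found }'; then
    result="$dev still mounted"
else
    result="$dev not mounted"
fi
echo "$result"
```

On the real machine, `grep /dev/vdb /proc/mounts` and `lsblk /dev/vdb` give the same information; if the device is genuinely unmounted yet zpool still reports it as in use, something else (the kernel or a systemd mount unit) is holding a reference.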
If the /etc/fstab is edited so that /dev/vdb is *never* mounted and then rebooted, then the zpool create command succeeds:
# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd   14.9G   106K  14.9G         -     0%     0%  1.00x  ONLINE  -
# zpool status lxd
  pool: lxd
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        lxd         ONLINE       0     0     0
          vdb       ONLINE       0     0     0

errors: No known data errors
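The fstab edit mentioned above can be as small as commenting out the /dev/vdb line (a sketch based on the /etc/fstab listing earlier in this report):

```
# /dev/vdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
```

After a reboot with this line disabled, /dev/vdb is never mounted and zpool create succeeds.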
Something odd is going on with cosmic (18.10) and the combination of
lxd, zfs and the kernel.
lxd version: 3.6
zfsutils-linux/cosmic,now 0.7.9-3ubuntu6
Linux: 4.18.0-10-generic
To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-lxd/+bug/1801349/+subscriptions