This bug was fixed in the package linux - 5.8.0-36.40+21.04.1
---------------
linux (5.8.0-36.40+21.04.1) hirsute; urgency=medium
* Packaging resync (LP: #1786013)
- update dkms package versions
[ Ubuntu: 5.8.0-36.40 ]
* debian/scripts/file-downloader does not handle positive failures correctly
(LP: #1878897)
- [Packaging] file-downloader not handling positive failures correctly
[ Ubuntu: 5.8.0-35.39 ]
* Packaging resync (LP: #1786013)
- update dkms package versions
* CVE-2021-1052 // CVE-2021-1053
- [Packaging] NVIDIA -- Add the NVIDIA 460 driver
-- Kleber Sacilotto de Souza <kleber.souza@canonical.com>  Thu, 07 Jan 2021 11:57:30 +0100
** Changed in: linux (Ubuntu)
Status: Confirmed => Fix Released
** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2021-1052
** CVE added: https://cve.mitre.org/cgi-bin/cvename.cgi?name=2021-1053
--
You received this bug notification because you are subscribed to linux
in Ubuntu.
Matching subscriptions: Bgg, Bmail, Nb
https://bugs.launchpad.net/bugs/1907262
Title:
raid10: discard leads to corrupted file system
Status in linux package in Ubuntu:
Fix Released
Status in linux source package in Trusty:
Invalid
Status in linux source package in Xenial:
Invalid
Status in linux source package in Bionic:
Fix Released
Status in linux source package in Focal:
Fix Released
Status in linux source package in Groovy:
Fix Released
Bug description:
Seems to be closely related to
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1896578
After updating the Ubuntu 18.04 kernel from 4.15.0-124 to 4.15.0-126,
the fstrim run triggered by fstrim.timer causes a large number of
mismatches between the two RAID10 component devices.
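As a side note (these commands are not part of the original report, only a quick way to confirm the weekly trigger), the timer and its last/next run can be inspected with:
systemctl list-timers fstrim.timer
systemctl status fstrim.timer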
This bug affects several machines in our company with different HW
configurations (all using ECC RAM). Both NVMe and SATA SSDs are
affected.
How to reproduce:
- Create a RAID10 array, an LVM volume and a filesystem on two SSDs
mdadm -C -v -l10 -n2 -N "lv-raid" -R /dev/md0 /dev/nvme0n1p2 /dev/nvme1n1p2
pvcreate -ff -y /dev/md0
vgcreate -f -y VolGroup /dev/md0
lvcreate -n root -L 100G -ay -y VolGroup
mkfs.ext4 /dev/VolGroup/root
mount /dev/VolGroup/root /mnt
- Write some data, sync and delete it
dd if=/dev/zero of=/mnt/data.raw bs=4K count=1M
sync
rm /mnt/data.raw
- Check the RAID device
echo check >/sys/block/md0/md/sync_action
- After finishing (see /proc/mdstat), check the mismatch_cnt (should be 0):
cat /sys/block/md0/md/mismatch_cnt
- Trigger the bug
fstrim /mnt
- Re-check the RAID device
echo check >/sys/block/md0/md/sync_action
- After finishing (see /proc/mdstat, or use the wait loop sketched after these steps), check the mismatch_cnt (probably in the range of N*10000):
cat /sys/block/md0/md/mismatch_cnt
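To avoid polling /proc/mdstat by hand, the completion of the check can be awaited with a small loop like this (a sketch, not part of the original steps; it assumes the array is /dev/md0):
# wait until the running check has finished, then print the mismatch counter
while [ "$(cat /sys/block/md0/md/sync_action)" != "idle" ]; do sleep 10; done
cat /sys/block/md0/md/mismatch_cnt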
After investigating this issue on several machines, it *seems* that the
first drive does the trim correctly while the second one goes wild. At
least, the number and severity of errors found by running fsck.ext4
from a USB stick live session suggest this.
To perform the single-drive evaluation, the RAID10 array was started with one drive at a time:
mdadm --assemble /dev/md127 /dev/nvme0n1p2
mdadm --run /dev/md127
fsck.ext4 -n -f /dev/VolGroup/root
vgchange -a n /dev/VolGroup
mdadm --stop /dev/md127
mdadm --assemble /dev/md127 /dev/nvme1n1p2
mdadm --run /dev/md127
fsck.ext4 -n -f /dev/VolGroup/root
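Note: the volume group presumably has to be re-activated before each fsck run; that step is not listed above. If needed, it would look roughly like this (an assumption, not part of the original procedure):
vgchange -a y VolGroup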
When these fsck runs are started without -n (i.e. with repairs
allowed), the directory structure on the first device seems to be OK,
while on the second device only the lost+found folder is left.
Side note: another machine using HWE kernel 5.4.0-56 (after running
-53 before) seems to have a very similar issue.
Unfortunately, the risk/regression assessment in the aforementioned bug
is not complete: the workaround only mitigates the issue during file
system creation. This bug, on the other hand, is triggered by a weekly
service (fstrim) and causes severe file system corruption.
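Until a fixed kernel is available, the weekly trigger can be paused as a stopgap (my own workaround suggestion, not something stated in this bug):
systemctl disable --now fstrim.timer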
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1907262/+subscriptions