LVM issue with lvremove (Logical volume contains a filesystem in use.)

asked 2012-11-12 12:35:54 +0000

NeilB

updated 2012-11-12 17:00:28 +0000

Hi Guys,

I have a strange LVM issue while removing an LVM snapshot.

I'm using LVM snapshots to create backups of VirtualBox VDIs with Bacula.

Before and after each backup, Bacula runs a script that takes an LVM snapshot, mounts it, and afterwards unmounts and releases it.

The script created the lv_vms_backup volume and successfully mounted it (read-only) at /mnt/lv_vms_backup. The only process that accessed this mountpoint was bacula-fd. The volume is no longer mounted.
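For context, the snapshot script in question (which wasn't posted) would look roughly like this. The VG, LV, and mountpoint names come from this question; the snapshot size and options are assumptions:

```shell
# Sketch of the pre/post-backup snapshot script (hypothetical: only the
# VG/LV/mountpoint names are from the question; size and flags are guesses).

VG=vg_virtualbox
LV=lv_vms
SNAP=lv_vms_backup
MNT=/mnt/lv_vms_backup
SNAP_SIZE=100G   # COW space for writes to the origin during the backup window

# With DRY_RUN=1 the commands are printed instead of executed, so the
# flow can be inspected without touching any volumes.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }

snapshot_start() {   # before the backup: snapshot the origin, mount read-only
  run lvcreate --snapshot --size "$SNAP_SIZE" --name "$SNAP" "/dev/$VG/$LV" &&
  run mount -o ro "/dev/$VG/$SNAP" "$MNT"
}

snapshot_stop() {    # after the backup: unmount and drop the snapshot
  run umount "$MNT" &&
  run lvremove -f "/dev/$VG/$SNAP"
}
```

Bacula would call snapshot_start from a RunBeforeJob (or ClientRunBeforeJob) directive and snapshot_stop from the matching RunAfterJob.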

lvremove -f /dev/vg_virtualbox/lv_vms_backup

Logical volume vg_virtualbox/lv_vms_backup contains a filesystem in use.

grep lv_vms_backup /proc/mounts

shows nothing.

lsof |grep lv_vms_backup

shows nothing.

lsof +D /mnt/lv_vms_backup

shows nothing.

Mounting the volume is also weird:

mount /dev/mapper/vg_virtualbox-lv_vms_backup /mnt/lv_vms_backup

mount: /dev/mapper/vg_virtualbox-lv_vms_backup is already mounted or /mnt/lv_vms_backup busy

Mounting read-only works fine.
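When lsof and /proc/mounts come up empty but lvremove still refuses, it can help to ask device-mapper itself who is holding the device open. A sketch of extra checks (run as root; dm-3 corresponds to the "Block device 253:3" line in the lvdisplay output below, and list_open_dm is a hypothetical helper):

```shell
# Extra places to look when nothing shows in lsof or /proc/mounts.
show_holders() {   # needs root and real DM devices; not called automatically
  # open count per device-mapper device
  dmsetup info -c --noheadings --separator : -o name,open
  # processes using the block device or a filesystem mounted from it
  fuser -vm /dev/mapper/vg_virtualbox-lv_vms_backup
  # kernel-level holders (e.g. other DM devices stacked on top)
  ls /sys/block/dm-3/holders/
}

# Hypothetical helper: filter the dmsetup listing above down to devices
# with a non-zero open count.
list_open_dm() { awk -F: '$2 + 0 > 0 { print $1 " open=" $2 }'; }
```

A non-zero open count with no visible userspace holder usually points at something in kernel space (NFS, stacked devices, loop mounts) rather than a normal process.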

lvdisplay
--- Logical volume ---
LV Path                /dev/vg_virtualbox/lv_swap
LV Name                lv_swap
VG Name                vg_virtualbox
LV UUID                3hsKCf-TOj5-h94P-DVzG-nTqZ-hn40-q3jUEU
LV Write Access        read/write
LV Creation host, time virtualbox, 2012-11-07 22:35:15 +0100
LV Status              available
# open                 2
LV Size                2.00 GiB
Current LE             64
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0

--- Logical volume ---
LV Path                /dev/vg_virtualbox/lv_root
LV Name                lv_root
VG Name                vg_virtualbox
LV UUID                iTyqrl-fvEq-9LLR-JO96-rdxX-hW4V-n40PLB
LV Write Access        read/write
LV Creation host, time virtualbox, 2012-11-07 22:35:16 +0100
LV Status              available
# open                 1
LV Size                9.78 GiB
Current LE             313
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:1

--- Logical volume ---
LV Path                /dev/vg_virtualbox/lv_vms
LV Name                lv_vms
VG Name                vg_virtualbox
LV UUID                oyVk5m-4UUk-ikUT-ojg5-yxLg-M1Vo-1HXpO4
LV Write Access        read/write
LV Creation host, time virtualbox, 2012-11-08 00:02:50 +0100
LV snapshot status     source of
                      lv_vms_backup [active]
LV Status              available
# open                 1
LV Size                2.72 TiB
Current LE             89203
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:2

--- Logical volume ---
LV Path                /dev/vg_virtualbox/lv_vms_backup
LV Name                lv_vms_backup
VG Name                vg_virtualbox
LV UUID                BpcrbS-RFGO-t2b2-xD1H-2dgF-IoJ0-qfaRll
LV Write Access        read/write
LV Creation host, time virtualbox, 2012-11-09 17:17:59 +0100
LV snapshot status     active destination for lv_vms
LV Status              available
# open                 1
LV Size                2.72 TiB
Current LE             89203
COW-table size         2.72 TiB
COW-table LE           89204
Allocated to snapshot  1.48%
Snapshot chunk size    4.00 KiB
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:3

Any ideas?

Greetings, Neil


2 Answers


answered 2013-01-15 22:57:22 +0000

inveratulo

I had a similar problem, again with no real indication that anything remained in use, just as NeilB described in his original post. In my case I had forgotten that one of the directories on the mounted partition was configured in /etc/exports. I commented out the offending line, restarted NFS, and then I could umount and lvremove properly. I suspect there are other filesystem-sharing services that exhibit similar behavior.
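This makes sense because the kernel NFS server holds the filesystem open without any userspace process appearing in lsof. Sketched as a script, the check and fix might look like this (is_exported and release_nfs_hold are hypothetical helpers; the path is the question's mountpoint):

```shell
# Tiny helper (hypothetical): does the given exports file mention the path?
# Exporting even a subdirectory of the mountpoint is enough to pin it.
is_exported() { grep -q "^[[:space:]]*$1" "$2"; }

release_nfs_hold() {   # needs root; not called automatically
  if is_exported /mnt/lv_vms_backup /etc/exports; then
    # unexport just that path instead of restarting the whole NFS service
    exportfs -u '*:/mnt/lv_vms_backup'
    exportfs -v        # verify the export is gone, then retry umount/lvremove
  fi
}
```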

This thread was the top hit on Google for this problem, so I figured that if anyone else stumbled across it and, like me, couldn't reboot, this would be useful to look at.


Comments

Thanks! Your answer pointed me in the "right" direction. I do not have any NFS exports, but randomly stopping services helped ;). I have had this issue twice since my post. Once a restart of apache2 did the job; last time I had to stop ntpd... weird :(

NeilB ( 2013-06-20 12:21:52 +0000 )edit

answered 2014-01-31 09:04:28 +0000

This is the first result on Google, so here is what happened to me.

lvremove -f /dev/vg_data/backup

File descriptor 7 (pipe:[835434706]) leaked on lvremove invocation. Parent PID 31676: bash
/dev/vg_data/backup: read failed after 0 of 4096 at 1073737564160: Input/output error
/dev/vg_data/backup: read failed after 0 of 4096 at 1073737621504: Input/output error
/dev/vg_data/backup: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_data/backup: read failed after 0 of 4096 at 4096: Input/output error
Logical volume vg_data/backup contains a filesystem in use.

OK, so it is in use. Fine. As described above, none of the usual commands yielded a result, BUT:

# fuser -kuc /dev/vg_data/backup

/dev/vg_data/backup: 2302c(root)

So there is a process using it, even though it didn't show up anywhere. Let's try to kill it:

kill -9 2302

bash: kill: (2302) - No such process

OK, it reports there is no such process. But was it ever there?

# fuser -kuc /dev/vg_data/backup

It no longer reports any process sitting on the device. So let's try that remove again:

lvremove -f /dev/vg_data/backup

File descriptor 7 (pipe:[835434706]) leaked on lvremove invocation. Parent PID 31676: bash
/dev/vg_data/backup: read failed after 0 of 4096 at 1073737564160: Input/output error
/dev/vg_data/backup: read failed after 0 of 4096 at 1073737621504: Input/output error
/dev/vg_data/backup: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_data/backup: read failed after 0 of 4096 at 4096: Input/output error
Logical volume "backup" successfully removed

That did the trick. If someone feels like reporting it to the developers, feel free; this is not the first case. Random machines do this to me here and there, but this time I didn't give up. Apparently the PID still existed in some internal list even after its death. (Never mind the I/O errors; those are expected, since the snapshot ran out of allocated space a few days ago. :D)
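One note: fuser -k already sends SIGKILL to the processes it finds, which would explain why the manual kill -9 afterwards reported no such process. The whole kill-then-retry dance can be wrapped in a small script. A sketch (the retry count and sleep are arbitrary, and SIGKILLing whatever holds the device is a blunt instrument):

```shell
# Generic retry helper: run a command up to N times until it succeeds.
retry() {   # usage: retry N cmd args...
  tries=$1; shift
  i=1
  while :; do
    "$@" && return 0
    [ "$i" -ge "$tries" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

force_lvremove() {   # not called automatically: needs root and a real LV
  fuser -kuc "$1"          # SIGKILL anything still holding the device
  retry 3 lvremove -f "$1"
}
```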

