Discussion:
[libvirt] W/ commit 9e159b5: error from service: CreateMachine: Machine 'qemu-foo' already exists
Kashyap Chamarthy
2014-09-27 14:59:28 UTC
On Fedora 20 with:

libvirt RPMs built from git commit: 9e159b521dbf18c6da6976e54e29c8553f831eb6
qemu-2.1.1-1.fc22.x86_64

Just start a guest:
-----
$ virsh start ostack-controller
error: Failed to start domain ostack-controller
error: error from service: CreateMachine: Machine 'qemu-ostack-controller' already exists
-----


QEMU CLI:
-----
$ tail /var/log/libvirt/qemu/ostack-controller.log
2014-09-27 14:01:40.544+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name ostack-controller -S -machine pc-i440fx-1.6,accel=kvm,usb=off -cpu host -m 2048 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 24099854-24b2-4664-b425-a288114a43df -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ostack-controller.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/ostack-controller.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:7b:f3:a2,bus=pci.0,addr=0x2 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
Domain id=8 is tainted: host-cpu
2014-09-27 14:01:40.550+0000: shutting down
root@~$
-----

libvirtd is running:
-----
$ systemctl status libvirtd
libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
Active: active (running) since Sat 2014-09-27 19:32:24 IST; 47min ago
Docs: man:libvirtd(8)
http://libvirt.org
Main PID: 4569 (libvirtd)
CGroup: /system.slice/libvirtd.service
├─1715 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --dhcp-script=/usr/libexec/libvirt_leaseshelper
├─1716 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --dhcp-script=/usr/libexec/libvirt_leaseshelper
├─1762 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/openstackvms.conf --dhcp-script=/usr/libexec/libvirt_leaseshelper
├─1763 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/openstackvms.conf --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─4569 /usr/sbin/libvirtd

Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 15
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 16
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 17
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 18
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 19
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 20
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 21
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 22
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 26
Sep 27 19:32:40 tesla libvirtd[4569]: 2014-09-27 14:02:40.887+0000: 4682: debug : virFileClose:99 : Closed fd 29
-----
Guido Günther
2014-09-28 08:57:11 UTC
Post by Kashyap Chamarthy
libvirt RPMs built from git commit: 9e159b521dbf18c6da6976e54e29c8553f831eb6
qemu-2.1.1-1.fc22.x86_64
-----
$ virsh start ostack-controller
error: Failed to start domain ostack-controller
error: error from service: CreateMachine: Machine 'qemu-ostack-controller' already exists
-----
[. . .]
Kashyap Chamarthy
2014-09-29 01:42:29 UTC
Post by Kashyap Chamarthy
libvirt RPMs built from git commit: 9e159b521dbf18c6da6976e54e29c8553f831eb6
qemu-2.1.1-1.fc22.x86_64
-----
$ virsh start ostack-controller
error: Failed to start domain ostack-controller
error: error from service: CreateMachine: Machine 'qemu-ostack-controller' already exists
-----
[. . .]
Could you check if the patch from
https://www.redhat.com/archives/libvir-list/2014-September/msg01549.html
helps?
Just tested with the above patch - it doesn't seem to help.

Applied it:

$ git log | head -5
commit da0a33c3f286ff69b910bc2a84e9ec3a844e3ce8
Author: Guido Günther <***@sigxcpu.org>
Date: Thu Sep 25 14:30:42 2014 +0200

qemu: use systemd's TerminateMachine to kill all processes

And tested (by building RPMs, restarting libvirtd, and attempting to
start the guest again); still no luck:

$ sudo systemctl restart libvirtd
$ sudo systemctl daemon-reload

$ virsh start ostack-controller
error: Failed to start domain ostack-controller
error: error from service: CreateMachine: Machine 'qemu-ostack-controller' already exists

That's the version of systemd I'm using: systemd-210-7.fc21.x86_64.
--
/kashyap
Guido Günther
2014-09-29 06:17:11 UTC
Post by Kashyap Chamarthy
$ git log | head -5
commit da0a33c3f286ff69b910bc2a84e9ec3a844e3ce8
Date: Thu Sep 25 14:30:42 2014 +0200
qemu: use systemd's TerminateMachine to kill all processes
And tested (by building RPMs, restarting libvirtd and attempting to
$ sudo systemctl restart libvirtd
$ sudo systemctl daemon-reload
Did you reboot in between? The patch fixes the _stopping_ of the
vm. Once the vm is in the state you describe, the patch won't help.

An alternative to rebooting is to remove related processes manually, see:

http://bugs.debian.org/761521
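The manual cleanup amounts to making systemd-machined forget the stale
registration. A minimal sketch, assuming the machine name
'qemu-ostack-controller' from the error message above (the \x2d escaping
of the scope name is systemd's unit-name mangling, as in the Debian bug;
adjust the name for your guest):

```shell
# Sketch only: clear a stale systemd-machined registration so that
# libvirt's CreateMachine call can succeed again without a reboot.

vm=ostack-controller

# Ask machined to forget the machine, if it is still registered:
machinectl terminate "qemu-${vm}" 2>/dev/null || true

# systemd escapes '-' as '\x2d' in unit names, so the leftover scope
# unit for this guest is machine-qemu\x2dostack\x2dcontroller.scope:
scope="machine-qemu\\x2d$(printf '%s' "$vm" | sed 's/-/\\x2d/g').scope"

# Clear the scope unit left behind in the failed state by the dead QEMU:
systemctl reset-failed "${scope}" 2>/dev/null || true
```

machinectl terminate covers the case where machined still lists the
machine; systemctl reset-failed clears a scope unit stuck in the failed
state after QEMU exited.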

Cheers,
-- Guido
Kashyap Chamarthy
2014-09-29 09:18:34 UTC
Post by Guido Günther
Post by Kashyap Chamarthy
$ git log | head -5
commit da0a33c3f286ff69b910bc2a84e9ec3a844e3ce8
Date: Thu Sep 25 14:30:42 2014 +0200
qemu: use systemd's TerminateMachine to kill all processes
And tested (by building RPMs, restarting libvirtd and attempting to
$ sudo systemctl restart libvirtd
$ sudo systemctl daemon-reload
Did you reboot in between?
No, I didn't. I realized rebooting might solve the issue, since this
is related to systemd, but I didn't want to, as it's my main laptop
and I have several other things in progress.
Post by Guido Günther
The patch fixes the _stopping_ of the vm. Once the vm is in the state
you describe the patch won't help.
Ah.
Post by Guido Günther
http://bugs.debian.org/761521
I don't see an x2d<$vmname>.scope under machine.slice (mostly because
the guests are all off, though I can't start them anyhow):

$ tree /sys/fs/cgroup/cpuset/machine.slice
/sys/fs/cgroup/cpuset/machine.slice
├── cgroup.clone_children
├── cgroup.procs
├── cpuset.cpu_exclusive
├── cpuset.cpus
├── cpuset.mem_exclusive
├── cpuset.mem_hardwall
├── cpuset.memory_migrate
├── cpuset.memory_pressure
├── cpuset.memory_spread_page
├── cpuset.memory_spread_slab
├── cpuset.mems
├── cpuset.sched_load_balance
├── cpuset.sched_relax_domain_level
├── notify_on_release
└── tasks

And, the 'tasks' file is empty:

$ cat /sys/fs/cgroup/cpuset/machine.slice/tasks

Is there anything else that I can clean up manually here?

Thanks for the response, Guido.

--
/kashyap
Kashyap Chamarthy
2014-09-29 09:29:59 UTC
[. . .]
Post by Kashyap Chamarthy
Post by Guido Günther
http://bugs.debian.org/761521
I don't see x2d<$vmname>.scope under machine.slice (mostly as the
[. . .]
Is there anything else that I can clean up manually here?
I just quickly rebooted the machine, as I was testing something else,
and stumbled on this. But I must say, that's a very unhelpful error
message from systemd.

Thanks again, I now know what to look for.
--
/kashyap
Guido Günther
2014-09-29 17:29:23 UTC
Post by Kashyap Chamarthy
[. . .]
Post by Guido Günther
http://bugs.debian.org/761521
I don't see x2d<$vmname>.scope under machine.slice (mostly as the
$ tree /sys/fs/cgroup/cpuset/machine.slice
Please check /sys/fs/cgroup/systemd/machine.slice/. Does machinectl
list anything?
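Those two checks can be scripted; this is only a sketch (report_stale is
a made-up helper name, and the path is the cgroup-v1 systemd hierarchy
mentioned above — the one machined actually uses, unlike the cpuset
hierarchy inspected earlier):

```shell
# Print any leftover machine-qemu*.scope directories under the given
# cgroup root's machine.slice; these are what make CreateMachine fail.
report_stale() {
    stale=$(ls -d "$1"/machine.slice/machine-qemu*.scope 2>/dev/null || true)
    if [ -z "$stale" ]; then
        echo "no leftover qemu scopes"
    else
        echo "leftover scopes: $stale"
    fi
}

# Any machines still registered with machined?
machinectl list 2>/dev/null || true

# machined tracks its scopes in the systemd hierarchy:
report_stale /sys/fs/cgroup/systemd
```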
-- Guido
Kashyap Chamarthy
2014-09-29 17:41:17 UTC
Post by Guido Günther
[. . .]
Please check /sys/fs/cgroup/systemd/machine.slice/
Ah, I remember checking it there too, but I don't recall seeing it. Now
the system is not in that state anymore as I rebooted it.
Post by Guido Günther
Does machinectl list anything?
I didn't invoke it while the system was in the bad state. But after
rebooting, when I start a guest, yes, it does.

I'm running libvirt RPMs built with the patch you noted in this
thread. I'll keep an eye on it.

Thanks.
--
/kashyap