Solution to VM missing from Virtual Machine Manager

I restarted a standalone KVM host running RHEL6, and when I opened Virtual Machine Manager, my guest virtual machine wasn't listed at all.

Eek!

Turns out I had been messing around with the XML definition in /etc/libvirt/qemu/machinename.xml (I know, I know...consider my hand slapped) and broke it. Fortunately, /var/log/libvirt/libvirtd.log was kind enough to let me know what went wrong:

error : catchXMLError:653 : at line 33: Opening and ending tag mismatch: source line 32 and disk

I freaked out again when I asked virsh to list my virtual machines and got back an empty list:

# virsh list
Id Name                 State
----------------------------------

Turns out the domain was just inactive: it couldn't start because I had broken its configuration file.

# virsh list --inactive
Id Name                 State
----------------------------------
  - machinename.example.com shut off

I repaired the XML configuration and made sure the VM was set to autostart on reboot:

# virsh autostart machinename.example.com
Domain machinename.example.com marked as autostarted
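
For the record, the safer way to fix a mangled definition is to let virsh do the editing (it re-validates the XML when you save), or, if you've already hand-edited the file, to tell libvirt to re-read it with virsh define. Something along these lines:

# virsh edit machinename.example.com
# virsh define /etc/libvirt/qemu/machinename.xml
# virsh start machinename.example.com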

Adrenaline level dropping fast.

Using RHEL6 to share a RAID volume via iSCSI: the Mystery of the Missing LUN

My use case was pretty simple. I wanted to share out a raw device via iSCSI to a nearby host on the 172.16.2.x network.

In addition to a minimal Red Hat Enterprise Linux 6 (or equivalent) install, a few packages are needed:

# yum install -y iscsi-initiator-utils scsi-target-utils sg3_utils lsof

I knew the device I wanted to share was /dev/sdb by looking at the output from dmesg:

# dmesg | grep sd
...
sd 0:0:1:0: [sdb] 7812456448 512-byte logical blocks: (3.99 TB/3.63 TiB)
...

The target definition in /etc/tgt/targets.conf was simple as well. Isn't everything simple after you've beaten your head against a wall for hours trying to get it working? The following configuration defines one target with one LUN, shared as a raw device that only the host at 172.16.2.2 is allowed to connect to:
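
# /etc/tgt/targets.conf -- one target, one LUN, locked down to a single initiator
<target iqn.2012-04.com.example:sharename>
    # export the raw device read/write; it shows up as LUN 1
    backing-store /dev/sdb
    # only this initiator IP may connect
    initiator-address 172.16.2.2
</target>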


Starting up the tgtd service and turning it on permanently led to wonderment and success:

# service tgtd start
# chkconfig tgtd on

Depending on your configuration, you may need to open port 3260 in your firewall.
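
On a stock RHEL6 iptables setup that means a rule along these lines in /etc/sysconfig/iptables (ahead of the final REJECT rule), followed by a restart of the service:

-A INPUT -m state --state NEW -p tcp --dport 3260 -j ACCEPT

# service iptables restart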

However, after a reboot, only the controller showed up (as LUN 0). LUN 1 had disappeared!

# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2012-04.com.example:sharename
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
    Account information:
    ACL information:
        172.16.2.2

Why is LUN 1 not showing up? Telling tgt-admin to reparse targets.conf in verbose mode reveals the reason:

# tgt-admin --update ALL -v
# Removing target: iqn.2012-04.com.example:sharename
tgtadm -C 0 --mode target --op delete --tid=1

# Adding target: iqn.2012-04.com.example:sharename
tgtadm -C 0 --lld iscsi --op new --mode target --tid 1 -T iqn.2012-04.com.example:sharename
# Device /dev/sdb is used by the system (mounted, used by swap?).
# Skipping device /dev/sdb - it is in use.
# You can override it with --force or 'allow-in-use yes' config option.
# Note - do so only if you know what you're doing, you may damage your data.

What? I assure you that /dev/sdb is NOT in use by the system. Show me mounts:

# mount

Nope, /dev/sdb does not appear anywhere in the output. Show me a list of open files on /dev/sdb:

# lsof | grep sdb

Nothing. Show me active swap:

# swapon -s

Nothing! Finally, choirboy in the #rhel IRC channel pointed me to the answer: you must create a filter in /etc/lvm/lvm.conf so that LVM leaves the device alone. Here's the appropriate section of lvm.conf, showing the new filter:

    # By default we accept every block device:
    #filter = [ "a/.*/" ]
    # Every block device except /dev/sdb, that is.
    filter = [ "r|/dev/sdb|" ]
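
Before rebooting, you can sanity-check that LVM is now ignoring the device; lvmdiskscan honors the filter, so /dev/sdb should no longer show up:

# lvmdiskscan | grep sdb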

After a restart, LUN 1 persists and the world is once again a happy place to be:

# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2012-04.com.example:sharename
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: 9206CBBG71194900CF07
            Size: 3999978 MB, Block size: 512
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:
        172.16.2.2

Edit: multipathd may also be the culprit. Take a look at the world-wide identifier (WWID) of /dev/sdb:

# scsi_id -g -u /dev/sdb
36a4badb051c18d0018e462537ac943d6

Is that identifier also listed under /dev/mapper?
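
One quick way to check, reusing the WWID from above (multipath -ll will also show any map built on top of the disk):

# ls /dev/mapper/ | grep 36a4badb051c18d0018e462537ac943d6
# multipath -ll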

If so, add

find_multipaths         yes

to the defaults section of /etc/multipath.conf, remove the matching line from /etc/multipath/wwids, and restart the multipath daemon:

# service multipathd restart
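
The defaults stanza of /etc/multipath.conf ends up looking something like this (other options omitted):

defaults {
        find_multipaths yes
}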

RHEV 3.0 Firewall: Annotated iptables Configuration for Netfilter

When Red Hat Enterprise Virtualization Manager for Servers is installed, it offers to configure iptables for you:

...
Firewall ports need to be opened.
You can let the installer configure iptables automatically overriding the current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables file found under /usr/share/rhevm/conf/iptables.example
...

Here's an annotated version of what the RHEVM installer will give you:

# ssh
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT

# XBAP clients for Administration Portal
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8006 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8007 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 8008 -j ACCEPT

# Web interface to Administration Portal
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 8080 -j ACCEPT
# Web interface to Administration Portal (SSL)
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 8443 -j ACCEPT

# Portmapper (rpcbind on RHEL6)
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT

# mountd; NFS MOUNTD_PORT (defined in /etc/sysconfig/nfs)
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT

# rquotad; NFS RQUOTAD_PORT (defined in /etc/sysconfig/nfs)
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT

# NFS STATD_PORT (defined in /etc/sysconfig/nfs)
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT

# nfsd for nfs and nfs_acl
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT

# nlockmgr; NFS LOCKD_TCPPORT (defined in /etc/sysconfig/nfs)
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT

# NFS LOCKD_UDPPORT (defined in /etc/sysconfig/nfs)
-A RH-Firewall-1-INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
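
If you're adding these by hand instead of letting the installer do it, they live in /etc/sysconfig/iptables inside a filter table that defines the RH-Firewall-1-INPUT chain; a rough skeleton, followed by a restart:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# ... the annotated rules above go here ...
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

# service iptables restart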

Why Your KVM Network Bridge Isn't Working

You're trying to get libvirt and KVM working on Red Hat Enterprise Linux 6 or CentOS 6, or maybe even Scientific Linux 6. But it's not going well.

You wanted your VMs to have full access to the network and you've discovered that virbr0 doesn't do that. Finally you stumbled upon the way to do it by Creating a Network Bridge using your primary interface. And yet, something just ain't right.

You've verified that bridge-utils is in fact present:

# rpm -q bridge-utils
bridge-utils-1.2-9.el6.x86_64

Telltale signs include this message during network startup:

Device bridge0 does not seem to be present, delaying initialization.

And the following entries in /var/log/messages:

/sys/devices/virtual/net/bridge0: couldn't determine device driver; ignoring...

and

ifcfg-rh: parsing /etc/sysconfig/network-scripts/ifcfg-bridge0 ...
NetworkManager[1802]: ifcfg-rh: error: Bridge connections are not yet supported

You've stared at your /etc/sysconfig/network-scripts/ifcfg-bridge0 file until your eyes hurt, yet nothing seems to be wrong:

DEVICE="bridge0"
TYPE="bridge"
BOOTPROTO="none"
ONBOOT="yes"
IPADDR="203.0.113.2"
NETMASK="255.255.255.0"
DELAY=0
GATEWAY="203.0.113.254"
DNS1="x.x.x.x"

I'm here to tell you: you forgot to capitalize the word Bridge in your TYPE entry.

DEVICE="bridge0"
TYPE="Bridge"
BOOTPROTO="none"
ONBOOT="yes"
IPADDR="203.0.113.2"
NETMASK="255.255.255.0"
DELAY=0
GATEWAY="203.0.113.254"
DNS1="x.x.x.x"
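
While you're at it, remember that the physical NIC's config (assuming eth0 is your primary interface) should simply hand its traffic to the bridge, and since NetworkManager on RHEL6 can't manage bridges (see the error above), it's safest to keep it out of the picture:

DEVICE="eth0"
TYPE="Ethernet"
BOOTPROTO="none"
ONBOOT="yes"
# enslave this NIC to the bridge
BRIDGE="bridge0"
# RHEL6 NetworkManager doesn't handle bridged configs
NM_CONTROLLED="no"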

Have a nice day!
