Earlier today we began upgrading our ESX 3.5 machines to VMware vSphere (ESX 4.0). We normally have 3 ESX hosts; one has been offline, retasked to testing our new SAN hardware. We have enough capacity on our esx2 machine to house our entire collection of VMs under nominal load. So I sent esx1 into maintenance mode and upgraded it to vSphere. The install went perfectly, and I got the networking all set up and joined it to our cluster.

Everything seemed good as I started to migrate machines over to esx1 so I could do the same upgrade on esx2… then everything went horribly wrong.

Our VMFS is a 4TB LUN. In ESX 3.5 land the command vdf reports 1.5TB available; in vSphere land it reports 8GB!
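If you want to cross-check what your own hosts think, the numbers can be pulled straight from the service console on both the 3.5 and 4.0 boxes. A minimal sketch, assuming the datastore is mounted under /vmfs/volumes/datastore1 (a placeholder name, not our actual volume):

# df-style report that understands VMFS volumes
vdf -h /vmfs/volumes/datastore1

# query the VMFS filesystem attributes, including capacity and free space
vmkfstools -P /vmfs/volumes/datastore1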

When we moved machines over to run on vSphere, their swapfiles started causing problems: vSphere creates a per-VM swapfile on the datastore at power-on, and with the free space misreported it thinks it doesn’t have enough room to run the VMs. Luckily, before everything went down I was able to get esx2 out of maintenance mode and migrate all the VMs back to it.

Right now our vSphere esx1 is dead in the water until I can figure out why it thinks 1.5TB = 8GB.

Unless I can come up with the answer tonight, I believe a call to VMware tech support will be in order in the morning.


UPDATE: We upgraded another ESX node to vSphere and it sees the old VMFS disk the same way. I think it’s an LVM problem, either with our RAID hardware or something corrupt on the old VMFS volume. We’ve decided to create a new VMFS volume on our EMC AX4 and call it a day. I really HATE sidestepping problems like this. I want to know WHY it’s broken like this. Grrr.
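If anyone wants to do the same from the service console rather than the vSphere Client, creating the new VMFS volume boils down to something like the sketch below. The NAA identifier and partition number are placeholders for illustration, not our actual AX4 LUN, so double-check the device path before running anything like this:

# format partition 1 of the new LUN as VMFS3 with a 1MB block size
# (the block size caps the maximum file size, so pick it to fit your largest VMDK)
vmkfstools -C vmfs3 -b 1m -S NewDatastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1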
