====== ESXi - CLI ======
===== Manually start 'Auto power on' procedure =====
This may be useful after exiting maintenance mode. The following command starts the VMs as described in /etc/vmware/hostd/vmAutoStart.xml.
~ # vim-cmd hostsvc/autostartmanager/autostart
Execution takes some time; the VMs are started one after another.
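If a VM is missing from the sequence, it can be added via the autostart manager. A minimal sketch, assuming the usual `update_autostartentry` argument order (vmid, start action, start delay, start order, stop action, stop delay, wait-for-heartbeat) — verify this on your host first; the helper name, the 120-second delays and the start order below are my own illustrative values:

```shell
# Hypothetical helper: enable the autostart manager and register one VM.
# The numeric values (120 s delays, start order 1) are example values only.
enable_vm_autostart() {
  vmid=$1
  vim-cmd hostsvc/autostartmanager/enable_autostart true || return 1
  vim-cmd hostsvc/autostartmanager/update_autostartentry \
      "$vmid" powerOn 120 1 guestShutdown 120 systemDefault
}
```

Usage: ''enable_vm_autostart 36'' — afterwards that VM participates in the autostart procedure above.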
===== Shut down all powered-on VMs =====
This command lists all running VMs, determines their Vmid and initiates a guest shutdown for each of them. Use it at your own risk.
~ # for vmid in $(vim-cmd vmsvc/getallvms | egrep "$(esxcli vm process list | grep "^[^ ]" | xargs | sed 's/ /|/g')" | awk '{ if($1 ~ /^[0-9]*$/ ) { print $1} }'); do vim-cmd vmsvc/power.shutdown $vmid; done
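The awk filter in the loop keeps only lines whose first column is a numeric Vmid. Its effect can be sketched against canned ''getallvms''-style output (the sample listing below is illustrative, not real host output):

```shell
# Canned sample resembling 'vim-cmd vmsvc/getallvms' output.
sample='Vmid  Name     File                               Guest OS    Version
35    my_vm_x  [datastore1] my_vm_x/my_vm_x.vmx   otherGuest  vmx-10
36    my_vm_y  [datastore1] my_vm_y/my_vm_y.vmx   otherGuest  vmx-10'
# Keep only lines whose first field is purely numeric -- these are the Vmids.
vmids=$(printf '%s\n' "$sample" | awk '{ if ($1 ~ /^[0-9]+$/) print $1 }')
echo "$vmids"
```

This prints ''35'' and ''36'', the ids that ''power.shutdown'' is then called with.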
===== Power off or kill a VM =====
**Attention**: This is a power off, not a guest shutdown, and therefore not the preferred way to shut down your VMs.
First, get a list of the running VMs and their corresponding //World ID//:
~ # esxcli vm process list
my_vm_x
World ID: 35930
Process ID: 0
VMX Cartel ID: 35699
UUID: 56 4d 67 bf 2c 52 e1 e0-33 1b 57 42 02 d8 d8 3b
Display Name: my_vm_x
Config File: /vmfs/volumes/5463f5fa-e9e1b35a-983d-6451066626e8/my_vm_x/my_vm_x.vmx
my_vm_y
World ID: 36398
Process ID: 0
VMX Cartel ID: 36397
UUID: 56 4d f9 f5 10 4e fa e0-42 e8 80 17 a2 bf b5 93
Display Name: my_vm_y
Config File: /vmfs/volumes/5463f5fa-e9e1b35a-983d-6451066626e8/my_vm_y/my_vm_y.vmx
This is how the kill command is constructed:
~ # esxcli vm process kill --type={soft,hard,force} --world-id={world id}
Use the types in this order: try a soft kill first; if the VM won't power off, try hard; if that doesn't work either, use force as the last option.
For example:
~ # esxcli vm process kill --type=soft --world-id=35930
and if soft is not enough:
~ # esxcli vm process kill --type=hard --world-id=35930
~ # esxcli vm process kill --type=force --world-id=35930
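The escalation can be wrapped in a small helper that walks through the three types and stops as soon as the world id disappears from the process list. This is a sketch of the policy described above, not an official VMware tool; the helper name and the 2-second wait are my own choices:

```shell
# Try soft, then hard, then force; after each attempt, check whether the
# world id is still present in the process list.
kill_vm() {
  world_id=$1
  for type in soft hard force; do
    esxcli vm process kill --type="$type" --world-id="$world_id"
    sleep 2   # give the VMX process a moment to exit
    if ! esxcli vm process list | grep -q "World ID: ${world_id}"; then
      return 0   # the VM process is gone
    fi
  done
  echo "world ${world_id} survived all kill types" >&2
  return 1
}
```

Usage: ''kill_vm 35930''.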
===== Create logical RAID 1 drive with two new disks =====
//Environment: HP Proliant MicroServer gen8 with ESXi 6.0.0 HP customized ISO//
First, check if the system detected the new disks.
~ # esxcli hpssacli cmd -q "ctrl slot=0 pd all show"
Dynamic Smart Array B120i RAID in Slot 0 (Embedded)
array A
physicaldrive 3I:0:3 (port 3I:box 0:bay 3, SATA, 1500.3 GB, OK)
physicaldrive 4I:0:4 (port 4I:box 0:bay 4, SATA, 1500.3 GB, OK)
unassigned
physicaldrive 1I:0:1 (port 1I:box 0:bay 1, SATA, 3 TB, OK)
physicaldrive 2I:0:2 (port 2I:box 0:bay 2, SATA, 3 TB, OK)
In this example, there are two unassigned disks (1I:0:1, 2I:0:2). With the following command, a new logical drive (array B) is created from these two disks as a RAID 1 mirror.
~ # esxcli hpssacli cmd -q "ctrl slot=0 create type=ld drives=1I:0:1,2I:0:2 raid=1"
The command above can be shortened: because the two disks used are the only unassigned ones, the keyword **all** can be used instead of listing the disks explicitly.
~ # esxcli hpssacli cmd -q "ctrl slot=0 create type=ld drives=all raid=1"
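When there are more unassigned disks than you want in the mirror, the ''drives='' list has to be built explicitly. The IDs can be scraped from the ''pd all show'' output; the snippet below demonstrates this on the canned listing from above:

```shell
# Canned 'pd all show' output (taken from the listing above).
pd_output='Dynamic Smart Array B120i RAID in Slot 0 (Embedded)
   array A
      physicaldrive 3I:0:3 (port 3I:box 0:bay 3, SATA, 1500.3 GB, OK)
      physicaldrive 4I:0:4 (port 4I:box 0:bay 4, SATA, 1500.3 GB, OK)
   unassigned
      physicaldrive 1I:0:1 (port 1I:box 0:bay 1, SATA, 3 TB, OK)
      physicaldrive 2I:0:2 (port 2I:box 0:bay 2, SATA, 3 TB, OK)'
# Collect the drive IDs listed after the 'unassigned' marker, comma-separated.
drives=$(printf '%s\n' "$pd_output" \
  | awk '/unassigned/ { u=1; next }
         u && /physicaldrive/ { ids = ids (ids ? "," : "") $2 }
         END { print ids }')
echo "$drives"
```

This yields ''1I:0:1,2I:0:2'', ready to be spliced into the ''create type=ld drives=... raid=1'' command.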
Now, the result can be checked.
~ # esxcli hpssacli cmd -q "ctrl slot=0 pd all show"
Dynamic Smart Array B120i RAID in Slot 0 (Embedded)
array A
physicaldrive 3I:0:3 (port 3I:box 0:bay 3, SATA, 1500.3 GB, OK)
physicaldrive 4I:0:4 (port 4I:box 0:bay 4, SATA, 1500.3 GB, OK)
array B
physicaldrive 1I:0:1 (port 1I:box 0:bay 1, SATA, 3 TB, OK)
physicaldrive 2I:0:2 (port 2I:box 0:bay 2, SATA, 3 TB, OK)
~ # esxcli hpssacli cmd -q "ctrl slot=0 ld all show"
Dynamic Smart Array B120i RAID in Slot 0 (Embedded)
array A
logicaldrive 1 (1.4 TB, RAID 1, OK)
array B
logicaldrive 2 (2.7 TB, RAID 1, OK)
===== Label new volume =====
I added two disks to my ESXi host which were previously used in my NAS. I created a new array on the controller, started a rescan and tried to add new storage. The volume was listed, but when I went ahead I got the following error:
Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for object "ha-datastoresystem" on ESXi "my_host" failed.
It seems the old partition table from the NAS confuses the ESXi host.
Solution: relabel the device.
The disk name is visible in the GUI.
~ # partedUtil mklabel /vmfs/devices/disks/naa.600508b1001c73ca6d5fa78a068685e5 gpt
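Before wiping the label it is worth looking at what is actually on the disk; ''partedUtil getptbl'' prints the current partition table. A small sketch combining both steps (the helper name is my own; ''getptbl'' and ''mklabel'' are the real partedUtil subcommands):

```shell
# Show the existing partition table, then write a fresh GPT label.
relabel_disk() {
  disk=$1
  echo "Current partition table on ${disk}:"
  partedUtil getptbl "$disk"
  partedUtil mklabel "$disk" gpt
}
```

Usage: ''relabel_disk /vmfs/devices/disks/naa.600508b1001c73ca6d5fa78a068685e5'' — afterwards, adding the storage should succeed.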