Tiberius Suspended 7607 Posts
this is really starting to irritate me: for some reason, when a VMware guest tries to communicate with the host, the network switch lights up on the port that the bridged physical interface is on (and no other port)
AND throughput is consequently limited to ~100Mbit
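(for what it's worth, you can confirm where the packets go by sniffing the physical uplink during a guest-to-host copy; the interface name and guest IP here are just examples:)
# if bridged guest<->host traffic really leaves the box, it shows up here
tcpdump -n -i eth0 host 192.168.1.50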
1. why is VMware traffic physically traversing the network when the host is supposed to be reachable on the virtual network
2. why is the traffic bounded by the physical network 12/11/2008 5:24:10 PM
Tiberius Suspended 7607 Posts
ho-hum, it looks like the host's network driver being compiled into the kernel, rather than built as a module, may have been preventing the vmnet driver from being used
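(for reference, roughly how to check which case you're in; tg3 / CONFIG_TIGON3 are just an example NIC driver, substitute yours:)
# a loaded module shows up here; a built-in driver won't
lsmod | grep tg3
# modinfo only finds a .ko if the driver exists as a module
modinfo -n tg3
# built-in drivers show as =y in the kernel config
grep CONFIG_TIGON3 /boot/config-$(uname -r)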
STAND BY FOR FURTHER UPDATES 12/11/2008 6:08:57 PM
Tiberius Suspended 7607 Posts
LOL
so a host-only network with the vmnet driver properly in use is still limited to (less than) 100Mbit
AND a host-only network using an e1000 virtual device with the e1000 driver still seems to be limited to (less than) 100Mbit, though it's a bit faster than vmnet
CPU usage is reported around 100% in the guest regardless of the driver, and 100% of the associated CPU on the host, so it appears the reason e1000 is marginally faster is its lower CPU utilization
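(if you want to reproduce the numbers, a plain netcat pipe is enough; the port and host-only address below are just examples:)
# on the host: sink whatever arrives on TCP 5001
nc -l -p 5001 > /dev/null
# in the guest: push 1GB of zeros at the host; dd prints the throughput when done
dd if=/dev/zero bs=1M count=1024 | nc 172.16.0.1 5001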
which begs the question: why in the fuck is host-to-guest network I/O so CPU-intensive, and what can be done about it 12/11/2008 7:30:45 PM
Tiberius Suspended 7607 Posts
after further tuning I am now at ~110Mbps average with the e1000 driver and e1000 virtual device
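(the virtual device is selected in the guest's .vmx; this is the line in question, assuming the first virtual NIC:)
ethernet0.virtualDev = "e1000"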
considering this VMware host sustains 100MB/s over gigabit to other physical hosts, I am pretty god damned disappointed in VMware right now
this is with VMware Server (non-ESX); I just updated the install and tuned some assorted host, guest, and VMware parameters 12/11/2008 8:50:49 PM
ScHpEnXeL Suspended 32613 Posts
what are you doing that needs that much bandwidth? 12/11/2008 9:01:42 PM
Tiberius Suspended 7607 Posts
nothing in particular, just tired of having to hop over to the host every time I realize a large file operation is going to take 10x as long in the guest
my frustration today has been that extracting a DVD image shouldn't take 15 minutes when I know the host can do it locally in one
I strongly suspect that jumbo frames would mostly resolve the issue, but so far neither bridged nor host-only networking seems to actually pass jumbo frames through to the host or the network, even though the e1000 driver lets the MTU be set guest-side (see the quick test below)
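(a minimal way to test whether jumbo frames actually survive the trip; the host-only address below is just an example:)
# guest side: raise the MTU on the virtual e1000, then send a full-size
# frame with DF set; 8972 = 9000 minus 28 bytes of IP + ICMP headers
ifconfig eth0 mtu 9000
ping -M do -s 8972 172.16.0.1
12/11/2008 9:17:37 PM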
evan All American 27701 Posts
this doesn't happen in ESX
no matter what you do, however, you're always going to have to traverse the network if you're going VM to host - the host has no connection to the vswitch.
it's CPU intensive because the CPU has to both emulate a NIC for the VM and process the traffic coming into the host (if you don't have a TOE), on top of its other CPU duties - that sucks up quite a few clock cycles
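(you can see which offloads the physical NIC actually supports with ethtool, eth0 being an example interface:)
# lists checksum / segmentation offload capabilities and whether they're enabled
ethtool -k eth0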
what sort of box are you running GSX on? your network throughput is going to be limited to the CPU resources that VMware makes available for NIC virtualization. if all of your clock time is being sucked up by the VMs themselves, there's not much left for the NICs. 12/11/2008 10:15:46 PM
Tiberius Suspended 7607 Posts
it's a dual socket A system with Athlon XP 2500+ processors, not exactly monster fast, but never in my wildest dreams would I have imagined virtualization overhead would limit it to ~15MB/sec
I'm pretty sure host-only networking doesn't traverse the network; I set up an additional host-only network for the host and VMs and do not see it generating any physical traffic
aside from supporting jumbo frames, what's different in ESX? 12/12/2008 7:49:40 AM
smoothcrim Universal Magnetic! 18966 Posts
Quote : "the host has no connection to the vswitch."
not true. the host OS in Server can see all the virtual NICs. the fact is that VMware Server's implementation of bridged networking is a hub rather than a switch, so both OSes see the full traffic of the guest OS (you can even watch it from the host, see below).
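(the vmnet interfaces are sniffable from the host like any other interface; vmnet1 is typically the host-only one:)
# on the host: watch the host-only virtual network directly
tcpdump -n -i vmnet1
12/12/2008 7:57:14 AM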
evan All American 27701 Posts
^ that's not how it works in ESX: the host OS can only see the vmkernel ports on a vswitch. maybe things are different for GSX 12/12/2008 11:35:02 AM
Aficionado Suspended 22518 Posts
anyone know how to get the /dev/vmnet# devices up and running in VMware Workstation 6.5.1?
my errors are:
Could not open /dev/vmnet0: No such file or directory
Failed to connect virtual device Ethernet0.
RHEL 5.2 host
there is no vmware-config.pl in this version
ifconfig only shows lo and eth0, none of the vmnet interfaces
oh, and when I click on the virtual network manager in VMware, I get nothing
[Edited on December 19, 2008 at 11:06 AM.]
12/19/2008 11:03:38 AM
Tiberius Suspended 7607 Posts
there should be a vmware-config.pl
I would rename /etc/vmware and do a reinstall, particularly if this was an upgrade, 'cause it sounds like that install is corrupt 12/19/2008 11:33:32 AM
Aficionado Suspended 22518 Posts
fresh install, no upgrade, no vmware-config.pl 12/19/2008 12:01:54 PM
Tiberius Suspended 7607 Posts
huh, it looks like 6.5.1 may actually not have a vmware-config.pl, but you need to run the vmware binary as root initially, and possibly the following:
vmware-modconfig --console --install-all
12/19/2008 1:12:55 PM
Aficionado Suspended 22518 Posts
yeah, I did that, and it won't start networking services 12/19/2008 8:22:44 PM
Aficionado Suspended 22518 Posts
Quote : "Starting VMware services:
Virtual machine monitor [ OK ]
Virtual machine communication interface [ OK ]
Blocking file system [ OK ]
Virtual ethernet [FAILED]
Unable to start services"
fuck 12/19/2008 9:02:00 PM
Tiberius Suspended 7607 Posts
is it loading the vmnet module? is it creating the /dev/vmnet* device nodes?
if it's loading the modules but not creating the nodes:
# vmnet device nodes are character devices, major 119, minor = vmnet number
cd /dev
for ((i=0;i<10;i++)); do mknod vmnet$i c 119 $i; done
after creating the nodes, or if they already existed, you may be able to bring the bridge up with something resembling the following:
# -d writes a pidfile; this bridges /dev/vmnet0 to the physical eth0
vmnet-bridge -d /var/run/vmnet-bridge-0.pid /dev/vmnet0 eth0
[Edited on December 19, 2008 at 10:43 PM. Reason : .]
12/19/2008 10:38:36 PM
Aficionado Suspended 22518 Posts
no, it hasn't been creating any of the /dev/vmnet# nodes, even after restarting
I have been using the MAKEDEV vmnet command to create them, with no luck
I'll try that other command later
thx 12/19/2008 10:45:38 PM
Aficionado Suspended 22518 Posts
worked
thx 12/22/2008 12:20:10 PM
Tiberius Suspended 7607 Posts
lol hacktacular, glad it worked
I am pretty sure it won't save the configuration that way; the init.d script is supposed to parse the config, create the virtual device nodes, and set up NAT / bind the bridges where appropriate in basically the same way
I'd guess the network config is blank or unreadable, and that's why it's erroring at network startup, possibly due to permission issues in /etc/vmware (quick check below)
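(a quick sanity check, assuming the standard install layout; /etc/vmware/locations is where my installs have recorded the vmnet setup, though 6.5 may differ:)
# everything under here needs to be readable at service start
ls -l /etc/vmware
# look for the VNET_* entries describing the virtual networks
grep -i vnet /etc/vmware/locations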
I have never used 6.5.1, though, and this "no vmware-config.pl" business begs the question: well then, where do you configure virtual networks? 12/22/2008 3:56:22 PM