Current VirtualBox recommendations are to use the virtual Intel network cards for guest machines and to configure them for bridged networking. Until now, the only choice for OS/2 was the older, IBM-supplied Intel E1000 driver. The result? Performance only slightly better than that of the default AMD PCnet-FAST III virtual adapter.
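For reference, a bridged configuration with an Intel card can also be set from the host's command line; a minimal sketch, assuming a VM named "OS2-VM" and a host interface named "eth0" (both placeholders):

  VBoxManage modifyvm "OS2-VM" --nic1 bridged --bridgeadapter1 eth0
  VBoxManage modifyvm "OS2-VM" --nictype1 82540EM

Here, 82540EM selects the Intel PRO/1000 MT Desktop card, the adapter used for the tests below.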
Now, however, there is a choice. Arca Noae subscribers may use the all-new MultiMac Legacy EM driver (MMLEM). This driver is a breakthrough for virtual machines running under VirtualBox, with performance measured at more than twice the throughput of the older driver.
Some comparisons from netio 1.3 across a 1Gbps unmanaged switch, from an OS/2 VM running the latest 32-bit TCP/IP stack to a 64-bit Linux server running on bare metal[1]:
E1000:
  TCP connection established.
  Packet size  1k bytes: 15.04 MByte/s Tx, 9168.71 KByte/s Rx.
  Packet size  2k bytes: 19.64 MByte/s Tx, 11.99 MByte/s Rx.
  Packet size  4k bytes: 22.38 MByte/s Tx, 13.58 MByte/s Rx.
  Packet size  8k bytes: 23.72 MByte/s Tx, 17.62 MByte/s Rx.
  Packet size 16k bytes: 24.83 MByte/s Tx, 20.62 MByte/s Rx.
  Packet size 32k bytes: 19.52 MByte/s Tx, 17.82 MByte/s Rx.
  Done.
MMLEM:
  TCP connection established.
  Packet size  1k bytes: 13.19 MByte/s Tx, 9183.80 KByte/s Rx.
  Packet size  2k bytes: 18.65 MByte/s Tx, 12.20 MByte/s Rx.
  Packet size  4k bytes: 27.93 MByte/s Tx, 14.98 MByte/s Rx.
  Packet size  8k bytes: 39.91 MByte/s Tx, 19.29 MByte/s Rx.
  Packet size 16k bytes: 50.39 MByte/s Tx, 22.74 MByte/s Rx.
  Packet size 32k bytes: 28.07 MByte/s Tx, 19.19 MByte/s Rx.
  Done.
(Note that the falloff between the 16k and 32k packet sizes appears to be an issue within VirtualBox itself, as the same tests, when run against the host machine, actually report an improvement in throughput for the 32k packet size over the 16k one. A 32-bit Linux guest does not show this falloff.)
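For anyone wishing to reproduce these numbers, the output above appears to be netio's standard TCP sweep. A rough sketch of the invocations, assuming stock netio 1.3 defaults and a placeholder server name of "myserver":

  netio -s -t        (started first, on the Linux server)
  netio -t myserver  (on the OS/2 VM)

With no explicit packet size given, netio sweeps the sizes shown above (1k through 32k) on its own.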
As you can see, peak transmit throughput, using 16k byte packets, went from 24.83 MByte/s (198.64 Mbps) to 50.39 MByte/s (403.12 Mbps), slightly more than double. If you transfer large files across your network to and from your OS/2 VM, this implies that such transfers could take less than half the time[2].
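To put that in concrete (if idealized) terms, ignoring protocol overhead and other real-world factors: moving a hypothetical 1GB (1024MB) file at 24.83 MByte/s takes roughly 41 seconds, while at 50.39 MByte/s the same transfer takes roughly 20 seconds.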
In addition, while the above tests were run using the Intel PRO/1000 MT Desktop (82540EM) virtual network card in the guest, the MMLEM driver also supports the Intel PRO/1000 T Server (82543GC) and Intel PRO/1000 MT Server (82545EM) virtual network cards available in VirtualBox 5.x. Either of these may yield even better throughput, and neither is supported by the older IBM-supplied driver.
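Should you want to try one of those server-class cards, the same VBoxManage approach applies (VM name again a placeholder):

  VBoxManage modifyvm "OS2-VM" --nictype1 82543GC

or:

  VBoxManage modifyvm "OS2-VM" --nictype1 82545EM

The equivalent setting is also available on the Network page of the VM's settings in the VirtualBox GUI.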
There are other benefits to the Arca Noae Drivers & Software subscription for virtualized users of OS/2, including full shutdown and virtual power-off of the VM when using Arca Noae’s ACPI PSD. So if you thought there wasn’t much value in subscribing just to run virtual machines, you might want to look again.
[1] Guest machine running eCS 2.1, configured with 2GB RAM, Intel PRO/1000 MT Desktop (82540EM) virtual network card, 32-bit TCP/IP stack, and default sockets. Host machine running openSUSE LEAP 42.1 x64, 16GB RAM, single Intel 82567LF-2 onboard network adapter, and default adapter settings. NETIO target (server) machine running openSUSE 13.2 x64, 32GB RAM, dual Broadcom NetXtreme II BCM5708 onboard network adapters in a bonded active-backup configuration, with default adapter settings for the physical bond slaves. Switch was a Cisco SR2024 (unmanaged 10/100/1000).
[2] Many factors contribute to overall network throughput, including protocol, aggregate traffic, CPU activity, etc. These figures are meant as a guideline and not a guarantee of performance.