Originally Posted by
fichte-fox
..., so I'm assuming I've only given it two actual cores (i.e. 4 threads).
Virtualization CPU allocation doesn't actually work that way by default; the hypervisor schedules vCPUs onto whatever physical threads happen to be free. I've only seen specific cores pinned to a specific VM in extremely low-latency, real-time VM deployments, and I have doubts that a toy hypervisor supports it.
Originally Posted by
fichte-fox
Thing is, Intel 12-series is using the new P-Core and E-Core mixed-core architecture, so I have no idea how the VM and linux are handling that. There should only be 16 threads (P-Cores are double-threaded, E-Cores are single-threaded), but it seems like it's recognising two threads for each E-Core, too. I wonder whether it's given it two E-Cores, two P-Cores, or some combination. Who knows? Haha
Don't assume that virtualbox knows anything about the CPUs, beyond which instructions they all support.
virtio is a paravirtualized device layer. It has nothing to do with network architecture. You can read more about how and what virtualbox supports in the online manual, chapter 6. This comes up so often here that I've memorized the chapter, though I stopped using virtualbox about a decade ago for a native Linux enterprise hypervisor.
I suspect the real issue with the slow terminal is more about font handling between X11 and Windows, but I don't really know. Running a full VM with a GUI just to get a terminal seems noobish. If all you need is a terminal, run a server installation and ssh in. People have been remotely managing systems over this type of connection for 25 yrs, all around the world; I've used it from 5 continents myself. GUIs slow everything down and should be avoided if performance is what you need.
One more idea. Run the VM full screen at the native resolution. I'd be surprised if the GUI performance didn't drastically improve.
now am using KVM/QEMU which is much smoother and faster in my experience.
I switched to KVM/qemu with libvirt initially around 2010, but the migration off Xen and virtualbox took about 18 months for all my VMs. Alas, if the host is Windows, there aren't many good free choices for a hypervisor. Of the free choices, vbox is probably the best. If you have $200, VMware Workstation is very nice, but beware that it delays support for the latest hardware and OS releases for months. During those delays, the usual outcome is either odd behavior or outright breakage.
For my 20 VMs, all under KVM these days, my default for Linux servers is 512MB of RAM and 1 vCPU. A really heavy VM gets 2 vCPUs and 4G of RAM.
For my desktop (20.04 server with fvwm as the window manager), which also runs inside a KVM VM, I provide 1 vCPU and 4G of RAM. I find this the right mix of light, usable, and free of most of the bloat that the DEs bring. I'd never recommend fvwm to someone with less than a year of Linux experience; probably only someone either using it on other Unix platforms or who has been running Linux at least 5 yrs would be interested. LXQt, XFCE, and the Mate DEs are each lighter than KDE or Gnome by enough of a margin to be noticeable.
To see what any Linux system gets for RAM+swap, use free -hm in a terminal. To learn about the CPU(s), use the lscpu command.
For example:
Code:
$ free -hm
              total        used        free      shared  buff/cache   available
Mem:           3.9G        3.0G        359M        828K        485M        603M
Swap:          979M        152M        827M
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: AuthenticAMD
...
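If you only want the raw numbers for use in a script, the kernel publishes them directly and you can skip parsing the human-formatted output above; a minimal sketch, assuming a Linux guest with coreutils and awk (which every Ubuntu install has):

```shell
# Logical CPU count as the guest sees it (threads, not physical cores)
nproc

# Total RAM from the kernel's own accounting in /proc/meminfo,
# converted from KiB to MiB
awk '/^MemTotal:/ {printf "MemTotal: %d MiB\n", $2 / 1024}' /proc/meminfo
```

Inside a VM these report exactly what the hypervisor granted, which is a quick way to confirm your vCPU/RAM settings actually took effect.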
There are other tools/commands that can capture lots of hardware and driver information. inxi is one; lshw is another. Neither is pre-installed, so I always install both packages on my systems BEFORE they are needed to troubleshoot any issues. For example, the network summary:
Code:
$ inxi -N
Network:
Device-1: Intel 82371AB/EB/MB PIIX4 ACPI type: network bridge
driver: piix4_smbus
Device-2: Red Hat Virtio network driver: virtio-pci
See the virtio device and driver?
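From inside the guest you can also confirm the virtio drivers actually loaded; a quick check, assuming a Linux guest with loadable modules (on a virtio guest you'd typically see virtio_net, virtio_pci, and friends):

```shell
# /proc/modules lists every loaded kernel module; grep -s stays quiet
# if the file is missing, and the fallback message covers both that
# case and a guest with no virtio hardware at all
grep -s virtio /proc/modules || echo "no virtio modules loaded (not a virtio guest?)"
```

If the kernel has virtio built in rather than as modules, nothing shows here even on a virtio guest, so treat an empty result as a hint, not proof.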
There are built-in commands to get information, but getting exactly what you'd like to see is less intuitive. An example command with output:
Code:
$ lspci -vk |perl -lne 'print if /Ethernet|Network/ .. /^[\w]*$/'
03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
Subsystem: ASUSTeK Computer Inc. I211 Gigabit Network Connection
Flags: bus master, fast devsel, latency 0, IRQ 31
Memory at fc800000 (32-bit, non-prefetchable) [size=128K]
I/O ports at d000 [size=32]
Memory at fc820000 (32-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: igb
Kernel modules: igb
07:00.0 Ethernet controller: Intel Corporation 82575GB Gigabit Network Connection (rev 02)
Subsystem: Intel Corporation Gigabit VT Quad Port Server Adapter
Flags: bus master, fast devsel, latency 0, IRQ 29
Memory at fc620000 (32-bit, non-prefetchable) [size=128K]
Memory at fc400000 (32-bit, non-prefetchable) [size=2M]
I/O ports at c020 [size=32]
Memory at fc644000 (32-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: igb
Kernel modules: igb
...
There are 3 more NICs in that box, so the output continues.
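sysfs offers yet another route that needs no extra packages at all; a small sketch, assuming a Linux system (purely virtual interfaces like lo have no driver link, so they are marked "none"):

```shell
# Map each network interface to its kernel driver via sysfs
for nic in /sys/class/net/*; do
    drv=$(readlink "$nic/device/driver" 2>/dev/null)
    drv=${drv##*/}                       # keep only the driver name
    printf '%s -> %s\n' "$(basename "$nic")" "${drv:-none}"
done
```

On the box above, each port would map to igb; in a virtio guest you'd see virtio_net instead.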
Hope that seeing a few commands helps a little. Of course, what is useful depends on your goals. Admins need to know different things than developers, though there is some overlap.