Hello,

I'm running Ubuntu 14.04 with KVM and libvirt, and I'm trying to get memory ballooning to work.

Initially, I had trouble with the program I'm using to deploy VMs (it's home-grown, but essentially it generates libvirt XML configs), because I was setting <currentMemory> to 1. As the VM booted I could watch currentMemory increase and decrease in virt-manager, even though the VM would kernel panic. With that fixed, I'm now setting currentMemory to 1024 MB and memory to 4096 MB. The VM boots with 1024 MB of RAM, but it never acquires more RAM as memory pressure increases; in fact, the kernel kills the offending process when it tries to use too much memory, and stress likewise fails to allocate the requested amount.

Note: I've intentionally deployed these VMs without swap, with the intent of using memory ballooning in lieu of swap. Since I'm using Ceph on the backend for storage, "swap space" would really be network storage, which is less than ideal.

What do I need to do on the guest side to get memory ballooning to work? The manpage I found for virtio_balloon is from FreeBSD, and I'm not sure it applies. According to this forum post, it seems I should just be able to set currentMemory and memory to different values and set the memballoon model, and ballooning should work (my XML config doesn't define the balloon's address; it appears to be autogenerated at runtime).
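For what it's worth, my current (non-authoritative) understanding from reading around is that virtio ballooning is host-driven: the guest never grows its own memory under pressure; the host has to push a new balloon target down. A minimal sketch of doing that by hand, assuming the domain name from my config below and that the guest kernel has the virtio_balloon module loaded (lsmod | grep virtio_balloon):

```shell
# virsh setmem takes the new balloon target in KiB by default,
# so convert 3072 MiB to KiB first:
target_kib=$((3072 * 1024))
echo "$target_kib"        # 3145728

# Then, on the host (not run here; domain name is from my setup):
# virsh setmem ubuntu-14.04-balloon "$target_kib" --live
```

If that's right, it would explain why nothing happens automatically when the guest is under pressure.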

Here's the nitty-gritty. I'm running "watch -n .5 free -m" while running these tests:

stress command to test memory usage (interestingly, the same command fails intermittently with the .9 multiplier, and I have to drop it to .88), found here:

Code:
root@trusty64:/lib/modules/3.13.0-24-generic# stress --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * .9 }' < /proc/meminfo)k --vm-keep -m 1 
stress: info: [5477] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
^C
root@trusty64:/lib/modules/3.13.0-24-generic# stress --vm-bytes $(awk '/MemFree/{printf "%d\n", $2 * .9 }' < /proc/meminfo)k --vm-keep -m 1 
stress: info: [5493] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [5494] (495) hogvm malloc failed: Cannot allocate memory
stress: FAIL: [5493] (395) <-- worker 5494 returned error 1
stress: WARN: [5493] (397) now reaping child worker processes
stress: FAIL: [5493] (452) failed run completed in 0s
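As an aside, my guess on the intermittent .9 failures is that the margin between MemFree and what a single malloc can actually get (kernel reserves, heuristic overcommit, other processes allocating in between) is only a few percent, so .9 sits right on the edge. If the kernel exposes MemAvailable (I believe it appeared around 3.14, so this 3.13 guest may only have MemFree), that would be the safer number. A sketch of the same calculation with the .88 multiplier, demonstrated on sample input so the math is visible:

```shell
# Sample line standing in for /proc/meminfo (value is made up):
sample='MemAvailable:     800000 kB'
printf '%s\n' "$sample" | awk '/MemAvailable/{printf "%dk\n", $2 * .88}'
# prints: 704000k

# The real invocation would then be something like:
# stress --vm-bytes $(awk '/MemAvailable/{printf "%d", $2 * .88}' /proc/meminfo)k --vm-keep -m 1
```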
Perl commands to stress memory; the second one just artificially slows the loop down so "watch" can keep up:

Code:
perl -E 'while (1) { $foo .= "X" x 8 }'
perl -E 'while (1) { $foo .= "X" x 8; select undef, undef, undef, .0000000001 }'

root@trusty64:~# perl -E 'while (1) { $foo .= "X" x 8 }'
Out of memory!
root@trusty64:~# perl -E 'while (1) { $foo .= "X" x 8; select undef, undef, undef, .0000000001 }'
Out of memory!
Here's my VM config and template:

vm command line:

I think the important bits are these (interestingly, I don't see the 1024 MB minimum memory specified anywhere):

Code:
qemu-system-x86_64 -enable-kvm -m 4096 -realtime mlock=off -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8

And here's the full line:
Code:
qemu-system-x86_64 -enable-kvm -name ubuntu-14.04-balloon -S -machine pc-i440fx-1.4,accel=kvm,usb=off -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 
-uuid 0cb8db1c-44f5-4093-b2d2-0fc957a38ed8 -no-user-config -nodefaults 
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/ubuntu-14.04-balloon.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot menu=off,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=rbd:libvirt-pool/ubuntu-14.04-balloon-os:auth_supported=none:mon_host=192.168.0.35\:6789\;192.168.0.2\:6789\;192.168.0.40\:6789,if=none,id=drive-virtio-disk0 
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=rbd:libvirt-pool/ubuntu-14.04-balloon-storage:auth_supported=none:mon_host=192.168.0.35\:6789\;192.168.0.2\:6789\;192.168.0.40\:6789,if=none,id=drive-virtio-disk1 
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=31,id=hostnet0,vhost=on,vhostfd=32 
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:09:37:e7,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-vnc 127.0.0.1:3 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8
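Regarding not seeing 1024 on the command line: from what I can tell, only the maximum (-m 4096) is a QEMU argument, and libvirt pushes the currentMemory target to the balloon over the monitor after the guest starts. I believe it can be queried from the host with something like "virsh qemu-monitor-command ubuntu-14.04-balloon --hmp 'info balloon'" (hypothetical output: balloon: actual=1024). The KiB values in the XML do line up with the MiB values here:

```shell
# Sanity-check: libvirt XML values are KiB; QEMU -m is MiB.
echo $((4194304 / 1024))   # <memory>        -> 4096 MiB, matches -m 4096
echo $((1048576 / 1024))   # <currentMemory> -> 1024 MiB, the balloon target
```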
Template (note: in each pair, the second line shows the evaluated variable):

Code:
<domain type='kvm'>

  <memory unit='KiB'>[% guest.memory %]</memory>
  <memory unit='KiB'>4194304</memory>

  <currentMemory unit='KiB'>[% guest.current_memory %]</currentMemory>
  <currentMemory unit='KiB'>1048576</currentMemory>

  <vcpu placement='static'>[% guest.vcpu %]</vcpu>
  <vcpu placement='static'>2</vcpu>


  <memtune>
    <min_guarantee unit='KiB'>[% guest.current_memory %]</min_guarantee>
    <min_guarantee unit='KiB'>1048576</min_guarantee>
  </memtune>
  ...
  <devices>
    ...
    <memballoon model='virtio'>
    </memballoon>
  </devices>
</domain>
And here is the config dumped from libvirt:

Code:
<domain type='kvm'>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <memtune>
    <min_guarantee unit='KiB'>1048576</min_guarantee>
  </memtune>
  <vcpu placement='static'>2</vcpu>
  ...
  <devices>
    ...
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>


And this is /proc/meminfo from the guest:

Code:
MemTotal:         902620 kB
MemFree:          783208 kB
Buffers:           17892 kB
Cached:             8712 kB
SwapCached:            0 kB
Active:            37712 kB
Inactive:          14492 kB
Active(anon):      25716 kB
Inactive(anon):      264 kB
Active(file):      11996 kB
Inactive(file):    14228 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                24 kB
Writeback:             0 kB
AnonPages:         25608 kB
Mapped:             3028 kB
Shmem:               380 kB
Slab:              29360 kB
SReclaimable:      20096 kB
SUnreclaim:         9264 kB
KernelStack:         720 kB
PageTables:         2960 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      451308 kB
Committed_AS:      72476 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       14340 kB
VmallocChunk:   34359715068 kB
HardwareCorrupted:     0 kB
AnonHugePages:      2048 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       45048 kB
DirectMap2M:     4149248 kB
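One thing I notice in the meminfo above: MemTotal is ~881 MiB, i.e. roughly the 1 GiB currentMemory minus kernel reservations, not the 4 GiB maximum. If I understand the driver right (my assumption: on 3.12+ kernels the balloon adjusts the managed page count, so inflated pages come out of MemTotal), that means the balloon did inflate at boot and is holding the difference; it just never deflates on its own under pressure:

```shell
# The balloon should be holding the difference between memory and
# currentMemory (both KiB in the libvirt XML):
echo $((4194304 - 1048576))   # 3145728 KiB (3 GiB) held by the balloon
```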
I appreciate any advice.