Try this utterly simple Python program - on my machine it eats 8.8 MB of RAM (as reported by System Monitor), roughly a third of what System Monitor reports for Variety here.
I guess I'm old school; anything over 10 megs for a wallpaper changer seems crazy (in truth, anything over 5).
5-10 years ago, 5 MB would have been a reasonable expectation with the modern technologies of that time. Not now. The reason is simple: RAM is cheap, and everyone prefers clean, easily maintainable high-level code over highly optimized low-level code, so every tier in the application stack (VM, UI toolkit, application code, etc.) consumes more RAM than it did several years ago. Things add up, and now an application that displays one empty window consumes about 9 MB of RAM:
from gi.repository import Gtk
win = Gtk.Window()
win.show_all()
Gtk.main()
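If you want to check a figure like this yourself without System Monitor, one rough, Linux-oriented sketch is to ask the kernel for the process's peak resident set size (note that `ru_maxrss` is in kilobytes on Linux but in bytes on macOS, so treat the number accordingly):

```python
import resource

# Peak resident set size of the current process.
# On Linux this is reported in kilobytes; on macOS, in bytes.
usage = resource.getrusage(resource.RUSAGE_SELF)
print(usage.ru_maxrss, "kB peak RSS (on Linux)")
```

Run it at the end of the GTK snippet above (before the main loop blocks, or from another terminal via `ps -o rss -p <pid>`) to see how much the interpreter plus toolkit already weigh.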
A full-featured wallpaper changer can fit in 5 MB only if it is coded in low-level C, hand-crafted for one particular use-case, with no GUI or with some very light non-standard UI toolkit. That won't be a general solution that people can install and use in a user-friendly manner.
Another experiment: try running the find/shuf command on a big directory tree (several thousand files) several times in a row. Note how much time and HDD activity the first run takes, and then how much faster the subsequent runs are. This is because the OS caches the directory data. You cannot easily measure this memory consumption, since it is not allotted to a specific process, but it is there. There is no escaping it: either you thrash the disk heavily every time, or you cache something and consume RAM. Speed versus memory usage is pretty much a "law of nature" - there is always a need to compromise between them.
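The same experiment can be sketched in Python instead of find/shuf: walk the same tree twice and time each pass. The first pass on a cold page cache hits the disk; the second is usually served from the kernel's cache and runs noticeably faster. (`/usr/share` is just an example of a reasonably large tree; the exact timings will vary by machine.)

```python
import os
import time

def walk_count(root):
    """Count all files under root by walking the whole tree."""
    count = 0
    for _dirpath, _dirnames, filenames in os.walk(root):
        count += len(filenames)
    return count

# Two identical passes; the second typically benefits from the OS page cache.
for attempt in (1, 2):
    start = time.monotonic()
    total = walk_count("/usr/share")
    print("run %d: %d files in %.2f s" % (attempt, total, time.monotonic() - start))
```

To see the full cold-cache effect on Linux you can drop the caches between runs (as root: `echo 3 > /proc/sys/vm/drop_caches`), which makes the second run slow again - the "free" speed was the RAM the kernel spent on caching.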