
Thread: Lucid and OpenVZ and/or LXC ?

  1. #1
    Join Date
    Apr 2007
    Beans
    173

    Lucid and OpenVZ and/or LXC ?

    Hi there,

    I'm well into using OpenVZ on Hardy 8.04. I'm starting to think about the next LTS upgrade that will be Lucid 10.04.

    I just downloaded Alpha2 and tried OpenVZ: it isn't there! So I've been searching and it would seem that LXC (Linux Containers) is going to be in Lucid. I'd never heard of LXC before today.

    Does anyone know the full story? Will OpenVZ be in Lucid? Even if it is, should one plan on migrating to LXC? What will be the pain involved in that (given the investment in configured and running Hardy based OpenVZ VEs)

    As a day 1 plan, I had intended a host upgrade to Lucid leaving the many Hardy based OpenVZ VEs in place and I have (naively) hoped they would just continue working.

    Any thoughts/comments appreciated...

  2. #2
    Join Date
    Jun 2008
    Location
    Tampico,Mexico
    Beans
    1,395
    Distro
    Ubuntu 9.10 Karmic Koala

    Re: Lucid and OpenVZ and/or LXC ?

    I saw your post and googled LXC. On a first read, it looks like this is for apps on the native system. I am still reading.

  3. #3

    Re: Lucid and OpenVZ and/or LXC ?

    I've talked with one Ubuntu-OpenVZ team member, and he told me:

    • The plan for Lucid, as agreed with Canonical at UDS, is to have LXC in Ubuntu main and use it as a replacement for OpenVZ.
    • LXC stands for Linux Containers; it's a similar technology, but provided by the vanilla kernel instead of a heavily patched kernel like OpenVZ's.
    • Kernel support has been included in mainline since 2.6.26, so it has been around for a while. Userspace is still in development but works correctly and is integrated with libvirt.
    • An OpenVZ container itself should work as-is with LXC; only the configuration will need to be converted.
    • LXC doesn't need the VT (vmx) processor instruction set.
    • You can assign memory quotas for both RAM and swap, and do memory overcommit without any issue.
    • Disk-space quotas aren't implemented yet, so you'd have to use regular quotas or use LVM for storage.
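
    The memory quotas mentioned above are set through cgroup keys in the container's config file. A sketch, with the container name and the limits purely illustrative:

```
# illustrative lxc.conf fragment: cap the container at 512 MB of RAM
# and 1 GB of RAM + swap combined (memsw = memory plus swap)
lxc.utsname = guest1
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.memory.memsw.limit_in_bytes = 1G
```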


    Some references for using LXC:
    Narcis Garcia

  4. #4
    Join Date
    Apr 2007
    Beans
    173

    Re: Lucid and OpenVZ and/or LXC ?

    Thanks for the replies. So one question: if I want to continue using OpenVZ on Lucid, will I be able to?

    I'd like the choice, and the ability to migrate over in my own timeframe.

    I'm running 2.6.24-26 (Hardy) so I don't have the option of migrating in advance.

  5. #5

    Re: Lucid and OpenVZ and/or LXC ?

    With any Ubuntu version (newer or older than 8.04) you can recompile the kernel to use OpenVZ:
    http://wiki.openvz.org/Compiling_the...he_Debian_way)

    With Ubuntu GNU/Linux 8.04 we had an already compiled+patched kernel for OpenVZ, but you can do the same by yourself.
    Narcis Garcia

  6. #6
    Join Date
    Apr 2006
    Location
    Montana
    Beans
    Hidden!
    Distro
    Kubuntu Development Release

    Re: Lucid and OpenVZ and/or LXC ?

    Quote Originally Posted by starfry View Post
    Thanks for the replies. So one question: If I want to continue using OpenVZ on Lucid, will I be able to ?
    Almost certainly not.

    I use OpenVZ and would advise you migrate to Proxmox.

    The problem with OpenVZ is that it is a kernel patch, and there is no patch for more recent kernels.

    The "solution" is LXC. I have just started with LXC, and I will tell you:

    1. It is not up to par with openvz yet. You will need to be willing to do a lot of reading to migrate.

    2. LXC still has bugs. I was unable to boot a Fedora-12 container I built for example.

    3. Much of LXC is not there yet. The containers are NOT as isolated as they should be, and networking is manual: in order to get networking I had to manually add a bridge (no big deal, but still ...), manually add an IP to the container, and manually add a route.

    4. Documentation for LXC is sketchy at best; it seems scattered and incomplete. For example, I cannot find a document that explains the config files and options in any detail; most of what I have found uses only the minimal options.

    5. Did I mention LXC containers are not completely isolated from the host? Isolation seems better in a standard chroot than in LXC. One has access to the host file system; I accessed the host swap with no problem, and one can start a process on the host from the container, all without "cracking" anything. Simply running an init or upstart script, or starting a server in the container, can start a process on the host.

    6. I have had problems getting the user space tool set working, I need to look into this problem more.

    7. Although LXC is reported to work with libvirt, I could find no documentation in my brief search and libvirt seemed to have no clue about LXC.

    Now these problems may well be because I do not yet understand LXC, but IMO it is not yet up to par with OpenVZ.

    Moving forward, it appears that once they work out the bugs and get libvirt working with LXC, it will almost certainly replace OpenVZ.

    On the flip side, LXC is part of the mainstream kernel, so many more developers are working on LXC and forward compatibility will be much easier.

    You can start with LXC very easily, simply

    sudo apt-get install lxc

    This installs the tools / scripts. LXC is already enabled in the kernel, so installing the lxc package does not otherwise affect the host much.

    I suggest playing with lxc in a VM or on a dedicated test host; it certainly is not ready for deployment in a production environment.
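
    For experimenting, a minimal first session might look something like this. This is only a sketch: the container name "test1" and the config contents are illustrative, and the lxc-* commands need the lxc package and root, so they are left commented.

```shell
#!/bin/sh
# Write a minimal container config file; "test1" is an arbitrary name.
cat > /tmp/test1.conf <<'EOF'
lxc.utsname = test1
lxc.tty = 4
EOF
# With the lxc tools installed, the container lifecycle would then be:
# sudo lxc-create -n test1 -f /tmp/test1.conf
# sudo lxc-start  -n test1
# sudo lxc-stop   -n test1 && sudo lxc-destroy -n test1
```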

    You may wish to look at some of these links:

    http://openvz.org/pipermail/devel/20...er/014314.html

    http://forum.openvz.org/index.php?t=msg&goto=38118&

    http://openvz.org/pipermail/users/20...ry/003189.html
    Last edited by bodhi.zazen; January 22nd, 2010 at 07:15 PM.
    There are two mistakes one can make along the road to truth...not going all the way, and not starting.
    --Prince Gautama Siddharta

    #ubuntuforums web interface

  7. #7
    Join Date
    Apr 2007
    Beans
    173

    Re: Lucid and OpenVZ and/or LXC ?

    This is a real shame. I've grown to really like OpenVZ. Proxmox isn't really an option for me here because my OpenVZ VE's are running on a desktop workstation box rather than a dedicated server box. OpenVZ has given me what I need and I am mostly happy with it.

    I am not afraid of recompiling kernels and I have done this many times. I've never done it on any Ubuntu box however because I wasn't sure how it would affect the ability to run other packaged software that depends on a specific kernel package.

    For ease of life I may have to go LXC if the Lucid team don't see sense and include OpenVZ in the next LTS release.

    I'm going to take your suggestion and evaluate LXC in a virtual machine.

  8. #8
    Join Date
    Oct 2007
    Beans
    34

    Re: Lucid and OpenVZ and/or LXC ?

    unfortunately, most of bodhi.zazen's post is simply not true.

    i do not use ubuntu anymore, and haven't for some time, but the platform i have created should apply to ubuntu with some minor alterations. i work with arch linux, and within a week will be posting an extensive writeup on LXC w/BTRFS, along with a git url to a full implementation of my platform. all you need to do is download it, add 2 lines to your rc.local, and that's it.**[1]

    i will try to remember to post back here after i write it, else my name on arch linux forums is extofme.

    i am using LXC in conjunction with BTRFS (awesome together) in a production environment, and can say that it is more than sufficient as a replacement for any chroot/openvz implementation. it is very flexible in the types of container it can construct, and with some conscious effort, secure as well. i use openvz on my other server and will be moving all its hosts to my LXC server, so i can make that server run LXC also.


    1. It is not up to par with openvz yet. You will need to be willing to do a lot of reading to migrate.
    it can do everything openvz can, along with freeze/unfreeze, and in the future checkpoint/restart. a little reading never hurt anyone either. openvz is deprecated, and lxc is the result of openvz devs and others getting sick of maintaining massive out-of-tree patches. openvz is likely to fall out of most distributions; as you've seen it's not in ubuntu, and it has been dropped from arch linux as well.

    2. LXC still has bugs. I was unable to boot a Fedora-12 container I built for example.
    the userspace tools are still under development, but are more than sufficient and stable for use. kernel features were fully implemented in 2.6.29. your fedora problem is probably due to udev not working in a container and not having the appropriate device files in /dev, or something along those lines.

    3. Much of LXC is not there yet, the containers are NOT as isolated as they should be and networking is manual, in order to get networking I had to manually add a bridge (no big deal, but still ... ), manually add an IP to the container, and manually add a route.
    see above about fully implemented. when using the lxc.rootfs config option, the pivot_root() syscall (vs. chroot()) is used... that's about as isolated as you can get. containers can share the network stack with the host, or have unlimited veth pairs created, one side in the container and the other added to a bridge on the host. yes, you have to pre-create the bridges. in my system i have a dedicated DHCP bridge, and another raw bridge so containers can have direct access to the lan (external IP) if need be. you don't have to specify an ip; the container init scripts should take care of this.
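
    for what it's worth, the veth-plus-bridge setup described above boils down to a few lines of container config. the bridge name and address here are illustrative, and the bridge itself must already exist on the host:

```
lxc.network.type = veth
lxc.network.link = br0      # pre-created bridge on the host
lxc.network.name = eth0     # interface name seen inside the container
lxc.network.flags = up
# optional; leave it out and let the container's own init (or DHCP) configure it
lxc.network.ipv4 = 192.168.1.50/24
```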

    4. Documentation for LXC is sketchy at best, it seems scattered and incomplete, for example I can not find a document that explains the config files and options in any detail, most of the stuff I have found seems to use the minimal options.
    http://manpages.ubuntu.com/manpages/...xc.conf.5.html

    thats what i use.

    5. Did I mention LXC containers are not completely isolated from the host ? Isolation seems better in a standard chroot then lxc. One has access to the host file system, I accessed the host swap with no problem, and one can start a process on the host from the container, all without "cracking" anything, simply running an init or upstart script or starting a server in the container can start a process on the host.
    see above about pivot_root(). if you don't use the lxc.rootfs config option, the rootfs is shared with the host. anything not specified in the config is shared with the host.
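
    concretely, that means one config line is what separates a private root from sharing the host's. the path below is illustrative:

```
# with this set, the container gets this tree as its root filesystem;
# without it, the container runs on the host's own root
lxc.rootfs = /var/lib/lxc/guest1/rootfs
```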

    6. I have had problems getting the user space tool set working, I need to look into this problem more.
    i use the lxc-* tools in my scripts; they work for me. i have used libvirt successfully as well, but you must mount the cgroup filesystem WITHOUT the ns option. see here for details:

    http://www.mail-archive.com/libvir-l.../msg18689.html

    haven't tried since i sent that message to the list. i moved to the lxc-* tools for the flexibility; libvirt tries to impose too much on me for my liking.

    7. Although LXC is reported to work with libvirt, I could find no documentation in my brief search and libvirt seemed to have no clue about LXC.
    it does work, but you must do what i wrote above. additionally i don't think "virsh console" works right; i only got it working after i wrote a custom /sbin/init in bash (virsh console seems to want to connect your tty to /sbin/init stdin/out/err, whereas lxc-console connects you to a pty that is linked to a tty in the container).

    Now these problems may well be that I do not yet understand LXC, but IMO it is not yet up to par with OpenVZ.
    not as mature, but very capable, and using a vanilla kernel is a must for me at least (using BTRFS)

    i have been working with this technology for several months and am very satisfied with my results. when combined with BTRFS, i can create several working containers in seconds, without consuming any physical space on the hard drive.

    **[1] if you have BTRFS and archlinux as the host of course, else there will be some minor changes needed. pretty much all references to btrfs in my scripts:

    btrfsctl -s ${VPS_DOM}/${dom} ${VPS_TPL}/${tpl}

    could be replaced with:

    cp -R ${VPS_TPL}/${tpl} ${VPS_DOM}/${dom}

    it's just stupidly slower and doesn't share blocks at the FS level like btrfs does. i intend to add this option eventually for those not using a btrfs based root filesystem. also, mkarchroot is the equivalent of debootstrap, and would need to be changed. i intend on adding this kind of support as well, by including static binaries of these kinds of bootstrap programs.
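
    the snapshot-or-copy fallback described above could be sketched like this. everything here is illustrative (the VPS_TPL/VPS_DOM paths, the "base" template, and the "guest1" name are made up for the example):

```shell
#!/bin/sh
# Sketch: clone a container template, preferring an instant btrfs
# snapshot and falling back to a plain (slow, non-block-sharing) copy.
VPS_TPL=${VPS_TPL:-/tmp/vps/templates}
VPS_DOM=${VPS_DOM:-/tmp/vps/domains}
tpl=base
dom=guest1
mkdir -p "${VPS_TPL}/${tpl}" "${VPS_DOM}"
# use btrfsctl only when the tool exists and the snapshot succeeds
if command -v btrfsctl >/dev/null 2>&1 &&
   btrfsctl -s "${VPS_DOM}/${dom}" "${VPS_TPL}/${tpl}" 2>/dev/null; then
    echo "snapshot: ${dom}"
else
    # non-btrfs fallback: a full recursive copy
    cp -R "${VPS_TPL}/${tpl}" "${VPS_DOM}/${dom}"
    echo "copy: ${dom}"
fi
```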

  9. #9
    Join Date
    Apr 2006
    Location
    Montana
    Beans
    Hidden!
    Distro
    Kubuntu Development Release

    Re: Lucid and OpenVZ and/or LXC ?

    I look forward to your write-up then. I have been playing with LXC for the last few days, and I cannot for the life of me get an Ubuntu Lucid container (built with debootstrap) to boot properly.

    You obviously know more about LXC than I do; I have been using the defaults.

    Neither lxc-debian nor lxc-fedora functions properly on Ubuntu 10.04.

    Also, no need for the "attitude". For example, I do not see pivot_root mentioned anywhere in the man page you referenced

    http://manpages.ubuntu.com/manpages/...xc.conf.5.html

    except at the bottom as a link to the pivot_root page.

    Nor is that option listed on any of these pages:

    LXC: Linux container tools
    Linux Containers - ArchWiki
    LXC - openSUSE
    LXC HOWTO

    Would you mind providing a link to documentation of the pivot_root option ?

    So no need to talk down to us "mere mortals" just because you know a few advanced tricks with LXC. Along those lines, what I said in my post is accurate if you use the defaults for LXC. If you like, I can post the exact output from my terminal.

    Here are some bug reports:

    https://bugs.launchpad.net/ubuntu/+s...xc/+bug/512200
    https://bugs.launchpad.net/ubuntu/+s...xc/+bug/471615

    Perhaps you know how to solve those ?

    Using the defaults for LXC, on both Fedora and Ubuntu, it is trivial to break out of the container, easier than breaking out of a traditional chroot jail.

    The only other document I am aware of that discusses security in any detail is here:

    http://www.ibm.com/developerworks/li...ity/index.html

    Again, if you have a better document, please share.

    I agree LXC is up and coming, but the project lacks documentation.

    If you would be so kind as to post documentation of your claims that would be useful to others migrating to LXC.

    It would also help if you were able to post some documentation on how to create a container. Most of the how-tos simply state: use chroot, lxc-{debian,fedora}, or an OpenVZ template, but none of these techniques allows an Ubuntu 10.04 (Lucid) template to boot.
    Last edited by bodhi.zazen; January 28th, 2010 at 12:54 AM.
    There are two mistakes one can make along the road to truth...not going all the way, and not starting.
    --Prince Gautama Siddharta

    #ubuntuforums web interface

  10. #10
    Join Date
    Oct 2007
    Beans
    34

    Re: Lucid and OpenVZ and/or LXC ?

    whoa there big guy, attitude? i thought my post was fairly informative, and light on snippiness... save maybe the first sentence and my comment about the ubuntu manpage. at any rate, i apologize, because i see how it might be interpreted as being more hostile than i meant, as i meant none. i guess i just felt a slight tone/bias in your original post that twanged a chord with me, because i have been spending literally every day for many moons constructing the LXC based system i am using now. so again i apologize if i seemed to have an elitist attitude or something; i didn't intend that at all. i mean only to help, inform, and share.

    anyways, let's get constructive. you're right about the pivot_root() thing not really being mentioned. i believe the lxc-* tools originally used pivot_root(), but it was then switched to chroot() temporarily for some reason, probably due to other problems. it is, however, back to using pivot_root() since this commit:

    http://lxc.git.sourceforge.net/git/g...f853f5392c2827

    but that's jan 08, a little before the release of 0.6.5. i guess i forgot to specify that i use an arch package that builds itself from git, sorry 'bout that. the version in karmic/lucid probably still uses chroot().

    the first one has to do with the fact that there is no config file being defined in lxc-create (the -f option). because of that, the rootfs is being shared with the host, and thus it is trying to use devices/init on the host, NOT in the folder that was just debootstrapped. there was actually a patch, because it's an innocently disastrous problem, disallowing lxc-start without a valid config option for just this reason... it can crash the host. see here:

    http://lxc.git.sourceforge.net/git/g...1d2b7a48889cf3

    there is a bug about the "Device or resource busy" part. i have seen that happen before, but only when i had partial cgroups mounted, some for libvirt and some for lxc, with different subsystems enabled (like what i said about libvirt having a problem with the "ns" mount flag... which is enabled by default). see this thread:

    http://www.mail-archive.com/devel@op.../msg19729.html

    i'm not sure if that's been addressed or not, but it hasn't caused me problems since i moved to the lxc-* tools exclusively. also, if you mount a cgroup using the lxc name, the lxc-* tools will use that cgroup, like this:

    mount -t cgroup lxc /cgroup

    from : http://lxc.sourceforge.net/lxc.html
    The control group can be mounted anywhere, eg: mount -t cgroup cgroup /cgroup. If you want to dedicate a specific cgroup mount point for lxc, that is to have different cgroups mounted at different places with different options but let lxc to use one location, you can bind the mount point with the lxc name, eg: mount -t cgroup lxc /cgroup4lxc or mount -t cgroup -ons,cpuset,freezer,devices lxc /cgroup4lxc
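
    if you want such a dedicated mount to survive reboots, an /etc/fstab entry along these lines should do it (the mount point and option list here just follow the quote above):

```
# /etc/fstab: dedicated cgroup mount that the lxc tools will pick up by name
lxc   /cgroup4lxc   cgroup   ns,cpuset,freezer,devices   0 0
```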
    the second one looks like it could have the same problem, as neither seems to be defining the rootfs via a config file. however, i'm not really sure; it could be because there are several missing kernel features. in my setup i have all namespaces enabled, and all cgroup features, as well as multiple devpts instances (this is HIGHLY recommended. without it, all containers will see each other's ptys, including the host's...). i even mount the host's devpts with the "newinstance" flag to force all instances of devpts to be private. there is a CONFIG kernel option to disable the "real" kernel devpts and force all devpts mounts to be private. i would expect this to become the default soon, as you could use bind mounts to share pty mounts instead. see here for more details; a good read and not too long:

    http://www.mjmwired.net/kernel/Docum...ems/devpts.txt

    as for security, pivot_root() addresses the break-out part; beyond that there are other things that need to be done. inside the lxc config file you specify:

    lxc.cgroup.devices.deny = a

    (this requires the cgroup "devices" subsystem to be enabled) which will prevent the guest from creating any device files whatsoever. you then add things like in "marios" post here:

    http://blog.flameeyes.eu/2009/08/10/...nux-containers

    which explicitly allow certain device files to be created in the container, like random/urandom/etc. i'll admit this is the part i'm working on now, i.e. locking down the containers, so i'm a little lacking in knowledge there.
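
    putting those two pieces together, a deny-then-whitelist fragment might look like this. the major:minor numbers below are the standard linux ones; which nodes a guest actually needs depends on what runs inside it:

```
lxc.cgroup.devices.deny = a             # deny everything first
lxc.cgroup.devices.allow = c 1:3 rwm    # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm    # /dev/zero
lxc.cgroup.devices.allow = c 1:8 rwm    # /dev/random
lxc.cgroup.devices.allow = c 1:9 rwm    # /dev/urandom
lxc.cgroup.devices.allow = c 5:1 rwm    # /dev/console
lxc.cgroup.devices.allow = c 5:2 rwm    # /dev/ptmx
lxc.cgroup.devices.allow = c 136:* rwm  # ptys
```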

    that also reminds me that lxc has no inherent support yet for controlling disk quota. in my setup i use BTRFS subvolumes as rootfs's, so it is trivial for me to place limits on how large they can grow using the BTRFS tools. otherwise you'll probably need to use LVM or something to enforce disk quotas. i think openvz did support this.

    the lxc-* tools' config files can support any cgroup that exists, even ones that haven't been created yet. so you can place quotas on memory, cpu, etc. as you see fit; i'm still working with this also.

    i'm pretty excited to share with everyone what i have learned and all the tools/scripts i've written; they represent a serious amount of work, and they should help anyone looking to explore this technology get a jump start into how it works. you are right about the sparseness of docs, and many tutorials are out of date (including the one @ teegra.net which i started with). i will post a link once i've written it.

