name: OpenVZ presentation
layout: true
class: backg, middle

---
class: center, middle, main-title

# OpenVZ

.footnote[ Rémy Dernat, MBB platform, ISE-M / UMR5554. CNRS ]

---
.left-column[
## Container-like ecosystem
]
.right-column[
- Application-oriented containers: rkt .red[*], Docker
- Full-stack (system) containers: LXC, .blue[OpenVZ]
- Backends: runC, Singularity
- Management: systemd-nspawn/machinectl, Proxmox, LXD, libvirt...
- Libraries: libct, libcontainer
- Kernel features (isolation): namespaces, user namespaces, cgroups
]
.footnote[.red[*] rkt vs. other projects: https://coreos.com/rkt/docs/latest/rkt-vs-other-projects.html ]

---
.left-column[
## OpenVZ: a bit of history
]
.right-column[
- 1999 .red[*]: project started as a piece of code in the Linux kernel
- 2001/2002 .green[*]: Virtuozzo product
- 2005/2006: first OpenVZ release as GPL code; it is the core of Virtuozzo
- 2012 .purple[*]: CRIU (checkpoint/restore) features added
]
.footnote[
.red[*] OpenVZ history: https://openvz.org/History
.green[*] Virtuozzo: https://en.wikipedia.org/wiki/Parallels_(company)#Server_software
.purple[*] CRIU: http://criu.org/Main_Page
]

---
.left-column[
## OpenVZ features
]
.right-column[
- Live migration
- Unlike pure VM technologies, an unused container consumes no host resources
- According to Wikipedia, OpenVZ can run 320 VEs (VZ containers) on a host with 2 GB of RAM
- Containers run as root on the host; OpenVZ is not daemonized (only the event daemon)
- OS templates for the various distros are distributed as tarballs
]

---
.left-column[
## OpenVZ features
]
.right-column[
- Container data lives in /var/lib/vz or /vz, depending on the install
- Configuration in /etc/vz
- Container networking through virtual network devices (veth...) or Ethernet bridges (vmbr...); a sketch follows on the next slide
- File systems mounted with simfs (default) .red[*]
]
.footnote[.red[*] simfs is not a real filesystem; it is used to isolate the container's filesystem and mounts from the host. Other options exist for container storage (zfs, lvm: https://openvz.org/CT_storage_backends) ]
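---
.left-column[
## veth / bridge example
]
.right-column[
A minimal sketch of bridged veth networking for a hypothetical container 101; the bridge name (vmbr0), the host-side interface name and the addresses are placeholders and may differ depending on your setup and vzctl version.

```bash
# Sketch only: CTID 101, bridge vmbr0, addresses are placeholders
vzctl set 101 --netif_add eth0 --save   # veth pair: eth0 inside the CT, veth101.0 on the host
vzctl start 101
brctl addif vmbr0 veth101.0             # attach the host end to an existing bridge
vzctl exec 101 ip addr add 192.168.1.10/24 dev eth0
vzctl exec 101 ip route add default via 192.168.1.254
```
]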
---
.left-column[
## OpenVZ main commands
]
.right-column[
- vz*:
  - vzlist (-a)
  - vzctl enter/destroy/exec/restart/... $CTID
  - vzcfgvalidate, vztop, vzmemcheck
  - vzquota, vzmigrate, vzrestore, vzdump
  - ...
]

---
.left-column[
## vzctl example
]
.right-column[
```bash
vzctl create 101 --ostemplate centos-6-x86_64
vzctl set 101 --hostname toto.my.domain.org --ipadd 192.168.1.1 --userpasswd root:[secret] --save
vzctl start 101
#vzctl mount 101
vzctl exec 101 echo toto
vzctl enter 101
#lsb_release -a
#ls
#exit
#vzctl umount 101
vzctl stop 101
vzctl destroy 101
```
]

---
.left-column[
## Some usage examples
]
.right-column[
- Many containers share the host resources. A container that does nothing takes no resources.
- Useful for lightly used services (e.g. a low-traffic website, a small software forge, etc.)
- OpenVZ is full-stack, so when applications have many dependencies on each other, the whole system can be easier to configure than with application-oriented containers.
]

---
## Usage example from our office

- vzlist

```bash
root@pxmaster:~# vzlist -a
stat(/var/lib/vz/root/101): No such file or directory
      CTID      NPROC STATUS    IP_ADDR         HOSTNAME
       101          - mounted   162.38.181.63   trash.mbb.univ-montp2.fr
       102         28 running   162.38.181.65   smallservices.mbb.univ-montp2.fr
       103          - stopped   162.38.181.222  mbb-bis.mbb.univ-montp2.fr
       104          - stopped   162.38.181.211  mbb-rescue.mbb.univ-montp2.fr
       105         57 running   162.38.181.236  orthomam.mbb.univ-montp2.fr
       106        126 running   162.38.181.151  testKhalid.mbb.univ-montp2.fr
       107        123 running   162.38.181.155  gitlab.mbb.univ-montp2.fr
       108        117 running   162.38.181.156  gitlabpriv.mbb.univ-montp2.fr
       110         95 running   162.38.181.3    dblog.mbb.univ-montp2.fr
       111         34 running   162.38.181.16   webcalc.mbb.univ-montp2.fr
       112         58 running   162.38.181.158  astribot.mbb.univ-montp2.fr
       113         45 running   162.38.181.159  isemsite.mbb.univ-montp2.fr
       114         41 running   162.38.181.31   wgalaxy-rescue.mbb.univ-montp2.fr
       116         46 running   162.38.181.200  arbredelavie.mbb.univ-montp2.fr
```

---
.left-column[
## Some usage examples
]
.right-column[
- One big container that takes almost all of the host resources -> why?
- Management from the host: the admin can dump/destroy the container or monitor the processes running in it
- Can be used for intensive applications/usage. At our office, we built a scheduling process for powerful workstations on top of OpenVZ ((almost) no performance loss within a container).
]

---
.left-column[
## Some issues with OpenVZ
]
.right-column[
- Manual network configuration can be painful (bridge / veth),
- The Linux kernel has to be patched for OpenVZ,
- Usually the container mounts filesystems from the host *via* simfs,
- NFS needs a special feature enabled on the container .red[*],
- Creating devices from inside the container can be a problem for many uses, including mounting some filesystems with FUSE .green[*]
]
.footnote[
.red[*] https://openvz.org/NFS
.green[*] https://openvz.org/FUSE
]

---
.left-column[
## Some issues with OpenVZ
]
.right-column[
- OpenVZ runs as the root user
- This can lead to security issues, or to a kernel panic on the host side, when a badly coded application running in the container overflows memory/buffers.
- A possible explanation of the kernel panic: the OOM (Out of Memory) killer on the host side lowers the priority/urgency of killing the container because it runs as root, so it is not killed in time (this may have changed in the latest kernel release (4.6) .red[*]).
]
.footnote[.red[*] kernel release note: http://kernelnewbies.org/Linux_4.6#head-e876eaf1c8288d0bb28744e5319a833e9f031538 ]

---
## Proxmox

---
name: proxmox cluster home page
background-image: url(proxmox_accueil.png)

---
name: proxmox basic monitor overview
background-image: url(proxmox_monitor_basic.png)

---
.left-column[
## Proxmox < v4
]
.right-column[
- Easy to manage and monitor
- Since v4, OpenVZ has been replaced by LXC. Containers can be migrated easily .red[*]; a sketch of the conversion follows on the next slide.
]
.footnote[.red[*] https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC ]
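---
.left-column[
## OpenVZ -> LXC example
]
.right-column[
A hedged sketch of the conversion path described on the wiki page above, for a hypothetical container 101; paths, the storage name and the exact pct options should be checked against your Proxmox version.

```bash
# On the old Proxmox (< v4) / OpenVZ node: stop and dump the container
vzctl stop 101
vzdump 101 --dumpdir /var/lib/vz/dump   # archive name is typically vzdump-openvz-101-<date>.tar
# Copy the archive to the Proxmox >= 4 node, then restore it as an LXC container
pct restore 101 /var/lib/vz/dump/vzdump-openvz-101-*.tar --storage local
```
]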