LXC

Installation

Ubuntu

apt install lxc lxc-templates
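
With the templates installed, a first container can be created and started roughly like this (the name debian01 and the debian template are only examples; the available templates live in /usr/share/lxc/templates):

lxc-create -n debian01 -t debian
lxc-start -d -n debian01
lxc-ls --fancy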

Directories

  • Containers: /var/lib/lxc
  • Cache for system installations: /var/cache/lxc

Configuration

Nested containers

To be able to install and run LXC inside a container, the container's configuration must contain the following setting:

lxc.mount.auto = cgroup
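
The configuration is only read when the container starts, so after adding the line the container has to be restarted, roughly like this (CONTAINER is a placeholder for the container's name):

lxc-stop -n CONTAINER
lxc-start -d -n CONTAINER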

Mount file

In the container's config file, add or modify the line specifying "lxc.mount":

lxc.mount = /var/lib/lxc/container/fstab

The fstab file might look like this:

/host/dir/	/var/lib/lxc/container/rootfs/dir/in/container        none   bind,create=dir
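
A concrete (hypothetical) example: to make /srv/backups on the host available as /backups inside a container named web01, the entry could look like this:

/srv/backups    /var/lib/lxc/web01/rootfs/backups    none    bind,create=dir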

Common problems

Debian container on Ubuntu up to 14.10

Directly after creation, the container cannot be started; the following error message is displayed:

Failed to mount cgroup at /sys/fs/cgroup/systemd: Permission denied

Solution: Insert the following line into the container's config file (usually located at /var/lib/lxc/CONTAINER/config):

lxc.aa_profile = unconfined
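
A quick way to apply the fix and retry, assuming the container is named CONTAINER (placeholder):

echo "lxc.aa_profile = unconfined" >> /var/lib/lxc/CONTAINER/config
lxc-start -n CONTAINER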


Container starts without IP address

Most likely, dnsmasq is not listening on the virtual interface (default: lxcbr0).

Solution: Either get dnsmasq running or use another virtual interface. The container's config file contains the setting "lxc.network.link"; virbr0 might be used there instead of lxcbr0.
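
A rough way to check on the host whether dnsmasq is listening at all:

netstat -tulpn | grep dnsmasq

To switch the container to libvirt's virbr0 instead (it only exists if libvirt is installed), change the setting in /var/lib/lxc/CONTAINER/config:

lxc.network.link = virbr0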

lxcbr0 missing entirely after installing LXC

The file /etc/default/lxc-net may be missing. The default contents look like this:

# This file is auto-generated by lxc.postinst if it does not
# exist.  Customizations will not be overridden.
# Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
# containers.  Set to "false" if you'll use virbr0 or another existing
# bridge, or macvlan to your host's NIC.
USE_LXC_BRIDGE="true"

# If you change the LXC_BRIDGE to something other than lxcbr0, then
# you will also need to update your /etc/lxc/default.conf as well as the
# configuration (/var/lib/lxc/<container>/config) for any containers
# already created using the default config to reflect the new bridge
# name.
# If you have the dnsmasq daemon installed, you'll also have to update
# /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
# Uncomment the next line if you'd like to use a conf-file for the lxcbr0
# dnsmasq.  For instance, you can use 'dhcp-host=mail1,10.0.3.100' to have
# container 'mail1' always get ip address 10.0.3.100.
#LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf

# Uncomment the next line if you want lxcbr0's dnsmasq to resolve the .lxc
# domain.  You can then add "server=/lxc/10.0.3.1' (or your actual $LXC_ADDR)
# to your system dnsmasq configuration file (normally /etc/dnsmasq.conf,
# or /etc/NetworkManager/dnsmasq.d/lxc.conf on systems that use NetworkManager).
# Once these changes are made, restart the lxc-net and network-manager services.
# 'container1.lxc' will then resolve on your host.
#LXC_DOMAIN="lxc"
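
After creating or restoring the file, restarting lxc-net should bring the bridge up again; to verify:

service lxc-net restart
ip addr show lxcbr0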

lxcbr0 disappears

There might be a conflict with an existing DNS server. I use bind9 and this helps:

service bind9 stop
service lxc-net restart
service bind9 start

Tips and tricks

Moving containers from one host to another

  • Stop the container: lxc-stop -n containername
  • In /var/lib/lxc, execute tar --numeric-owner -czf containername.tar.gz containername
  • Copy containername.tar.gz from one machine to the other (for example, using scp, rsync, wget or curl, or by mounting an sshfs from the other side)
  • On the other machine, move the file to /var/lib/lxc
  • In /var/lib/lxc, execute tar --numeric-owner -xzf containername.tar.gz
    • Optional: remove the tar file by executing rm containername.tar.gz
  • Start the container using lxc-start -d -n containername
  • Enter the container using lxc-attach -n containername -- bash
  • Verify that all services are running (for example, using netstat -tulpn, or by examining the output of systemctl status (when using systemd) or ps axf)
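
Put together, a move could look roughly like this (containername and otherhost are placeholders, and scp is only one of the possible transfer methods):

# on the source host
lxc-stop -n containername
cd /var/lib/lxc
tar --numeric-owner -czf containername.tar.gz containername
scp containername.tar.gz root@otherhost:/var/lib/lxc/

# on the target host
cd /var/lib/lxc
tar --numeric-owner -xzf containername.tar.gz
rm containername.tar.gz
lxc-start -d -n containername
lxc-attach -n containername -- bash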

Troubleshooting

The following should not be necessary if "--numeric-owner" has been supplied to the tar commands above.

Example: You had mysql installed, but after moving, the service refuses to start.

In the original container, ps axf | grep mysql might print a line like this:

 1531 ?        Sl     0:40  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306

Execute that command in the copied container. It might print something like this:

171112 17:50:01 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
171112 17:50:01 [Note] /usr/sbin/mysqld (mysqld 5.5.57-0+deb7u1) starting as process 2140 ...
171112 17:50:01 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
171112 17:50:01 [Note] Plugin 'FEDERATED' is disabled.
/usr/sbin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
171112 17:50:01 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
171112 17:50:01 InnoDB: The InnoDB memory heap is disabled
171112 17:50:01 InnoDB: Mutexes and rw_locks use GCC atomic builtins
171112 17:50:01 InnoDB: Compressed tables use zlib 1.2.7
171112 17:50:01 InnoDB: Using Linux native AIO
171112 17:50:01 InnoDB: Initializing buffer pool, size = 128.0M
171112 17:50:01 InnoDB: Completed initialization of buffer pool
171112 17:50:01  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.

If you execute ls -lh ./var/lib/mysql/mysql/plugin.frm, output similar to the following might appear:

-rw-rw---- 1 106 110 8,4K Aug 14 09:28 ./var/lib/mysql/mysql/plugin.frm

  • What's the problem? At the position that should show the owning user and group, you see "106" and "110" instead of names.
  • How could that happen? The tar command was unable to correctly resolve the owner information. This is a common pitfall on Linux and other Unix-like systems: the user "mysql" can have a different uid on every system (as id -u mysql would show). Furthermore, the system that ran the tar command (the LXC host) did not have a user "mysql" at all.

By "cd /", go to the file system's root directory and execute these commands:

find -uid 106 -exec chown mysql {} \;
find -gid 110 -exec chgrp mysql {} \;

If the "ls" command above printed different uid/gid values, adjust the numbers accordingly.
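
To double-check the mapping before and after the fix (uid 106, gid 110 and the mysql user are taken from the example above):

# which uid/gid does "mysql" actually have in this container?
id -u mysql
id -g mysql

# the ownership should now read mysql mysql instead of 106 110
ls -lh /var/lib/mysql/mysql/plugin.frm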

Links