LXC

From MK Wiki EN

Revision as of 07:56, 20 October 2018

Directories

  • Container: /var/lib/lxc
  • Cache for system installations: /var/cache/lxc

Nested containers

To enable installation of LXC inside a container, it must have the following setting:

lxc.mount.auto = cgroup
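A minimal sketch of applying this setting, assuming the default config location and a hypothetical container name:

```shell
#!/bin/sh
# enable_nesting CONFIG: append the cgroup automount setting if it is not
# already present, so LXC can be installed and run inside the container.
enable_nesting() {
    grep -q '^lxc.mount.auto' "$1" || echo 'lxc.mount.auto = cgroup' >> "$1"
}

# Usage (path and name are assumptions, adjust to your setup):
# enable_nesting /var/lib/lxc/mycontainer/config
```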

Debian container in Ubuntu up to 14.10

Directly after creation, the container cannot be started; the following error message is displayed:

Failed to mount cgroup at /sys/fs/cgroup/systemd: Permission denied

Solution: Insert a line into the container's config file (usually located in /var/lib/lxc/CONTAINER/config):

lxc.aa_profile = unconfined
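The fix above can be scripted. A minimal sketch, assuming the default config location; the helper name is made up:

```shell
#!/bin/sh
# disable_apparmor CONFIG: add the unconfined AppArmor profile setting to a
# container config unless a profile line is already present.
disable_apparmor() {
    grep -q '^lxc.aa_profile' "$1" || echo 'lxc.aa_profile = unconfined' >> "$1"
}

# Usage (adjust CONTAINER to your container's name):
# disable_apparmor /var/lib/lxc/CONTAINER/config
```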


Container starts without IP address

Probably dnsmasq is not listening on the virtual interface (default: lxcbr0).

Solution: Either get dnsmasq running or use another virtual interface. The container's config file contains a setting "lxc.network.link"; virbr0 might be used instead.
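A sketch of switching a container to another bridge, assuming the legacy lxc.network.link key (newer LXC releases renamed it to lxc.net.0.link); the helper name and paths are illustrative:

```shell
#!/bin/sh
# set_bridge CONFIG BRIDGE: rewrite the lxc.network.link line so the
# container's veth attaches to another host bridge (e.g. virbr0).
set_bridge() {
    sed -i "s/^lxc\.network\.link.*/lxc.network.link = $2/" "$1"
}

# Hypothetical usage:
# set_bridge /var/lib/lxc/mycontainer/config virbr0
```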

lxcbr0 missing entirely after installing LXC

The file /etc/default/lxc-net may be missing. Its default content is:

# This file is auto-generated by lxc.postinst if it does not
# exist.  Customizations will not be overridden.
# Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
# containers.  Set to "false" if you'll use virbr0 or another existing
# bridge, or macvlan to your host's NIC.
USE_LXC_BRIDGE="true"
# If you change the LXC_BRIDGE to something other than lxcbr0, then
# you will also need to update your /etc/lxc/default.conf as well as the
# configuration (/var/lib/lxc/<container>/config) for any containers
# already created using the default config to reflect the new bridge
# name.
# If you have the dnsmasq daemon installed, you'll also have to update
# /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
# Uncomment the next line if you'd like to use a conf-file for the lxcbr0
# dnsmasq.  For instance, you can use 'dhcp-host=mail1,10.0.3.100' to have
# container 'mail1' always get ip address 10.0.3.100.
#LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf
# Uncomment the next line if you want lxcbr0's dnsmasq to resolve the .lxc
# domain.  You can then add "server=/lxc/10.0.3.1" (or your actual $LXC_ADDR)
# to your system dnsmasq configuration file (normally /etc/dnsmasq.conf,
# or /etc/NetworkManager/dnsmasq.d/lxc.conf on systems that use NetworkManager).
# Once these changes are made, restart the lxc-net and network-manager services.
# 'container1.lxc' will then resolve on your host.
#LXC_DOMAIN="lxc"

lxcbr0 disappears

There might be a conflict with an existing DNS server. I use bind9 and this helps:

service bind9 stop
service lxc-net restart
service bind9 start

Moving containers from one host to another

  • Stop the container: lxc-stop -n containername
  • In /var/lib/lxc, execute tar --numeric-owner -czf containername.tar.gz containername
  • Copy containername.tar.gz from one machine to the other (for example, using scp, rsync, wget or curl, or by mounting an sshfs from the other host)
  • On the other machine move the file to /var/lib/lxc
  • In /var/lib/lxc, execute tar --numeric-owner -xzf containername.tar.gz
    • Optional: remove the tar file by executing rm containername.tar.gz
  • Start container using lxc-start -d -n containername
  • Enter container using lxc-attach -n containername -- bash
  • Verify that all services are running (for example, using netstat -tulpn, or by examining the output of systemctl status (when using systemd) or ps axf)
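The pack and unpack steps above can be sketched as two helpers (the names are illustrative; stopping, copying and starting stay as in the list):

```shell
#!/bin/sh
# pack_container LXCDIR NAME: archive a stopped container, keeping numeric
# uids/gids so ownership survives even if the hosts' user databases differ.
pack_container() {
    tar --numeric-owner -czf "$1/$2.tar.gz" -C "$1" "$2"
}

# unpack_container LXCDIR NAME: restore the archive on the target host.
unpack_container() {
    tar --numeric-owner -xzf "$1/$2.tar.gz" -C "$1"
}

# On the source host:   lxc-stop -n containername
#                       pack_container /var/lib/lxc containername
# Copy the archive, then on the target host:
#                       unpack_container /var/lib/lxc containername
#                       lxc-start -d -n containername
```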

Troubleshooting

This should not be necessary if "--numeric-owner" has been supplied to the tar command above.

Example: You had mysql installed, but after moving, the service refuses to start.

In the original container, ps axf | grep mysql might print a line like this:

 1531 ?        Sl     0:40  \_ /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306

Execute that command in the copied container. It might print something like this:

171112 17:50:01 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.
171112 17:50:01 [Note] /usr/sbin/mysqld (mysqld 5.5.57-0+deb7u1) starting as process 2140 ...
171112 17:50:01 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
171112 17:50:01 [Note] Plugin 'FEDERATED' is disabled.
/usr/sbin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
171112 17:50:01 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
171112 17:50:01 InnoDB: The InnoDB memory heap is disabled
171112 17:50:01 InnoDB: Mutexes and rw_locks use GCC atomic builtins
171112 17:50:01 InnoDB: Compressed tables use zlib 1.2.7
171112 17:50:01 InnoDB: Using Linux native AIO
171112 17:50:01 InnoDB: Initializing buffer pool, size = 128.0M
171112 17:50:01 InnoDB: Completed initialization of buffer pool
171112 17:50:01  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'create'.
InnoDB: Cannot continue operation.

If you execute ls -lh ./var/lib/mysql/mysql/plugin.frm, output similar to this might appear:

-rw-rw---- 1 106 110 8,4K Aug 14 09:28 ./var/lib/mysql/mysql/plugin.frm
  • What's the problem? In the positions that should show the owning user and group you see "106" and "110" instead.
  • How could that happen? The tar command was unable to resolve the numeric owner information to names. This is a common pitfall on Linux and other Unix-like systems: the user "mysql" can have a different uid on every system (as id -u mysql would show). Furthermore, the system that ran the tar command (the LXC host) did not have a user "mysql" at all.

Change to the file system's root directory with "cd /" and execute these commands:

find -uid 106 -exec chown mysql {} \;
find -gid 110 -exec chgrp mysql {} \;

If the above "ls" command printed other uid/gid values, replace as needed.
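The same repair, wrapped so the numbers are easy to swap (the helper name is made up; run it inside the copied container, and adjust the ids to what ls showed):

```shell
#!/bin/sh
# remap_ids ROOT OLD_UID OWNER OLD_GID GROUP: hand every file below ROOT that
# still carries the old numeric ids over to the named user and group.
remap_ids() {
    find "$1" -uid "$2" -exec chown "$3" {} \;
    find "$1" -gid "$4" -exec chgrp "$5" {} \;
}

# For the mysql example above (values taken from the ls output):
# remap_ids / 106 mysql 110 mysql
```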

Links