Installing and running LXC
In this article by Konstantin Ivanov, the author of the book Containerization with LXC, we will see how to install and run LXC. LXC takes advantage of kernel namespaces and cgroups to create the process isolation we commonly refer to as containers. As such, LXC is not a separate software component in the Linux kernel, but rather a set of userspace tools, the liblxc library, and various language bindings.
Konstantin Ivanov
January 2017

In this article, we are going to cover the following topics:
- Installing LXC on Ubuntu
- Building and starting containers using the provided templates and configuration files
- Showcasing various LXC operations
Installing LXC
At the time of writing there are two long-term support versions of LXC: 1.0 and 2.0. The userspace tools they provide have some minor differences in command-line flags and deprecations that I'll point out as we use them.

Installing LXC on Ubuntu with apt
Let's start by installing LXC 1.0 on Ubuntu 14.04 Trusty:

- Install the main LXC package, tooling and dependencies:
root@ubuntu:~# lsb_release -dc
Description:    Ubuntu 14.04.5 LTS
Codename:       trusty
root@ubuntu:~# apt-get install -y lxc bridge-utils debootstrap libcap-dev cgroup-bin libpam-systemd
root@ubuntu:~#
- The package version that Trusty provides at this time is 1.0.8:
root@ubuntu:~# dpkg --list | grep lxc | awk '{print $2,$3}'
liblxc1 1.0.8-0ubuntu0.3
lxc 1.0.8-0ubuntu0.3
lxc-templates 1.0.8-0ubuntu0.3
python3-lxc 1.0.8-0ubuntu0.3
root@ubuntu:~#
To install LXC 2.0 on Trusty we'll use the trusty-backports repository:

- Add the following two lines to the apt sources file:
root@ubuntu:~# vim /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu trusty-backports main restricted universe multiverse
- Resynchronize the package index files from their sources:
root@ubuntu:~# apt-get update
- Install the LXC 2.0 packages, tooling and dependencies, pinning the backports versions:
root@ubuntu:~# apt-get install -y lxc=2.0.3-0ubuntu1~ubuntu14.04.1 lxc1=2.0.3-0ubuntu1~ubuntu14.04.1 liblxc1=2.0.3-0ubuntu1~ubuntu14.04.1 python3-lxc=2.0.3-0ubuntu1~ubuntu14.04.1 cgroup-lite=1.11~ubuntu14.04.2 lxc-templates=2.0.3-0ubuntu1~ubuntu14.04.1 bridge-utils
root@ubuntu:~#
- Ensure the package versions are on the 2.x branch, in this case 2.0.3:
root@ubuntu:~# dpkg --list | grep lxc | awk '{print $2,$3}'
liblxc1 2.0.3-0ubuntu1~ubuntu14.04.1
lxc 2.0.3-0ubuntu1~ubuntu14.04.1
lxc-common 2.0.3-0ubuntu1~ubuntu14.04.1
lxc-templates 2.0.3-0ubuntu1~ubuntu14.04.1
lxc1 2.0.3-0ubuntu1~ubuntu14.04.1
lxcfs 2.0.2-0ubuntu1~ubuntu14.04.1
python3-lxc 2.0.3-0ubuntu1~ubuntu14.04.1
root@ubuntu:~#
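With the packages in place, it is worth confirming that the running kernel exposes the namespace and cgroup features LXC depends on. The lxc package ships the lxc-checkconfig utility for this; it prints the kernel's namespace and cgroup capabilities, each of which should report as enabled before you build containers:

root@ubuntu:~# lxc-checkconfig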
LXC directory installation layout
The following table shows the directory layout of LXC that is created after package and source installation. The directories vary depending on the distribution and installation method.

| Ubuntu package | CentOS package | Source installation | Description |
| --- | --- | --- | --- |
| /usr/share/lxc | /usr/share/lxc | /usr/local/share/lxc | LXC base directory |
| /usr/share/lxc/config | /usr/share/lxc/config | /usr/local/share/lxc/config | Collection of distribution-based LXC configuration files |
| /usr/share/lxc/templates | /usr/share/lxc/templates | /usr/local/share/lxc/templates | Collection of container template scripts |
| /usr/bin | /usr/bin | /usr/local/bin | Location of most LXC binaries |
| /usr/lib/x86_64-linux-gnu | /usr/lib64 | /usr/local/lib | Location of the liblxc libraries |
| /etc/lxc | /etc/lxc | /usr/local/etc/lxc | Location of the default LXC configuration files |
| /var/lib/lxc/ | /var/lib/lxc/ | /usr/local/var/lib/lxc/ | Location of the root filesystem and configuration of created containers |
| /var/log/lxc | /var/log/lxc | /usr/local/var/log/lxc | LXC log files |

We will explore most of these directories while building, starting, and terminating LXC containers.
Building and manipulating LXC containers
Managing the container life cycle with the provided userspace tools is quite convenient compared to manually creating namespaces and applying resource limits with cgroups. In essence, this is exactly what the LXC tools do: they create and manipulate the namespaces and cgroups through calls to the liblxc API.

LXC comes packaged with various templates for building root filesystems for different Linux distributions. We can use them to create a variety of container flavors.
Building our first container
We can create our first container by using a template. The lxc-download file, like the rest of the templates in the templates directory, is a script written in bash:

root@ubuntu:~# ls -la /usr/share/lxc/templates/
drwxr-xr-x 2 root root 4096 Aug 29 20:03 .
drwxr-xr-x 6 root root 4096 Aug 29 19:58 ..
-rwxr-xr-x 1 root root 10557 Nov 18 2015 lxc-alpine
-rwxr-xr-x 1 root root 13534 Nov 18 2015 lxc-altlinux
-rwxr-xr-x 1 root root 10556 Nov 18 2015 lxc-archlinux
-rwxr-xr-x 1 root root 9878 Nov 18 2015 lxc-busybox
-rwxr-xr-x 1 root root 29149 Nov 18 2015 lxc-centos
-rwxr-xr-x 1 root root 10486 Nov 18 2015 lxc-cirros
-rwxr-xr-x 1 root root 17354 Nov 18 2015 lxc-debian
-rwxr-xr-x 1 root root 17757 Nov 18 2015 lxc-download
-rwxr-xr-x 1 root root 49319 Nov 18 2015 lxc-fedora
-rwxr-xr-x 1 root root 28253 Nov 18 2015 lxc-gentoo
-rwxr-xr-x 1 root root 13962 Nov 18 2015 lxc-openmandriva
-rwxr-xr-x 1 root root 14046 Nov 18 2015 lxc-opensuse
-rwxr-xr-x 1 root root 35540 Nov 18 2015 lxc-oracle
-rwxr-xr-x 1 root root 11868 Nov 18 2015 lxc-plamo
-rwxr-xr-x 1 root root 6851 Nov 18 2015 lxc-sshd
-rwxr-xr-x 1 root root 23494 Nov 18 2015 lxc-ubuntu
-rwxr-xr-x 1 root root 11349 Nov 18 2015 lxc-ubuntu-cloud
root@ubuntu:~#
If you examine the scripts closely you'll notice that most of them create chroot environments, where packages and various configuration files are then installed to create the root filesystem for the selected distribution.

Let's start by building a container using the lxc-download template, which will ask for the distribution, release, and architecture, then use the appropriate template to create the filesystem and configuration for us:
root@ubuntu:~# lxc-create -t download -n c1
Setting up the GPG keyring
Downloading the image index
---
DIST RELEASE ARCH VARIANT BUILD
---
centos 6 amd64 default 20160831_02:16
centos 6 i386 default 20160831_02:16
centos 7 amd64 default 20160831_02:16
debian jessie amd64 default 20160830_22:42
debian jessie arm64 default 20160824_22:42
debian jessie armel default 20160830_22:42
...
ubuntu trusty amd64 default 20160831_03:49
ubuntu trusty arm64 default 20160831_07:50
ubuntu yakkety s390x default 20160831_03:49
---
Distribution: ubuntu
Release: trusty
Architecture: amd64
Unpacking the rootfs
---
You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)
To enable sshd, run: apt-get install openssh-server
For security reason, container images ship without user accounts
and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.
root@ubuntu:~#
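The same template can also run non-interactively by passing the answers after the -- separator, which is handy for scripting. A hedged sketch, assuming the image index still offers the trusty/amd64 build shown above:

root@ubuntu:~# lxc-create -t download -n c1 -- --dist ubuntu --release trusty --arch amd64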
Let's list all containers:

root@ubuntu:~# lxc-ls -f
NAME STATE IPV4 IPV6 AUTOSTART
----------------------------------------------------
c1 STOPPED - - NO
root@ubuntu:~#
Depending on the version of LXC some of the command options might be different; read the man page for each of the tools if you encounter errors.

Our container is currently not running; let's start it in the background and increase the log level to DEBUG:
root@ubuntu:~# lxc-start -n c1 -d -l DEBUG
On some distributions LXC does not create the host bridge when building the first container, which results in an error. If this happens you can create it by running: brctl addbr virbr0
root@ubuntu:~# lxc-ls -f
NAME STATE IPV4 IPV6 AUTOSTART
----------------------------------------------------------
c1 RUNNING 10.0.3.190 - NO
root@ubuntu:~#
To obtain more information about the container run:

root@ubuntu:~# lxc-info -n c1
Name: c1
State: RUNNING
PID: 29364
IP: 10.0.3.190
CPU use: 1.46 seconds
BlkIO use: 112.00 KiB
Memory use: 6.34 MiB
KMem use: 0 bytes
Link: vethVRD8T2
TX bytes: 4.28 KiB
RX bytes: 4.43 KiB
Total bytes: 8.70 KiB
root@ubuntu:~#
The new container is now connected to the host bridge lxcbr0:

root@ubuntu:~# brctl show
bridge name bridge id STP enabled interfaces
lxcbr0 8000.fea50feb48ac no vethVRD8T2
root@ubuntu:~# ip a s lxcbr0
4: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether fe:a5:0f:eb:48:ac brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
valid_lft forever preferred_lft forever
inet6 fe80::465:64ff:fe49:5fb5/64 scope link
valid_lft forever preferred_lft forever
root@ubuntu:~# ip a s vethVRD8T2
8: vethVRD8T2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master lxcbr0 state UP group default qlen 1000
link/ether fe:a5:0f:eb:48:ac brd ff:ff:ff:ff:ff:ff
inet6 fe80::fca5:fff:feeb:48ac/64 scope link
valid_lft forever preferred_lft forever
root@ubuntu:~#
By using the download template and not specifying any network settings, the container obtains its IP address from a dnsmasq server that runs on a private network, 10.0.3.0/24 in this case. The host allows the container to connect to the rest of the network and the Internet by using NAT rules in iptables:

root@ubuntu:~# iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 10.0.3.0/24 !10.0.3.0/24
root@ubuntu:~#
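On Ubuntu, the bridge name, the private subnet, and the dnsmasq DHCP range shown above are all driven by /etc/default/lxc-net. The following sketch shows the stock defaults that match this output; your file may differ:

USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"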
Other containers connected to the bridge will have access to each other and to the host, as long as they are all connected to the same bridge and are not tagged with different VLAN IDs.

Let's see what the process tree looks like after starting the container:
root@ubuntu:~# ps axfww
…
1552 ? S 0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhcp-leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
29356 ? Ss 0:00 lxc-start -n c1 -d -l DEBUG
29364 ? Ss 0:00 \_ /sbin/init
29588 ? S 0:00 \_ upstart-udev-bridge --daemon
29597 ? Ss 0:00 \_ /lib/systemd/systemd-udevd --daemon
29667 ? Ssl 0:00 \_ rsyslogd
29688 ? S 0:00 \_ upstart-file-bridge --daemon
29690 ? S 0:00 \_ upstart-socket-bridge --daemon
29705 ? Ss 0:00 \_ dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
29775 pts/6 Ss+ 0:00 \_ /sbin/getty -8 38400 tty4
29777 pts/1 Ss+ 0:00 \_ /sbin/getty -8 38400 tty2
29778 pts/5 Ss+ 0:00 \_ /sbin/getty -8 38400 tty3
29787 ? Ss 0:00 \_ cron
29827 pts/7 Ss+ 0:00 \_ /sbin/getty -8 38400 console
29829 pts/0 Ss+ 0:00 \_ /sbin/getty -8 38400 tty1
root@ubuntu:~#
Notice the new init child process that was cloned from the lxc-start command. This is PID 1 in the actual container.

Next, let's attach to the container, list all processes and network interfaces, and check connectivity:
root@ubuntu:~# lxc-attach -n c1
root@c1:~# ps axfw
PID TTY STAT TIME COMMAND
1 ? Ss 0:00 /sbin/init
176 ? S 0:00 upstart-udev-bridge --daemon
185 ? Ss 0:00 /lib/systemd/systemd-udevd --daemon
255 ? Ssl 0:00 rsyslogd
276 ? S 0:00 upstart-file-bridge --daemon
278 ? S 0:00 upstart-socket-bridge --daemon
293 ? Ss 0:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
363 lxc/tty4 Ss+ 0:00 /sbin/getty -8 38400 tty4
365 lxc/tty2 Ss+ 0:00 /sbin/getty -8 38400 tty2
366 lxc/tty3 Ss+ 0:00 /sbin/getty -8 38400 tty3
375 ? Ss 0:00 cron
415 lxc/console Ss+ 0:00 /sbin/getty -8 38400 console
417 lxc/tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1
458 ? S 0:00 /bin/bash
468 ? R+ 0:00 ps ax
root@c1:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:16:3e:b2:34:8a brd ff:ff:ff:ff:ff:ff
inet 10.0.3.190/24 brd 10.0.3.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::216:3eff:feb2:348a/64 scope link
valid_lft forever preferred_lft forever
root@c1:~# ping -c 3 google.com
PING google.com (216.58.192.238) 56(84) bytes of data.
64 bytes from ord30s26-in-f14.1e100.net (216.58.192.238): icmp_seq=1 ttl=52 time=1.77 ms
64 bytes from ord30s26-in-f14.1e100.net (216.58.192.238): icmp_seq=2 ttl=52 time=1.58 ms
64 bytes from ord30s26-in-f14.1e100.net (216.58.192.238): icmp_seq=3 ttl=52 time=1.75 ms
--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.584/1.705/1.779/0.092 ms
root@c1:~# exit
exit
root@ubuntu:~#
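The isolation we just observed comes from kernel namespaces. As a hedged aside, you can compare the host's and the container's namespaces directly from the host; substitute the container init PID reported by lxc-info (29364 in the earlier output):

root@ubuntu:~# readlink /proc/1/ns/uts      # the host's UTS namespace
root@ubuntu:~# readlink /proc/29364/ns/uts  # the container's UTS namespace

Each command prints an identifier such as uts:[inode]; different inode numbers mean different namespaces. The same comparison works for the net, pid, ipc, and mnt links under /proc/<PID>/ns/.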
Notice how the hostname changed on the terminal once we attached to the container. This is an example of how LXC uses the UTS namespaces.

On some distributions like CentOS, or if installed from source, the dnsmasq server is not configured and started by default. You can either install and configure it manually, or configure the container with a static IP address and a default gateway instead, as I demonstrate later in this article.
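For reference, such a static network setup in the container's configuration file might look like the following sketch, using the legacy lxc.network.* keys of LXC 1.x/2.0; the addresses here are chosen purely for illustration:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.ipv4 = 10.0.3.50/24
lxc.network.ipv4.gateway = 10.0.3.1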
Let's examine the directory that was created after building the c1 container:
root@ubuntu:~# ls -la /var/lib/lxc/c1/
total 16
drwxrwx--- 3 root root 4096 Aug 31 20:40 .
drwx------ 3 root root 4096 Aug 31 21:01 ..
-rw-r--r-- 1 root root 516 Aug 31 20:40 config
drwxr-xr-x 21 root root 4096 Aug 31 21:00 rootfs
root@ubuntu:~#
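The config file holds the container's settings. As a rough, hedged sketch, a config generated by the download template on this version of LXC contains entries along the following lines; the include path, MAC address, and exact keys will vary with your installation:

lxc.include = /usr/share/lxc/config/ubuntu.common.conf
lxc.arch = x86_64
lxc.rootfs = /var/lib/lxc/c1/rootfs
lxc.utsname = c1
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx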
The rootfs directory looks like a regular Linux filesystem. You can manipulate the container directly by making changes to the files there, or by using chroot.

To demonstrate this, let's change the root password of the c1 container not by attaching to it, but by using chroot into rootfs:
root@ubuntu:~# cd /var/lib/lxc/c1/
root@ubuntu:/var/lib/lxc/c1# chroot rootfs
root@ubuntu:/# ls -al
total 84
drwxr-xr-x 21 root root 4096 Aug 31 21:00 .
drwxr-xr-x 21 root root 4096 Aug 31 21:00 ..
drwxr-xr-x 2 root root 4096 Aug 29 07:33 bin
drwxr-xr-x 2 root root 4096 Apr 10 2014 boot
drwxr-xr-x 4 root root 4096 Aug 31 21:00 dev
drwxr-xr-x 68 root root 4096 Aug 31 22:12 etc
drwxr-xr-x 3 root root 4096 Aug 29 07:33 home
drwxr-xr-x 12 root root 4096 Aug 29 07:33 lib
drwxr-xr-x 2 root root 4096 Aug 29 07:32 lib64
drwxr-xr-x 2 root root 4096 Aug 29 07:31 media
drwxr-xr-x 2 root root 4096 Apr 10 2014 mnt
drwxr-xr-x 2 root root 4096 Aug 29 07:31 opt
drwxr-xr-x 2 root root 4096 Apr 10 2014 proc
drwx------ 2 root root 4096 Aug 31 22:12 root
drwxr-xr-x 8 root root 4096 Aug 31 20:54 run
drwxr-xr-x 2 root root 4096 Aug 29 07:33 sbin
drwxr-xr-x 2 root root 4096 Aug 29 07:31 srv
drwxr-xr-x 2 root root 4096 Mar 13 2014 sys
drwxrwxrwt 2 root root 4096 Aug 31 22:12 tmp
drwxr-xr-x 10 root root 4096 Aug 29 07:31 usr
drwxr-xr-x 11 root root 4096 Aug 29 07:31 var
root@ubuntu:/# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
root@ubuntu:/# exit
exit
root@ubuntu:/var/lib/lxc/c1#
Notice how the path changed on the console when we used chroot and after exiting the jailed environment.

To test the root password, let's install an SSH server in the container by first attaching to it and then using ssh to connect:
root@ubuntu:~# lxc-attach -n c1
root@c1:~# apt-get update && apt-get install -y openssh-server
root@c1:~# sed -i 's/without-password/yes/g' /etc/ssh/sshd_config
root@c1:~# service ssh restart
root@c1:/# exit
exit
root@ubuntu:/var/lib/lxc/c1# ssh 10.0.3.190
root@10.0.3.190's password:
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-91-generic x86_64)
* Documentation: https://help.ubuntu.com/
Last login: Wed Aug 31 22:25:39 2016 from 10.0.3.1
root@c1:~# exit
logout
Connection to 10.0.3.190 closed.
root@ubuntu:/var/lib/lxc/c1#
We were able to ssh to the container and use the root password that was manually set earlier.

Autostarting LXC containers
By default LXC containers do not start after a server reboot. To change that, we can use the lxc-autostart tool and the container's configuration file.

To demonstrate this, let's create a new container first:
root@ubuntu:~# lxc-create --name autostart_container --template ubuntu
root@ubuntu:~# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
autostart_container STOPPED 0 - - -
root@ubuntu:~#
Next, add the lxc.start.auto stanza to its config file:

root@ubuntu:~# echo "lxc.start.auto = 1" >> /var/lib/lxc/autostart_container/config
root@ubuntu:~#
List all containers that are configured to start automatically:

root@ubuntu:~# lxc-autostart --list
autostart_container
root@ubuntu:~#
Now we can use the lxc-autostart command again to start all containers configured to autostart, in this case just one:

root@ubuntu:~# lxc-autostart --all
root@ubuntu:~# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
autostart_container RUNNING 1 - 10.0.3.98 -
root@ubuntu:~#
Two other useful autostart configuration parameters are adding a delay to the start and defining a group in which multiple containers can start as a single unit. Stop the container and add the following two configuration options:

root@ubuntu:~# lxc-stop --name autostart_container
root@ubuntu:~# echo "lxc.start.delay = 5" >> /var/lib/lxc/autostart_container/config
root@ubuntu:~# echo "lxc.group = high_priority" >> /var/lib/lxc/autostart_container/config
root@ubuntu:~#
Next, let's list the containers configured to autostart again:

root@ubuntu:~# lxc-autostart --list
root@ubuntu:~#
Notice that no containers showed in the preceding output. This is because our container now belongs to an autostart group. Let's specify it:

root@ubuntu:~# lxc-autostart --list --group high_priority
autostart_container 5
root@ubuntu:~#
Similarly, to start all containers belonging to a given autostart group:

root@ubuntu:~# lxc-autostart --group high_priority
root@ubuntu:~# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
autostart_container RUNNING 1 high_priority 10.0.3.98 -
root@ubuntu:~#
In order for lxc-autostart to automatically start containers after a server reboot, it first needs to be invoked at boot itself. This can be achieved by either adding it to crontab or creating an init script; see the crontab sketch after the cleanup below.

Finally, in order to clean up, run:
root@ubuntu:~# lxc-destroy --name autostart_container
Destroyed container autostart_container
root@ubuntu:~# lxc-ls -f
root@ubuntu:~#
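As a minimal sketch of the crontab approach mentioned above, a single @reboot entry in root's crontab (edited with crontab -e) is enough; the sleep is an assumption, added to give the lxcbr0 bridge a moment to come up first:

@reboot sleep 10 && /usr/bin/lxc-autostart --all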
LXC container hooks
LXC provides a convenient way to execute programs during the life cycle of containers. The following table summarizes the various configuration options available to allow for this feature:

| Option | Description |
| --- | --- |
| lxc.hook.pre-start | A hook to be run in the host's namespace before the container ttys, consoles, or mounts are loaded. |
| lxc.hook.pre-mount | A hook to be run in the container's filesystem namespace, but before the rootfs has been set up. |
| lxc.hook.mount | A hook to be run in the container after mounting has been done, but before the pivot_root. |
| lxc.hook.autodev | A hook to be run in the container after mounting has been done and after any mount hooks have run, but before the pivot_root. |
| lxc.hook.start | A hook to be run in the container right before executing the container's init. |
| lxc.hook.stop | A hook to be run in the host's namespace, with references to the container's namespaces, after the container has been shut down. |
| lxc.hook.post-stop | A hook to be run in the host's namespace after the container has been shut down. |
| lxc.hook.clone | A hook to be run when the container is cloned. |
| lxc.hook.destroy | A hook to be run when the container is destroyed. |
To demonstrate this, let's create a new container and write a simple script that will output the values of four LXC variables to a file during container start. First, create the container and add the lxc.hook.pre-start option to its configuration file:
root@ubuntu:~# lxc-create --name hooks_container --template ubuntu
root@ubuntu:~# echo "lxc.hook.pre-start = /var/lib/lxc/hooks_container/pre_start.sh" >> /var/lib/lxc/hooks_container/config
root@ubuntu:~#
Next, create a simple bash script and make it executable:

root@ubuntu:~# cat /var/lib/lxc/hooks_container/pre_start.sh
#!/bin/bash
LOG_FILE=/tmp/container.log
echo "Container name: $LXC_NAME" | tee -a $LOG_FILE
echo "Container mounted rootfs: $LXC_ROOTFS_MOUNT" | tee -a $LOG_FILE
echo "Container config file $LXC_CONFIG_FILE" | tee -a $LOG_FILE
echo "Container rootfs: $LXC_ROOTFS_PATH" | tee -a $LOG_FILE
root@ubuntu:~#
root@ubuntu:~# chmod u+x /var/lib/lxc/hooks_container/pre_start.sh
root@ubuntu:~#
Start the container and check the contents of the file that the bash script should have written to, ensuring the script got triggered:

root@ubuntu:~# lxc-start --name hooks_container
root@ubuntu:~# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
hooks_container RUNNING 0 - 10.0.3.237 -
root@ubuntu:~# cat /tmp/container.log
Container name: hooks_container
Container mounted rootfs: /usr/lib/x86_64-linux-gnu/lxc
Container config file: /var/lib/lxc/hooks_container/config
Container rootfs: /var/lib/lxc/hooks_container/rootfs
root@ubuntu:~#
From the preceding output we can see that the script got triggered when we started the container and that the values of the LXC variables were written to the temp file.

Attaching directories from the host OS and exploring the running filesystem of a container
The root filesystem of LXC containers is visible from the host OS as a regular directory tree. We can directly manipulate files in a running container by just making changes in that directory. LXC also allows for attaching directories from the host OS inside the container using a bind mount. A bind mount is a different view of the directory tree, achieved by replicating the existing tree under a different mount point.

To demonstrate this, let's create a new container, a directory, and a file on the host:
root@ubuntu:~# mkdir /tmp/export_to_container
root@ubuntu:~# hostname -f > /tmp/export_to_container/file
root@ubuntu:~# lxc-create --name mount_container --template ubuntu
root@ubuntu:~#
Next, we are going to use the lxc.mount.entry option in the configuration file of the container, telling LXC what directory to bind mount from the host and the mount point inside the container to bind to:

root@ubuntu:~# echo "lxc.mount.entry = /tmp/export_to_container/ /var/lib/lxc/mount_container/rootfs/mnt none ro,bind 0 0" >> /var/lib/lxc/mount_container/config
root@ubuntu:~#
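The entry follows the familiar fstab format: source, target, filesystem type, options, and the dump and pass fields. As a hedged alternative, LXC also accepts a target path relative to the container's rootfs, which keeps the entry valid if the rootfs is ever relocated. An equivalent form of the entry above would be:

lxc.mount.entry = /tmp/export_to_container mnt none ro,bind 0 0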
Once the container is started we can see that /mnt inside of it now contains the file that we created in the /tmp/export_to_container directory on the host OS earlier:

root@ubuntu:~# lxc-start --name mount_container
root@ubuntu:~# lxc-attach --name mount_container
root@mount_container:~# cat /mnt/file
ubuntu
root@mount_container:~# exit
exit
root@ubuntu:~#
When an LXC container is in a running state, some files are only visible from /proc on the host OS. To examine the running directory of a container, first grab its PID:

root@ubuntu:~# lxc-info --name mount_container
Name: mount_container
State: RUNNING
PID: 8594
IP: 10.0.3.237
CPU use: 1.96 seconds
BlkIO use: 212.00 KiB
Memory use: 8.50 MiB
KMem use: 0 bytes
Link: vethBXR2HO
TX bytes: 4.74 KiB
RX bytes: 4.73 KiB
Total bytes: 9.46 KiB
root@ubuntu:~#
With the PID in hand we can examine the running directory of the container:

root@ubuntu:~# ls -la /proc/8594/root/run/
total 44
drwxr-xr-x 10 root root 420 Sep 14 23:28 .
drwxr-xr-x 21 root root 4096 Sep 14 23:28 ..
-rw-r--r-- 1 root root 4 Sep 14 23:28 container_type
-rw-r--r-- 1 root root 5 Sep 14 23:28 crond.pid
---------- 1 root root 0 Sep 14 23:28 crond.reboot
-rw-r--r-- 1 root root 5 Sep 14 23:28 dhclient.eth0.pid
drwxrwxrwt 2 root root 40 Sep 14 23:28 lock
-rw-r--r-- 1 root root 112 Sep 14 23:28 motd.dynamic
drwxr-xr-x 3 root root 180 Sep 14 23:28 network
drwxr-xr-x 3 root root 100 Sep 14 23:28 resolvconf
-rw-r--r-- 1 root root 5 Sep 14 23:28 rsyslogd.pid
drwxr-xr-x 2 root root 40 Sep 14 23:28 sendsigs.omit.d
drwxrwxrwt 2 root root 40 Sep 14 23:28 shm
drwxr-xr-x 2 root root 40 Sep 14 23:28 sshd
-rw-r--r-- 1 root root 5 Sep 14 23:28 sshd.pid
drwxr-xr-x 2 root root 80 Sep 14 23:28 udev
-rw-r--r-- 1 root root 5 Sep 14 23:28 upstart-file-bridge.pid
-rw-r--r-- 1 root root 4 Sep 14 23:28 upstart-socket-bridge.pid
-rw-r--r-- 1 root root 5 Sep 14 23:28 upstart-udev-bridge.pid
drwxr-xr-x 2 root root 40 Sep 14 23:28 user
-rw-rw-r-- 1 root utmp 2688 Sep 14 23:28 utmp
root@ubuntu:~#
Make sure you replace the PID with the output of lxc-info from your host, as it will differ from the above example.

In order to make persistent changes in the root filesystem of a container, modify the files in /var/lib/lxc/mount_container/rootfs/ instead.
Freezing a running container
LXC takes advantage of the freezer cgroup to freeze all the processes running inside of a container. The processes will be in a blocked state until thawed. Freezing a container can be useful in cases where the system load is high and you want to free some resources without actually stopping the container, preserving its running state.

Ensure you have a running container and check its state from the freezer cgroup:
root@ubuntu:~# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
hooks_container RUNNING 0 - 10.0.3.237 -
root@ubuntu:~# cat /sys/fs/cgroup/freezer/lxc/hooks_container/freezer.state
THAWED
root@ubuntu:~#
Notice how a currently running container shows as thawed. Let's freeze it:

root@ubuntu:~# lxc-freeze -n hooks_container
root@ubuntu:~# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
hooks_container FROZEN 0 - 10.0.3.237 -
root@ubuntu:~#
The container state now shows as frozen; let's check the cgroup file:

root@ubuntu:~# cat /sys/fs/cgroup/freezer/lxc/hooks_container/freezer.state
FROZEN
root@ubuntu:~#
To unfreeze it, run:

root@ubuntu:~# lxc-unfreeze --name hooks_container
root@ubuntu:~# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
hooks_container RUNNING 0 - 10.0.3.237 -
root@ubuntu:~# cat /sys/fs/cgroup/freezer/lxc/hooks_container/freezer.state
THAWED
root@ubuntu:~#
We can monitor the state change by running the lxc-monitor command on a separate console while freezing and unfreezing a container. The change of the container's state will show as the following:

root@ubuntu:~# lxc-monitor --name hooks_container
'hooks_container' changed state to [FREEZING]
'hooks_container' changed state to [FROZEN]
'hooks_container' changed state to [THAWED]
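For scripting, lxc-wait is a natural companion to lxc-monitor: it blocks until the container reaches a given state. A small hedged sketch, freezing and thawing the same container while waiting for each transition to complete:

root@ubuntu:~# lxc-freeze -n hooks_container && lxc-wait -n hooks_container -s FROZEN
root@ubuntu:~# lxc-unfreeze -n hooks_container && lxc-wait -n hooks_container -s RUNNING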
Limiting container resource usage
Just as with the rest of the container operations, LXC comes with tools for limiting resource usage that are straightforward and easy to use.

Let's start by setting the available memory of a container to 512 MB:
root@ubuntu:~# lxc-cgroup -n hooks_container memory.limit_in_bytes 536870912
root@ubuntu:~#
We can verify that the new setting has been applied by directly inspecting the memory cgroup for the container:

root@ubuntu:~# cat /sys/fs/cgroup/memory/lxc/hooks_container/memory.limit_in_bytes
536870912
root@ubuntu:~#
Changing the value only requires running the same command again. Let's change the available memory to 256 MB and inspect the container by attaching to it and running the free utility:

root@ubuntu:~# lxc-cgroup -n hooks_container memory.limit_in_bytes 268435456
root@ubuntu:~# cat /sys/fs/cgroup/memory/lxc/hooks_container/memory.limit_in_bytes
268435456
root@ubuntu:~# lxc-attach --name hooks_container
root@hooks_container:~# free -m
total used free shared buffers cached
Mem: 256 63 192 0 0 54
-/+ buffers/cache: 9 246
Swap: 0 0 0
root@hooks_container:~# exit
root@ubuntu:~#
As the preceding output shows, the container only sees 256 MB of total available memory.

Similarly, we can pin a CPU core to a container. In the next example our test server has two cores. Let's allow the container to only run on core 0:
root@ubuntu:~# cat /proc/cpuinfo | grep processor
processor : 0
processor : 1
root@ubuntu:~#
root@ubuntu:~# lxc-cgroup -n hooks_container cpuset.cpus 0
root@ubuntu:~# cat /sys/fs/cgroup/cpuset/lxc/hooks_container/cpuset.cpus
0
root@ubuntu:~# lxc-attach --name hooks_container
root@hooks_container:~# cat /proc/cpuinfo | grep processor
processor : 0
root@hooks_container:~# exit
exit
root@ubuntu:~#
By attaching to the container and checking the available CPUs we see that only one is presented, as expected.

To make changes persist across server reboots, we need to add them to the configuration file of the container:
root@ubuntu:~# echo "lxc.cgroup.memory.limit_in_bytes = 536870912" >> /var/lib/lxc/hooks_container/config
root@ubuntu:~#
Setting various other cgroup parameters is done in a similar way. For example, let's set the CPU shares and the block IO weight on a container:

root@ubuntu:~# lxc-cgroup -n hooks_container cpu.shares 512
root@ubuntu:~# lxc-cgroup -n hooks_container blkio.weight 500
root@ubuntu:~# lxc-cgroup -n hooks_container blkio.weight
500
root@ubuntu:~#
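The same lxc.cgroup prefix we used earlier for the memory limit persists any controller setting. A hedged sketch recording the CPU pinning, CPU shares, and block IO weight from above in the container's configuration file:

root@ubuntu:~# echo "lxc.cgroup.cpuset.cpus = 0" >> /var/lib/lxc/hooks_container/config
root@ubuntu:~# echo "lxc.cgroup.cpu.shares = 512" >> /var/lib/lxc/hooks_container/config
root@ubuntu:~# echo "lxc.cgroup.blkio.weight = 500" >> /var/lib/lxc/hooks_container/config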