2.14. Performing Container-specific Operations

This section describes operations specific to containers.

2.14.1. Reinstalling Containers

Reinstalling a container may help if required container files have been inadvertently modified, replaced, or deleted, resulting in container malfunction. You can reinstall a container with the prlctl reinstall command, which creates a new container private area from scratch according to the container's configuration file and the relevant OS and application templates. For example:

# prlctl reinstall MyCT

To keep the personal data from the old container, the utility also copies the old private area contents to the /vz/root/<UUID>/old directory of the new private area (unless the --skipbackup option is given). You can delete this directory after you have copied the personal data you need from it.

The prlctl reinstall command retains the user credentials database, unless the --resetpwdb option is specified.
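
For example, a reinstallation that discards the old private area entirely and resets the user credentials database could combine both options mentioned above:

```
# prlctl reinstall MyCT --skipbackup --resetpwdb
```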

2.14.1.1. Customizing Container Reinstallation

The default reinstallation, as performed by the prlctl reinstall command, creates a new private area for the broken container as if it were created by the prlctl create command, and copies the private area of the broken container to the /old directory in the new private area so that no file is lost. You can also delete the old private area altogether, without copying or mounting it inside the new private area, by using the --skipbackup option. In certain cases, this way of reinstalling corrupted containers may not match your needs exactly. This happens when you are accustomed to creating new containers in some way other than just running the prlctl create command: for example, you may install additional software licenses into new containers. In this case, you would naturally want to perform reinstallation in such a way that the broken container is reverted to its original state as determined by you, and not by the default behavior of the prlctl create command.

To customize reinstallation, write your own scripts that determine what should be done with the container when it is being reinstalled, and what should be configured inside the container after it has been reinstalled. These scripts must be named vps.reinstall and vps.configure, respectively, and must be located in the /etc/vz/conf directory on the hardware node. To facilitate the task of creating customized scripts, the containers software is shipped with sample scripts that you can use as the basis of your own.

When the prlctl reinstall <UUID> command is called, it searches for the vps.reinstall and vps.configure scripts and launches them consecutively. When the vps.reinstall script is launched, the following parameters are passed to it:

Option            Description
--veid            Container UUID.
--ve_private_tmp  The path to the container's temporary private area. This path designates where a new private area is temporarily created for the container. If the script runs successfully, this private area is mounted to the path of the original private area after the script has finished.
--ve_private      The path to the container's original private area.

You may use these parameters within your vps.reinstall script.
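
As an illustrative sketch (not the shipped sample script; the actual setup steps are site-specific), a custom vps.reinstall script might parse these parameters as follows:

```shell
#!/bin/bash
# Sketch of the argument handling for a custom vps.reinstall script.
# prlctl reinstall passes --veid, --ve_private_tmp, and --ve_private.
parse_reinstall_args() {
    VEID="" VE_PRIVATE_TMP="" VE_PRIVATE=""
    while [ "$#" -gt 0 ]; do
        case "$1" in
            --veid)            VEID="$2"; shift 2 ;;
            --ve_private_tmp)  VE_PRIVATE_TMP="$2"; shift 2 ;;
            --ve_private)      VE_PRIVATE="$2"; shift 2 ;;
            *)                 shift ;;
        esac
    done
}

parse_reinstall_args "$@"
# Site-specific setup would go here, operating on "$VE_PRIVATE_TMP".
# Exiting with code 128 instead of 0 tells the utility to fall back
# to the default reinstallation behavior.
```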

If the vps.reinstall script finishes successfully, the container is started and the vps.configure script is called. At this moment, the old private area is mounted to the /old directory inside the new one, irrespective of the --skipbackup option. This is done to let you use the necessary files from the old private area in your script, which runs inside the container. For example, you may want to copy some files from there to regular container directories.

After the vps.configure script finishes, the old private area is either unmounted and deleted, or remains mounted, depending on whether the --skipbackup option was provided.

If you do not want to run these reinstallation scripts and want to stick to the default prlctl reinstall behavior, you may do either of the following:

  • Remove the vps.reinstall and vps.configure scripts from the /etc/vz/conf directory, or at least rename them;
  • Modify the last line of the vps.reinstall script so that it would read exit 128 instead of exit 0.

The exit code 128 tells the utility not to run the scripts and to reinstall the container with the default behavior.

2.14.2. Enabling VPN for Containers

Virtual Private Network (VPN) is a technology that allows you to establish a secure network connection even over an insecure public network. You can set up a VPN for a separate container via the TUN/TAP device. To allow a particular container to use this device, do the following:

  1. Make sure the tun kernel module is already loaded before Virtuozzo is started:

    # lsmod | grep 'tun'
    
  2. Allow the container to use the TUN/TAP device:

    # vzctl set MyCT --devnodes net/tun:rw --save
    

Configuring the VPN properly is a common Linux administration task that is beyond the scope of this guide. Popular Linux software for setting up a VPN over the TUN/TAP driver includes Virtual TUNnel (VTun) and OpenVPN.
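
To verify from inside the container that the device is accessible after the steps above, you can list the device node, which is created by the tun module:

```
# prlctl exec MyCT ls -l /dev/net/tun
```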

2.14.3. Setting Up NFS Server in Containers

To set up an NFS server in a container, do the following:

  1. Make sure the rpcbind, nfsd, and nfslock services are installed in the container.

  2. Enable the NFS server feature for the container by running the prlctl set --features nfsd:on command on the hardware node. For example:

    # prlctl set MyCT --features nfsd:on
    

    If the container is running, stop it before enabling the feature, and then start it again.

    Note

    You cannot live-migrate containers with the NFS server feature enabled or create snapshots of them.

  3. Start the rpcbind service in the container.

    # service rpcbind start
    Starting rpcbind:                                          [  OK  ]
    
  4. Start the nfs and nfslock services in the container.

    # service nfs start
    Starting NFS services:                                     [  OK  ]
    Starting NFS quotas:                                       [  OK  ]
    Starting NFS mountd:                                       [  OK  ]
    Starting NFS daemon:                                       [  OK  ]
    # service nfslock start
    Starting NFS statd:                                        [  OK  ]
    

You can now set up NFS shares in the configured container.
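
For example, to export a directory from the container, you could add an entry to the container's /etc/exports and re-export the shares (the path and network below are placeholders):

```
# echo '/srv/share 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
# exportfs -ra
```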

2.14.4. Mounting NFS Shares on Container Start

If you have configured an NFS share in the /etc/fstab file of a CentOS or RHEL-based container and need this share to be mounted on container start, enable autostart for the netfs service with the chkconfig netfs on command.
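
A typical setup inside the container might look like this (the server name, export path, and mount point are placeholders):

```
# echo 'nfsserver.example.com:/export/data /mnt/data nfs defaults 0 0' >> /etc/fstab
# chkconfig netfs on
```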

2.14.5. Managing Container Virtual Disks

Note

You can manage virtual disks of both stopped and running containers.

Virtuozzo allows you to perform the following operations on container virtual disks:

  • add new virtual disks to containers,
  • configure the virtual disk properties,
  • remove virtual disks from containers.

2.14.5.1. Adding Virtual Disks to Containers

New containers are created with a single virtual hard disk, but you can add more disks as follows:

  • attach a new or existing image file that emulates a hard disk drive, or
  • attach a physical hard disk of the host server.

2.14.5.1.1. Using Image Files

You can either attach an existing image to the container or create a new one and keep it at a custom location, e.g., on a regular disk or in a Virtuozzo Storage cluster. This allows creating more flexible containers, in which the operating system may be kept on a fast SSD and user data may be stored on redundant Virtuozzo Storage.

To create a new image file and add it to a container as a virtual hard disk, use the prlctl set --device-add hdd command. For example:

# prlctl set MyCT --device-add hdd --size 100G --mnt /userdisk

Note

If you omit the --mnt option, the disk will be added unmounted.

This command adds to the configuration of the container MyCT a virtual hard disk with the following parameters:

  • name: hdd<N> where <N> is the next available disk index,
  • default image location: /vz/private/<CT_UUID>/harddisk<N>.hdd where <N> is the next available disk index,
  • size: 102400 MB,
  • mount point inside the container MyCT: /userdisk. A corresponding entry is also added to container’s /etc/fstab file.

To attach an existing image file to a container as a virtual hard disk, specify the path to the image file with the --image option. For example:

# prlctl set MyCT --device-add hdd --image /hdd/MyCT.hdd --size 100G --mnt /userdisk

2.14.5.1.2. Attaching Physical Hard Disks

You can attach to a container any physical block device available on the physical server, whether it is a local hard disk or an external device connected via Fibre Channel or iSCSI.

Note

A physical block device must be formatted and contain a single file system before it can be attached to a container.
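
Before attaching a device, you can check its partitions and file systems, for example with lsblk (the device name is a placeholder):

```
# lsblk -f /dev/sdb
```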

You will need to specify the path to the device, which you can find out with the prlsrvctl info command. For example:

# prlsrvctl info
...
Hardware info:
     hdd  WDC WD1002FAEX-0 ATA (/dev/sda2)         '/dev/disk/by-id/lvm-pv-uuid-RDYrbU-YZsH-uS8w-aH0t-EH9W-6dir-ea9lDL'
     hdd  WDC WD1002FAEX-0 ATA (/dev/sda)          '/dev/disk/by-id/wwn-0x50014ee25a3df4dc'
   cdrom  PIONEER DVD-RW  DVR-220                  '/dev/sr0'
     net  eth0                                     'eth0'
  serial  /dev/ttyS0                               '/dev/ttyS0'
...

Once you know the path to the physical block device, you can attach it to a container with the prlctl set --device-add hdd --device command. For example:

# prlctl set MyCT --device-add hdd --device '/dev/disk/by-id/wwn-0x50014ee25a3df4dc' --mnt /userdisk

Note

If you omit the --mnt option, the disk will be added unmounted.

This command adds to the configuration of the container MyCT a virtual hard disk with the following parameters:

  • name: hdd<N> where <N> is the next available disk index,
  • path to the device: /dev/disk/by-id/wwn-0x50014ee25a3df4dc where wwn-0x50014ee25a3df4dc is a storage device unique identifier,
  • mount point inside the container: /userdisk. A corresponding entry is also added to container’s /etc/fstab file.

Note

  1. Before migrating containers with external hard drives, make sure that the corresponding physical disks exist on the destination server and are available under the same names (for this purpose, use persistent naming, for example, via /dev/disk/by-id/).
  2. During container backup operations, physical disks connected to the container are not backed up.
  3. If you use multipathing between a system and a device, it is recommended to set user_friendly_names no so that multipath device names are persistent across all nodes in a cluster.
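
For reference, user-friendly names are disabled in the defaults section of /etc/multipath.conf (a minimal sketch):

```
defaults {
    user_friendly_names no
}
```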

2.14.5.2. Configuring Container Virtual Disks

To configure the parameters of a virtual disk attached to a container, use the prlctl set --device-set command.

You will need to specify the disk name, which you can find out with the prlctl list -i command. For example:

# prlctl list -i MyCT | grep "hdd"
hdd0 (+) scsi:0 image='/vz/private/9fd3eee7-70fe-43e3-9295-1ab29fe6dba5/root.hdd' type='expanded' 10240Mb mnt=/ subtype=virtio-scsi
hdd1 (+) scsi:1 real='/dev/disk/by-id/wwn-0x50014ee25a3df4dc' mnt=/userdisk subtype=virtio-scsi

Once you know the virtual device name, you can configure its properties. For example, to change the type of the virtual disk hdd0 in the container MyCT from SCSI to IDE, execute:

# prlctl set MyCT --device-set hdd0 --iface ide

To check that the virtual disk type has been changed, use the prlctl list -i command. For example:

# prlctl list -i MyCT | grep "hdd0"
hdd0 (+) ide:0 image='/vz/private/9fd3eee7-70fe-43e3-9295-1ab29fe6dba5/root.hdd' type='expanded' 10240Mb mnt=/

2.14.5.3. Deleting Virtual Disks from Containers

You can delete a virtual hard disk from a container with the prlctl set --device-del command.

You will need to specify the disk name, which you can find out with the prlctl list -i command. For example:

# prlctl list -i MyCT | grep "hdd"
hdd0 (+) scsi:0 image='/vz/private/9fd3eee7-70fe-43e3-9295-1ab29fe6dba5/root.hdd' type='expanded' 10240Mb mnt=/ subtype=virtio-scsi
hdd1 (+) scsi:1 real='/dev/disk/by-id/wwn-0x50014ee25a3df4dc' mnt=/userdisk subtype=virtio-scsi

Once you know the virtual device name, you can remove it from your container. For example, to remove the virtual disk hdd1 from the container MyCT, execute:

# prlctl set MyCT --device-del hdd1

2.14.6. Restarting Containers

You can restart containers from the inside using typical Linux commands, e.g., reboot or shutdown -r. Restarting is handled by the vzeventd daemon.

If necessary, you can forbid restarting containers from the inside as follows:

  • To disable restarting for a specific container, add the ALLOWREBOOT="no" line to the container configuration file (/etc/vz/conf/<UUID>.conf).
  • To disable restarting globally for all containers on the server, add the ALLOWREBOOT="no" line to the global configuration file (/etc/vz/vz.conf).
  • To disable restarting globally except for specific containers, add the ALLOWREBOOT="no" line to the global configuration file (/etc/vz/vz.conf) and explicitly specify ALLOWREBOOT="yes" in the configuration files of the respective containers.
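
For example, to disable restarting from the inside for a single container, you could append the line to its configuration file (the UUID is a placeholder):

```
# echo 'ALLOWREBOOT="no"' >> /etc/vz/conf/<UUID>.conf
```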

2.14.7. Creating SimFS-based Containers

In Virtuozzo 7, the simfs layout is based on bind mounts. When a simfs-based container is started, its private area is bind-mounted to the container root area.

To create a simfs container:

  1. Set VEFSTYPE=simfs in /etc/vz/vz.conf.
  2. Run prlctl create <CT_name>.
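
The two steps above might look as follows, assuming a VEFSTYPE line already exists in /etc/vz/vz.conf:

```
# sed -i 's/^VEFSTYPE=.*/VEFSTYPE=simfs/' /etc/vz/vz.conf
# prlctl create MyCT
```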

The limitations of simfs in Virtuozzo 7 are:

  1. No support for first- or second-level quotas.
  2. No support for live migration of simfs-based containers.