2.14. Performing Container-Specific Operations¶
This section describes operations specific to Virtuozzo Hybrid Server containers.
2.14.1. Reinstalling Containers¶
Reinstalling a container may help if any required container files have been inadvertently modified, replaced, or deleted, resulting in container malfunction. You can reinstall a container with the prlctl reinstall command, which creates a new container private area from scratch according to the container's configuration file and the relevant OS and application templates. For example:
# prlctl reinstall MyCT
To keep the personal data from the old container, the utility also copies the contents of the old private area to the /vz/root/<UUID>/old directory of the new private area (unless the --no-backup option is given). You may delete this directory once you have copied the personal data you need from it.
The prlctl reinstall command retains the user credentials database, unless the --resetpwdb option is specified.
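For example, to reinstall a container without backing up the old private area and with the user credentials database reset, you could combine the two options described above (a sketch):
# prlctl reinstall MyCT --no-backup --resetpwdb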
2.14.1.1. Customizing Container Reinstallation¶
The default reinstallation, as performed by the prlctl reinstall command, creates a new private area for the broken container as if it were created by the prlctl create command, and copies the private area of the broken container to the /old directory in the new private area so that no file is lost. You can also delete the old private area altogether, without copying or mounting it inside the new private area, by means of the --no-backup option. This default way of reinstalling corrupted containers may not match your particular needs if you are accustomed to creating new containers in some way other than just running the prlctl create command. For example, you may install additional software licenses into new containers. In this case, you will likely want reinstallation to revert the broken container to its original state as determined by you, rather than by the default behavior of the prlctl create command.
To customize reinstallation, write your own scripts that determine what is done with the container when it is reinstalled and what is configured inside the container after it has been reinstalled. These scripts must be named vps.reinstall and vps.configure, respectively, and must be located in the /etc/vz/conf directory on the hardware node. To facilitate the task of creating customized scripts, the container software is shipped with sample scripts that you may use as the basis of your own.
When the prlctl reinstall <UUID> command is called, it searches for the vps.reinstall and vps.configure scripts and launches them consecutively. When the vps.reinstall script is launched, the following parameters are passed to it:
Option | Description
---|---
--veid | Container UUID.
--ve_private_tmp | The path to the container temporary private area. This path designates where a new private area is temporarily created for the container. If the script runs successfully, this private area is mounted to the path of the original private area after the script has finished.
--ve_private | The path to the container original private area.
You may use these parameters within your vps.reinstall script.
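For illustration, a minimal custom vps.reinstall script might start by parsing these parameters. This is only a sketch: it assumes the parameters listed in the table above are passed as option/value pairs, and the license-copying step at the end is a hypothetical placeholder for your own customization logic.
#!/bin/bash
# Sketch of a custom vps.reinstall script (assumed parameter passing).
while [ "$#" -gt 0 ]; do
    case "$1" in
        --veid)           VEID="$2"; shift 2 ;;
        --ve_private_tmp) VE_PRIVATE_TMP="$2"; shift 2 ;;
        --ve_private)     VE_PRIVATE="$2"; shift 2 ;;
        *)                shift ;;
    esac
done
# Hypothetical customization: copy an extra license file into the new
# temporary private area before it replaces the original one.
# cp /etc/my_licenses/"$VEID".lic "$VE_PRIVATE_TMP"/licenses/
exit 0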
If the vps.reinstall script finishes successfully, the container is started, and the vps.configure script is called. At this moment, the old private area is mounted to the /old directory inside the new one, irrespective of the --no-backup option. This is done to let you use the necessary files from the old private area in your script, which is run inside the running container. For example, you might want to copy some files from there to regular container directories.
After the vps.configure script finishes, the old private area is either unmounted and deleted or remains mounted, depending on whether the --no-backup option was provided.
If you do not want to run these reinstallation scripts and want to stick to the default prlctl reinstall behavior, do either of the following:
Remove the vps.reinstall and vps.configure scripts from the /etc/vz/conf directory, or at least rename them.
Modify the last line of the vps.reinstall script so that it reads exit 128 instead of exit 0.
The exit code 128 tells the utility not to run the scripts and to reinstall the container with the default behavior.
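For illustration, after this modification the final line of the vps.reinstall script would be (a sketch; the rest of the script stays as shipped):
exit 128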
2.14.2. Repairing Containers¶
Note
The repair mode does not work for virtual machines.
If a container malfunctions, starting it in the repair mode may help you fix it. Do the following:
Start the broken (original) container as follows:
# prlctl start <original_CT> --repair
A temporary container will be created with the same name, parameters, and user accounts as the original container. The temporary container will start, and the original container's root will be mounted to /repair in it.
Note
Virtuozzo PowerPanel may not detect this operation.
Invite the container owner to log in to the temporary container as they would log in to the original one. The container owner can now save the critical data from /repair or try to fix the container, with or without your help.
Important
Warn the container owner never to save critical data to the temporary container: it will be automatically destroyed when stopped.
When the critical data has been saved or the container has been fixed, stop the temporary container:
# prlctl stop <original_CT>
The original container’s root will be unmounted and the temporary container will be destroyed.
If the original container has been fixed, start it so the owner can log in to it as usual. If the container owner saved the critical data and the original container cannot be fixed, reinstall it and invite the owner to upload the saved data back.
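For illustration, a complete repair session for a hypothetical broken container MyCT could look like this (command output omitted):
# prlctl start MyCT --repair
# prlctl stop MyCT
# prlctl start MyCT
Between the first and second commands, the container owner logs in to the temporary container and saves data from /repair or tries to fix the original container.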
2.14.3. Enabling VPN for Containers¶
Virtual Private Network (VPN) is a technology which allows you to establish a secure network connection even over an insecure public network. Setting up a VPN for a separate container is possible via the TUN/TAP device. To allow a particular container to use this device, do the following:
Make sure the tun.o module is already loaded before Virtuozzo Hybrid Server is started:
# lsmod | grep 'tun'
Allow the container to use the TUN/TAP device:
# vzctl set MyCT --devnodes net/tun:rw --save
Configuring the VPN properly is a common Linux administration task, which is out of the scope of this guide. Some popular Linux software for setting up a VPN over the TUN/TAP driver includes Virtual TUNnel and OpenVPN.
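If you want to confirm from the hardware node that the device is now accessible inside the container, one optional check (an assumption, not part of the required procedure) is to list it with prlctl exec:
# prlctl exec MyCT ls -l /dev/net/tun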
2.14.4. Setting Up NFS Server in Containers¶
To set up an NFS server in a container, do the following:
Make sure the rpcbind, nfsd, and nfslock services are installed in the container.
Enable the NFS server feature for the container by running the prlctl set --features nfsd:on command on the hardware node. For example:
# prlctl set MyCT --features nfsd:on
If the container is running, stop it before enabling the feature, then start it again afterwards.
Note
You cannot create snapshots of containers with the enabled NFS server feature.
When performing a live migration of containers with the enabled NFS server feature, the NFS service is stopped before dump and restarted after restore.
Custom services that depend on the NFS server feature (i.e., that stop with NFS but are not restarted when the NFS server is up again) will be stopped after migration.
Start the rpcbind service in the container:
# service rpcbind start
Starting rpcbind:                                          [  OK  ]
Start the nfs and nfslock services in the container:
# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
# service nfslock start
Starting NFS statd:                                        [  OK  ]
You can now set up NFS shares in the configured container.
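For example, to export a hypothetical directory /export from the container MyCT to clients in the 10.0.0.0/24 network (the directory and network here are illustrative placeholders, not part of the procedure above):
# prlctl exec MyCT bash -c "echo '/export 10.0.0.0/24(rw,sync)' >> /etc/exports"
# prlctl exec MyCT exportfs -a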
2.14.6. Managing Container Virtual Disks¶
Note
You can manage virtual disks of both stopped and running containers.
Virtuozzo Hybrid Server allows you to perform the following operations on container virtual disks:
Add new virtual disks to containers.
Configure the virtual disk properties.
Remove virtual disks from containers.
2.14.6.1. Adding Virtual Disks to Containers¶
New containers are created with a single virtual hard disk, but you can add more disks as follows:
Attach a new or existing image file that emulates a hard disk drive.
Attach a physical hard disk of the host server.
2.14.6.1.1. Using Image Files¶
You can either attach an existing image to the container or create a new one and keep it at a custom location, e.g., on a regular disk or in a Virtuozzo Storage cluster. Thus you can create more flexible containers with the operating system on a fast SSD and user data on redundant Virtuozzo Storage.
To create a new image file and add it to a container as a virtual hard disk, use the prlctl set --device-add hdd command. For example:
# prlctl set MyCT --device-add hdd --size 100G --mnt /userdisk
Note
If you omit the --mnt option, the disk will be added unmounted.
This command adds to the configuration of the container MyCT a virtual hard disk with the following parameters:
Name: hdd<N>, where <N> is the next available disk index.
Default image location: /vz/private/<CT_UUID>/harddisk<N>.hdd, where <N> is the next available disk index.
Size: 102400 MB.
Mount point inside the container MyCT: /userdisk. A corresponding entry is also added to the container's /etc/fstab file.
To attach an existing image file to a container as a virtual hard disk, specify the path to the image file with the --image option. For example:
# prlctl set MyCT --device-add hdd --image /hdd/MyCT.hdd --size 100G --mnt /userdisk
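In either case, you can verify that the disk has been added by listing the container's hard disks, as is also done later in this section:
# prlctl list -i MyCT | grep "hdd"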
2.14.6.1.2. Attaching Physical Hard Disks¶
You can attach to a container any physical block device available on the physical server, whether it is a local hard disk or an external device connected via Fibre Channel or iSCSI.
Note
A physical block device must be formatted and have only one file system before it can be attached to a container.
You will need to specify the path to the device, which you can find out with the prlsrvctl info command. For example:
# prlsrvctl info
...
Hardware info:
hdd WDC WD1002FAEX-0 ATA (/dev/sda2) '/dev/disk/by-id/lvm-pv-uuid-RDYrbU-<...>'
hdd WDC WD1002FAEX-0 ATA (/dev/sda) '/dev/disk/by-id/wwn-0x50014ee25a3df4dc'
cdrom PIONEER DVD-RW DVR-220 '/dev/sr0'
net eth0 'eth0'
serial /dev/ttyS0 '/dev/ttyS0'
...
Once you know the path to the physical block device, you can attach it to a container with the prlctl set --device-add hdd --device command. For example:
# prlctl set MyCT --device-add hdd --device '/dev/disk/by-id/wwn-0x50014ee25a3df4dc' \
--mnt /userdisk
Note
If you omit the --mnt option, the disk will be added unmounted.
This command adds to the configuration of the container MyCT a virtual hard disk with the following parameters:
Name: hdd<N>, where <N> is the next available disk index.
Path to the device: /dev/disk/by-id/wwn-0x50014ee25a3df4dc, where wwn-0x50014ee25a3df4dc is the storage device's unique identifier.
Mount point inside the container: /userdisk. A corresponding entry is also added to the container's /etc/fstab file.
Note
Before migrating containers with external hard drives, make sure the corresponding physical disks exist on the destination server and are available under the same names (for this purpose, use persistent naming, for example, via /dev/disk/by-id/).
During container backup operations, physical disks connected to the container are not backed up.
If you use multipath from a system to a device, it is recommended to set user_friendly_names to no, so that multipath device names are persistent across all nodes in a cluster.
2.14.6.2. Configuring Container Virtual Disks¶
To configure the parameters of a virtual disk attached to a container, use the prlctl set --device-set command.
You will need to specify the disk name, which you can find out with the prlctl list -i command. For example:
# prlctl list -i MyCT | grep "hdd"
hdd0 (+) scsi:0 image='/vz/private/9fd3eee7-70fe-43e3-9295-1ab29fe6dba5/root.hdd' type='expanded' 10240Mb mnt=/ subtype=virtio-scsi
hdd1 (+) scsi:1 real='/dev/disk/by-id/wwn-0x50014ee25a3df4dc' mnt=/userdisk subtype=virtio-scsi
Once you know the virtual device name, you can configure its properties. For example, to change the type of the virtual disk hdd0 in the container MyCT from SCSI to IDE, execute:
# prlctl set MyCT --device-set hdd0 --iface ide
To check that the virtual disk type has been changed, use the prlctl list -i command. For example:
# prlctl list -i MyCT | grep "hdd0"
hdd0 (+) ide:0 image='/vz/private/9fd3eee7-70fe-43e3-9295-1ab29fe6dba5/root.hdd' type='expanded' 10240Mb mnt=/
2.14.6.3. Deleting Virtual Disks from Containers¶
You can delete a virtual hard disk from a container with the prlctl set --device-del command.
You will need to specify the disk name, which you can find out with the prlctl list -i command. For example:
# prlctl list -i MyCT | grep "hdd"
hdd0 (+) scsi:0 image='/vz/private/9fd3eee7-70fe-43e3-9295-1ab29fe6dba5/root.hdd' type='expanded' 10240Mb mnt=/ subtype=virtio-scsi
hdd1 (+) scsi:1 real='/dev/disk/by-id/wwn-0x50014ee25a3df4dc' mnt=/userdisk subtype=virtio-scsi
Once you know the virtual device name, you can remove it from your container. For example, to remove the virtual disk hdd1 from the container MyCT, execute:
# prlctl set MyCT --device-del hdd1
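To confirm the removal, repeat the listing command; the deleted disk should no longer appear in the output:
# prlctl list -i MyCT | grep "hdd"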
2.14.7. Restarting Containers¶
You can restart containers from the inside using typical Linux commands, e.g., reboot or shutdown -r. Restarting is handled by the vzeventd daemon.
If necessary, you can keep containers from starting again after the reboot command has been executed from the inside, as follows:
To disable restarting for a specific container, add the ALLOWREBOOT="no" line to the container configuration file (/etc/vz/conf/<UUID>.conf).
To disable restarting globally for all containers on the server, add the ALLOWREBOOT="no" line to the global configuration file (/etc/vz/vz.conf).
To disable restarting globally except for specific containers, add the ALLOWREBOOT="no" line to the global configuration file (/etc/vz/vz.conf) and explicitly specify ALLOWREBOOT="yes" in the configuration files of the respective containers.
As a result, a container with the ALLOWREBOOT option set to "no" retains the mounted status after the reboot command has been executed inside it.
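For example, to disable restarting for a single container, you could append the line to its configuration file (a sketch; substitute the actual container UUID):
# echo 'ALLOWREBOOT="no"' >> /etc/vz/conf/<UUID>.conf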
2.14.8. Creating SimFS-based Containers¶
In Virtuozzo Hybrid Server 7, the simfs layout is based on bind mounts. When a simfs-based container is started, its private area is bind-mounted to the container root area.
To create a simfs container:
Set VEFSTYPE=simfs in /etc/vz/vz.conf.
Run prlctl create <CT_name>.
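For example, assuming /etc/vz/vz.conf already contains a VEFSTYPE line and using a hypothetical container name MySimfsCT (a sketch):
# sed -i 's/^VEFSTYPE=.*/VEFSTYPE=simfs/' /etc/vz/vz.conf
# prlctl create MySimfsCT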
The limitations of simfs in Virtuozzo Hybrid Server 7 are:
No support for first- or second-level quotas.
No support for live migration of simfs-based containers.
2.14.9. Bind-Mounting Host Directories Inside Containers¶
You can bind-mount a host directory inside a container using vzctl:
# vzctl set <CT> --bindmount_add <host_dir>:<CT_dir> --save
For example, to bind-mount a host directory /vz/host_dir to a container directory /home/ct_dir, run:
# vzctl set MyCT --bindmount_add /vz/host_dir:/home/ct_dir --save
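To confirm that the mount is visible inside the running container, one optional check (findmnt availability in the guest is an assumption) is:
# prlctl exec MyCT findmnt /home/ct_dir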
Such a bind mount is permanent: it persists across container restarts and is preserved during container backup.
To remove a bind mount, use --bindmount_del:
# vzctl set MyCT --bindmount_del /home/ct_dir
Note the following:
Containers with bind mounts cannot be migrated.
If the container is stopped, the host directory will be bind-mounted or unmounted on container start. If the container is running, the host directory will be bind-mounted or unmounted immediately; no container restart is required.
If the specified container directory does not exist, it will be created on bind-mount. It will not, however, be deleted on unmount.
If the specified container directory is not empty, its contents will be hidden by the bind-mounted host directory until it is unmounted.
The bind-mounted directory will obey file system permissions. A user with insufficient permissions inside the container will not be able to access it.
2.14.10. Checking Consistency of Container File System¶
You can check the consistency of a container's file system from the host. The container can be either stopped or running, so you can avoid downtime. Use the following command:
# ploop fscheck <CT_home>/root.hdd/DiskDescriptor.xml
where <CT_home> is the container's home directory.
For example, for a container MyCT:
# prlctl list -i MyCT | grep Home
Home: /vz/private/788aff6e-cb2f-4dec-9d61-ee3105fc90d7
# ploop fscheck /vz/private/788aff6e-cb2f-4dec-9d61-ee3105fc90d7/root.hdd/DiskDescriptor.xml
<...>
Running: fsck.ext4 -f -n /dev/ploop58356p1
<...>
No error found on /dev/ploop58356p1