LXD Initialization and Configuration

Install, initialize and configure LXD.

This guide provides instructions for installing, initializing and configuring LXD. LXD is a lightweight container and virtual machine manager; in this project it is used to create and manage nodes in the form of Linux (LXC) system containers.


Install LXD

Most nodes in this project run as LXD Linux containers. To support them we need to create the LXD profile microk8s and the network bridge lxdbr1. The microk8s profile is required for installing MicroK8s on c2d-ks1, c2d-ks2 and c2d-ks3.

LXD is installed by default on Ubuntu 22.04, but it still has to be initialized using lxd init. You can either accept all defaults by pressing the return key, or feed it the preseed file as shown below. Note: the init will create a bridge lxdbr0.

sudo apt install curl -y
sudo snap install lxd
curl -s -L https://gitlab.com/c2platform/ansible/-/raw/master/doc/howto-development/lxd-init-preseed.yml | lxd init --preseed
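
To confirm the initialization worked, you can dump the applied configuration and check that the lxdbr0 bridge exists. These are standard LXD commands; the --dump flag is available in recent LXD releases.

lxd init --dump          # print the current configuration as preseed YAML
lxc network show lxdbr0  # the bridge created by lxd init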

Create microk8s profile

Create the microk8s profile.

lxc profile create microk8s
curl -s -L https://gitlab.com/c2platform/ansible/-/raw/master/doc/howto-development/microk8s.profile.txt | lxc profile edit microk8s
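
The actual profile content comes from the URL above. As a quick check you can show the imported profile; for MicroK8s such a profile generally enables nesting, privileged mode and a list of kernel modules (a general expectation based on upstream MicroK8s-on-LXD guidance, not a claim about this exact file).

lxc profile show microk8s
# Expect keys along the lines of security.nesting: "true",
# security.privileged: "true" and a linux.kernel_modules list.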

Create lxdbr1 bridge

Create the bridge lxdbr1 and attach it to the default profile.

lxc network create lxdbr1 ipv6.address=none ipv4.address=1.1.4.1/24 ipv4.nat=true
lxc network attach-profile lxdbr1 default eth1
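
Optionally verify the result:

lxc network show lxdbr1   # should show ipv4.address 1.1.4.1/24 and ipv4.nat true
lxc profile show default  # the default profile should now list an eth1 device on lxdbr1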

Set https_address

Make LXD available to the Vagrant LXD provider by setting core.https_address and configuring trust (trust is configured in the LXC trust section below). Without this step you will see a message on vagrant up similar to:

The LXD provider could not authenticate to the daemon at https://127.0.0.1:8443.

lxc config set core.https_address [::]:8443
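
Optionally verify the setting:

lxc config get core.https_address   # should print [::]:8443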

Vagrant synced folders

Enable support for LXD synced folders by modifying /etc/subuid and /etc/subgid. Without this step you will see a message on vagrant up similar to:

The host machine does not support LXD synced folders

echo root:1000:1 | sudo tee -a /etc/subuid
echo root:1000:1 | sudo tee -a /etc/subgid
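
Optionally check that both files now contain the new mapping:

tail -n 1 /etc/subuid /etc/subgid   # both should end with root:1000:1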

Dir storage pool

On Ubuntu 22.04 the default storage pool uses the zfs driver, which is nice for production-type environments but cumbersome for development purposes. For this reason this project uses a custom storage pool c2d that uses the dir driver.

lxc storage create c2d dir
lxc profile edit default  # and change pool to c2d
Example output:
onknows@io3:~/git/gitlab/vagrant-lxd$ lxc storage create c2d dir
Storage pool c2d created
onknows@io3:~$ lxc storage ls
+---------+--------+--------------------------------------------+-------------+---------+---------+
|  NAME   | DRIVER |                   SOURCE                   | DESCRIPTION | USED BY |  STATE  |
+---------+--------+--------------------------------------------+-------------+---------+---------+
| c2d     | dir    | /var/snap/lxd/common/lxd/storage-pools/c2d |             | 0       | CREATED |
+---------+--------+--------------------------------------------+-------------+---------+---------+
| default | zfs    | /var/snap/lxd/common/lxd/disks/default.img |             | 0       | CREATED |
+---------+--------+--------------------------------------------+-------------+---------+---------+
onknows@io3:~$ lxc profile edit default  # change pool to c2d
onknows@io3:~$ lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  eth1:
    nictype: bridged
    parent: lxdbr1
    type: nic
  root:
    path: /
    pool: c2d
    type: disk
name: default
used_by: []
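
If you prefer not to use the interactive editor, the pool can usually be changed directly on the root disk device. This is a sketch; the key=value form applies to the LXD 4.x/5.x snap shipped with Ubuntu 22.04, while older releases use a space-separated key and value.

lxc profile device set default root pool=c2d  # point the default root disk at the c2d pool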

LXC trust

Vagrant up the first node:

c2
vagrant up c2d-rproxy1

This will produce the error message:

The LXD provider could not authenticate to the daemon at https://127.0.0.1:8443.

You can fix this with:

lxc config trust add ~/.vagrant.d/data/lxd/client.crt
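
Optionally confirm the certificate was added, then run vagrant up again:

lxc config trust list     # the Vagrant client certificate should be listed
vagrant up c2d-rproxy1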