Linux Containers

sudo apt upgrade && sudo apt autoremove
sudo snap install lxd --channel=6.1/stable
sudo usermod -aG lxd ${USER}
sudo snap start lxd
sudo snap stop lxd
sudo snap logs lxd
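
The lxd group change only affects new sessions; a quick sanity check that group membership and the daemon are in place (newgrp and lxd waitready are standard commands):

newgrp lxd                  # pick up the new group without logging out
getent group lxd            # confirm the user is listed
lxd waitready --timeout 60  # block until the daemon is ready
lxd --version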

Initial Setup

sudo lxd init
:'
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: lxd-btrfs-pool-aa
Name of the storage backend to use (btrfs, ceph, dir, lvm, powerflex) [default=btrfs]: 
Create a new BTRFS pool? (yes/no) [default=yes]: 
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]: 
Size in GiB of the new loop device (1GiB minimum) [default=30GiB]: 100GiB
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 10.20.0.1/24
Would you like LXD to NAT IPv4 traffic on your bridge? [default=yes]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none
Would you like the LXD server to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
'
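
For unattended setups the same answers can be passed non-interactively; a minimal sketch with lxd init --auto (flag names as printed by lxd init --help, worth confirming on the installed LXD version):

sudo lxd init --auto \
  --storage-backend=btrfs \
  --storage-create-loop=100 \
  --storage-pool=lxd-btrfs-pool-aa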

lxc image   ls images:debian    bookworm $(dpkg --print-architecture)
lxc image   ls images:ubuntu    noble    $(dpkg --print-architecture)
lxc image   ls images:alpine    3.20     $(dpkg --print-architecture)
lxc image   ls images:opensuse  15.6     $(dpkg --print-architecture)
lxc image   ls images:archlinux current  $(dpkg --print-architecture)
lxc image   ls

lxc storage ls
lxc network ls
lxc ls

config: {}
networks:
- config:
    ipv4.address: 10.20.0.1/24
    ipv4.nat: "true"
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config:
    size: 100GiB
  description: ""
  name: lxd-btrfs-pool-aa
  driver: btrfs
storage_volumes: []
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: lxd-btrfs-pool-aa
      type: disk
  name: default
projects: []
cluster: null
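
The preseed printed above can be saved and replayed to reproduce the same setup on another host; lxd-preseed.yaml below is just a hypothetical file name for that saved output:

cat lxd-preseed.yaml | sudo lxd init --preseed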

Management

lxc image list images: ubuntu amd64
lxc image list images: ubuntu arm64
lxc image list images: ubuntu
lxc launch images:alpine/3.20 academia --vm
lxc launch images:alpine/3.20 academia
lxc restart academia
lxc start academia
lxc stop academia
lxc launch images:opensuse/15.6 agronomy --vm
lxc launch images:opensuse/15.6 agronomy
lxc restart agronomy
lxc start agronomy
lxc stop agronomy
lxc launch images:apertis/v2020 robotics
lxc launch images:apertis/v2021 robotics
lxc restart robotics
lxc start robotics
lxc stop robotics
lxc list -c nsS
lxc list
lxc exec academia -- su --login chorke
lxc exec academia -- /bin/sh
lxc exec academia sh
lxc console academia
download from container to host
lxc file pull academia/etc/hosts ./

upload into container from host
lxc file push -r ~/.ssh/ academia/root/.ssh/
lxc file push ~/.ssh/known_hosts academia/root/.ssh/
lxc stop academia && lxc delete academia
lxc delete academia --force
manipulate remote images
lxc image refresh ubuntu:21.04
lxc image delete ubuntu:21.04
lxc image show ubuntu:21.04
lxc image edit ubuntu:21.04
manipulate local images
lxc image delete 97c97f4a1d2d
lxc image delete a7b1071c0609
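
Per-instance resources can also be tuned from the host with the standard limits.* keys; a short sketch against the academia container launched above:

lxc config set  academia limits.cpu 2
lxc config set  academia limits.memory 2GiB
lxc config show academia --expanded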

Create Alias

lxc launch images:alpine/3.20 academia &&
cat <<'EXE' | lxc exec academia -- sh
sleep 5
apk --update add --no-cache curl \
openjdk21-jre; java -version
EXE

create alias:
lxc stop     academia
lxc publish  academia --alias\
 alpine/3.20:curl:java

create alias from snapshot:
lxc snapshot academia curl:java &&
lxc publish  academia/curl:java --alias\
 alpine/3.20:curl:java
lxc delete   academia

launch alias:
lxc launch   alpine/3.20:curl:java academia &&
lxc exec     academia sh
lxc stop     academia && lxc delete academia
lxc launch images:opensuse/15.6 agronomy &&
cat <<'EXE' | lxc exec agronomy -- bash
sleep 5
zypper install -y curl java-21-openjdk
java -version
EXE

create alias:
lxc stop     agronomy
lxc publish  agronomy --alias\
 opensuse/15.6:curl:java

create alias from snapshot:
lxc snapshot agronomy curl:java &&
lxc publish  agronomy/curl:java --alias\
 opensuse/15.6:curl:java
lxc delete   agronomy

launch alias:
lxc launch   opensuse/15.6:curl:java agronomy &&
lxc exec     agronomy bash
lxc stop     agronomy && lxc delete agronomy
lxc launch images:oracle/9 economia &&
cat <<'EXE' | lxc exec economia -- bash
sleep 5
dnf install -y curl java-11-openjdk
java -version
EXE

create alias:
lxc stop     economia
lxc publish  economia --alias\
 oracle/9:curl:java

create alias from snapshot:
lxc snapshot economia curl:java &&
lxc publish  economia/curl:java --alias\
 oracle/9:curl:java
lxc delete   economia

launch alias:
lxc launch   oracle/9:curl:java economia &&
lxc exec     economia bash
lxc stop     economia && lxc delete economia


lxc launch images:fedora/40 robotics &&
cat <<'EXE' | lxc exec robotics -- bash
sleep 5
dnf install -y curl java-21-openjdk
java -version
EXE

create alias:
lxc stop     robotics
lxc publish  robotics --alias\
 fedora/40:curl:java

create alias from snapshot:
lxc snapshot robotics curl:java &&
lxc publish  robotics/curl:java --alias\
 fedora/40:curl:java
lxc delete   robotics

launch alias:
lxc launch   fedora/40:curl:java robotics &&
lxc exec     robotics bash
lxc stop     robotics && lxc delete robotics
lxc launch images:ubuntu/noble/desktop software --vm &&
cat <<'EXE' | lxc exec software -- bash
sleep 5
apt-get update && apt-get install -y curl openjdk-21-jre
java -version
EXE

create alias:
lxc stop     software
lxc publish  software --alias\
 ubuntu/24.04:curl:java

create alias from snapshot:
lxc snapshot software curl:java &&
lxc publish  software/curl:java --alias\
 ubuntu/24.04:curl:java
lxc delete   software

launch alias:
lxc launch   ubuntu/24.04:curl:java software &&
lxc exec     software bash
lxc stop     software && lxc delete software
lxc launch images:archlinux travelia &&
cat <<'EXE' | lxc exec travelia -- sh
sleep 5
pacman --noconfirm -Syu curl jre21-openjdk
java -version
EXE

create alias:
lxc stop     travelia
lxc publish  travelia --alias\
 archlinux:curl:java

create alias from snapshot:
lxc snapshot travelia curl:java &&
lxc publish  travelia/curl:java --alias\
 archlinux:curl:java
lxc delete   travelia

launch alias:
lxc launch   archlinux:curl:java travelia &&
lxc exec     travelia sh
lxc stop     travelia && lxc delete travelia
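
Published aliases accumulate in the local image store and can be listed and pruned with the standard lxc image subcommands; shown here for the alpine alias created above:

lxc image alias list local:
lxc image list local:
lxc image delete alpine/3.20:curl:java    # removes the published image together with its alias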

Knowledge

lxc list
snap info lxd
journalctl -xe
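
LXD's own event stream is often more telling than journalctl alone; lxc monitor is a standard subcommand and --type narrows it to log messages:

lxc monitor --type=logging --pretty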

snap revert lxd
snap list --all lxd
sudo snap remove lxd

journalctl -u snap.lxd.daemon
systemctl  reload snap.lxd.daemon
systemctl  status snap.lxd.daemon
troubleshoot network
sudo tcpdump -i eth0 -vn icmp
sudo nmcli c up System\ eth0

ps uax | grep systemd-udev
ps aux | grep dnsmasq
ip addr flush dev eth0

ip addr show
ip a
ip r
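
The same checks pointed at the LXD bridge itself:

ip addr show lxdbr0
lxc network show lxdbr0
lxc network list
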
sudo snap install lxd
sudo snap install lxd --channel=3.0/stable
sudo snap install lxd --channel=4.0/stable
sudo snap install lxd --channel=5.0/stable

sudo groupadd --system lxd
sudo usermod -G lxd -a $USER
newgrp lxd
mount | grep cgroup
systemd.unified_cgroup_hierarchy=0
cat /var/snap/lxd/common/lxd/logs/lxd.log
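
systemd.unified_cgroup_hierarchy=0 is a kernel boot parameter that forces the host back to the legacy cgroup v1 layout for old containers; before touching the boot command line, check which layout the host is on:

stat -fc %T /sys/fs/cgroup/    # cgroup2fs => unified v2, tmpfs => legacy v1
cat /proc/cmdline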

sudo apt update  && sudo apt list --upgradable 
sudo apt upgrade && sudo apt autoremove
sudo apt install snapd
sudo zfs list -t all
sudo ps fauxww

sudo lxd.migrate
sudo lxd init --auto
sudo lxd init

lxc remote list
lxc image  list
lxc image list images:
lxc image list images: alpine
lxc image list images: alpine arm64
lxc image list images: alpine amd64

lxc image list images: debian
lxc image list images: debian arm64
lxc image list images: debian amd64

lxc image list images: oracle
lxc image list images: oracle arm64
lxc image list images: oracle amd64
lxc image list images: ubuntu
lxc image list images: ubuntu arm64
lxc image list images: ubuntu amd64

sudo ls -la /var/snap/lxd/common/lxd/images/
sudo cat /var/snap/lxd/common/lxcfs.pid
sudo systemctl stop snap.lxd.daemon
sudo apt install zfsutils-linux

sudo lxc --verbose --debug list
sudo lxd --debug --group lxd
sudo ps fauxww | grep lx

lxc launch images:alpine/3.17 academia
lxc snapshot academia install:curl
lxc snapshot academia install:java
lxc info academia
lxc launch images:opensuse/15.3 agronomy
lxc snapshot agronomy install:curl
lxc snapshot agronomy install:java
lxc info agronomy
lxc launch images:ubuntu/22.04 software
lxc snapshot software install:curl
lxc snapshot software install:java
lxc info software
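
Named snapshots such as install:java can be rolled back to or removed individually with the standard restore and delete commands, using the names created above:

lxc restore academia install:java
lxc delete  academia/install:curl
lxc info    academia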

sudo snap refresh lxd --channel=latest
sudo snap refresh lxd --channel=stable
lxc --version
lxd --version

ls -lah /etc/dnsmasq.d-available/lxd/
ls -lah /etc/dnsmasq.d/lxd/
LXD_BRIDGE_IP=$(ip addr show lxdbr0 | awk '/inet / {print $2}' | cut -d '/' -f 1)
cat << EOF | sudo tee /etc/dnsmasq.d-available/lxd >>/dev/null
server=/lxd/${LXD_BRIDGE_IP}
EOF

sudo ln -s /etc/dnsmasq.d-available/lxd /etc/dnsmasq.d/lxd
sudo rm -f /etc/dnsmasq.d/lxd
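
After linking the file, the dnsmasq instance that reads /etc/dnsmasq.d has to reload before *.lxd names resolve through the bridge; the sketch below assumes the system dnsmasq service and the default lxd DNS domain, with academia as an example instance:

sudo systemctl restart dnsmasq
dig @${LXD_BRIDGE_IP} academia.lxd +short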

References

Vagrant
MAAS
Snap
CIDR
UFW