Ubuntu 18.04

Set up a ceph experimentation environment

Basic setup

First, make sure that each node has the most current package versions installed.

apt update && apt upgrade -y

Install basic dependencies.

apt install vim screen parted curl chrony hdparm hddtemp net-tools -y
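
Ceph monitors are sensitive to clock skew, so it is worth confirming that chrony is actually synchronizing the clock on every node, for example:

chronyc tracking
chronyc sources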

User configuration

Create the ceph user on each node.

useradd -m ceph-user
usermod -a -G sudo ceph-user
usermod --shell /bin/bash ceph-user

We need to ensure that the admin node can run sudo on every node without being asked for a password.

echo "ceph-user ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-user
sudo chmod 0440 /etc/sudoers.d/ceph-user
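
As a quick sanity check you can, as root, verify the rule; the command should succeed without any password prompt (the echoed message is just an example):

su - ceph-user -c 'sudo -n true' && echo "passwordless sudo works"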

Set the password for the ceph user.

passwd ceph-user

Sensors

You can detect and read your sensors with the following tools.

apt install lm-sensors -y
sensors-detect
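
Once sensors-detect has loaded the matching kernel modules, you can read the current values; hddtemp additionally reports the drive temperature (assuming /dev/sda is one of your disks):

sensors
hddtemp /dev/sda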

Admin node

We set up our admin node first; ceph-deploy will be installed here and used to provision the other nodes.

Network configuration

Enable forwarding

sysctl -w net.ipv4.ip_forward=1
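
This setting only lasts until the next reboot. To make it persistent you can, for example, drop it into a sysctl configuration file (the file name is arbitrary) and reload:

echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/99-ip-forward.conf
sysctl --system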

Network configuration via netplan

vim /etc/netplan/01-netcfg.yaml

I am using a dedicated internal network, which needs some configuration to set static IPs and to ensure the routing.

network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:
      dhcp4: yes
    enp2s0:
      addresses: [172.16.10.21/24]
      routes:
        - to: 0.0.0.0/0
          via: 192.168.1.1
          metric: 120
        - to: 172.16.10.0/24
          via: 172.16.10.1
          metric: 110
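
Apply the new network configuration. netplan try rolls the change back automatically if you do not confirm it within the timeout, which is useful when working remotely:

netplan try
netplan apply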

Generate SSH keys

We will first generate an SSH key pair for the ceph-user on the admin node.

su - ceph-user
ssh-keygen

On the admin node, create a folder for our cluster files.

mkdir ceph
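
ceph-deploy writes the cluster configuration and keyrings into its current working directory, so run the later ceph-deploy commands from inside this folder:

cd ~/ceph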

Hosts

Open the hosts configuration file.

vim /etc/hosts

Add all hosts of your ceph environment.

192.168.1.21    apu01
192.168.1.22    apu02
192.168.1.23    apu03
192.168.1.24    apu04
192.168.1.25    apu05
192.168.1.26    apu06
192.168.1.27    apu07
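
You can verify the name resolution for any node, for example:

getent hosts apu01
ping -c 1 apu01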

SSH key distribution and configuration

Copy the ssh key to all nodes in your cluster.

ssh-copy-id ceph-user@apu01
ssh-copy-id ceph-user@apu02
ssh-copy-id ceph-user@apu03
ssh-copy-id ceph-user@apu04
ssh-copy-id ceph-user@apu05
ssh-copy-id ceph-user@apu06
ssh-copy-id ceph-user@apu07

Open the ssh config file.

vim ~/.ssh/config

Add all nodes to your ssh config file.

Host apu01
   Hostname apu01
   User ceph-user
Host apu02
   Hostname apu02
   User ceph-user
Host apu03
   Hostname apu03
   User ceph-user
Host apu04
   Hostname apu04
   User ceph-user
Host apu05
   Hostname apu05
   User ceph-user
Host apu06
   Hostname apu06
   User ceph-user
Host apu07
   Hostname apu07
   User ceph-user
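
A quick end-to-end check that key-based login and passwordless sudo both work (apu07 is just an example); this should print the hostname and root without any password prompt:

ssh apu07 'hostname && sudo -n whoami'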

Python setup

Install the python dependencies as the administrative user.

apt install python-routes python-dev python3-dev python-pip python3-pip -y

Upgrade pip itself.

pip install --upgrade pip

Ceph-deploy installation

Install ceph-deploy on the admin node.

pip install ceph-deploy

Ceph installation

Install ceph on all nodes via ceph-deploy, run from the admin node as the ceph-user.

ceph-deploy install apu01 apu02 apu03 apu04 apu05 apu06 apu07

Create monitors

ceph-deploy new apu04 apu05 apu06

Initialize the monitors to create a quorum

ceph-deploy mon create-initial

Admin setup

Add the configuration for the cluster to all nodes

ceph-deploy --overwrite-conf admin apu01 apu02 apu03 apu04 apu05 apu06 apu07
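
The admin keyring in /etc/ceph is typically only readable by root, so check the cluster status with sudo; the monitors apu04, apu05 and apu06 should have formed a quorum:

sudo ceph -s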

Manager setup

ceph-deploy mgr create apu04 apu05 apu06

OSD setup

Partition and format the block devices on each OSD node.

parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
mkfs.xfs -f /dev/sdb

Create the OSDs from the admin node via ceph-deploy.

ceph-deploy osd create --data /dev/sdb apu01
ceph-deploy osd create --data /dev/sdb apu02
ceph-deploy osd create --data /dev/sdb apu03
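
Afterwards the OSDs should show up as up and in, for example:

sudo ceph osd tree
sudo ceph -s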

MDS, CephFS and dashboard setup

As the root user, run on any node:

apt install ceph-mds -y
ceph-deploy mds create apu07 apu06
ceph osd pool create cephfs_data 100
ceph osd pool create cephfs_metadata 100
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
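
You can confirm that an MDS is running and that the filesystem was created:

ceph mds stat
ceph fs ls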

Set the IP address and port for each manager node.

ceph config set mgr mgr/dashboard/apu04/server_addr 192.168.1.24
ceph config set mgr mgr/dashboard/apu04/server_port 8080
ceph config set mgr mgr/dashboard/apu05/server_addr 192.168.1.25
ceph config set mgr mgr/dashboard/apu05/server_port 8080
ceph config set mgr mgr/dashboard/apu06/server_addr 192.168.1.26
ceph config set mgr mgr/dashboard/apu06/server_port 8080

Restart the dashboard

ceph mgr module disable dashboard
ceph mgr module enable dashboard
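
The active manager reports the URL it is serving the dashboard on:

ceph mgr services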

Add a user

ceph dashboard set-login-credentials <username> <password>

Ceph client with cephfs

On the client node, create a folder where you want to mount CephFS.

mkdir /mnt/cephfs

Get the key from the admin node

On the admin node, run as the ceph-user in the cluster folder:

cat ceph.client.admin.keyring

This will give you the secret key. On the client, run:

mount -t ceph 192.168.1.24:6789:/ /mnt/cephfs -o name=admin,secret=<secret-key>
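
Passing the secret directly on the command line exposes it in the shell history and process list. As an alternative you can store it in a file and reference it via the secretfile option (the path /etc/ceph/admin.secret is just a convention):

echo "<secret-key>" > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 192.168.1.24:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret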

Reset environment

If you want to start over with the setup, you can reset the cluster with the following commands.

ceph-deploy purge apu01 apu02 apu03 apu04 apu05 apu06 apu07
ceph-deploy purgedata apu01 apu02 apu03 apu04 apu05 apu06 apu07
ceph-deploy forgetkeys
rm ceph.*

Shut down the ceph cluster

Before powering the nodes off, set the following flags so the cluster does not mark OSDs out or start backfill and recovery while they are down.

ceph osd set noout
ceph osd set nobackfill
ceph osd set norecover
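
When you bring the cluster back up, clear these flags again so that recovery and rebalancing can resume:

ceph osd unset noout
ceph osd unset nobackfill
ceph osd unset norecover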
