Kubespray is an open-source project that provides Ansible playbooks for automating Kubernetes cluster deployment, configuration, and management. It is designed to work with multiple platforms, including cloud providers, virtual machines, and bare-metal servers. Kubespray aims to be a one-stop shop for all things related to Kubernetes installation and configuration. This article is a step-by-step guide to using Kubespray in practice to set up your own bare-metal k8s instances. Let’s go!
first, clone the Kubespray repository:
git clone https://github.com/kubernetes-sigs/kubespray.git
before starting any playbook, you have to check out the latest release branch (the master branch is not the latest stable release in this case):
git checkout release-2.22;
git pull origin release-2.22;
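to check which release branches actually exist before picking one, you can list the remote branches:
git branch -r | grep release-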
then set up a Python virtual environment and install the requirements:
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
ANSIBLE_VERSION=2.12
python3 -m venv $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements-$ANSIBLE_VERSION.txt
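to confirm the virtual environment is active and the expected versions were installed:
ansible --version
ansible-playbook --version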
the next step is to define the inventory file (hosts) in inventory/<cluster.name>/inventory.ini, for example:
## Configure 'ip' variable to bind kubernetes services on a
## different ip than the default iface
[all]
node1 ansible_host=95.54.0.12 ip=10.3.0.1
node2 ansible_host=95.54.0.13 ip=10.3.0.2
node3 ansible_host=95.54.0.14 ip=10.3.0.3
node4 ansible_host=95.54.0.15 ip=10.3.0.4
node5 ansible_host=95.54.0.16 ip=10.3.0.5
node6 ansible_host=95.54.0.17 ip=10.3.0.6
[kube_control_plane]
node1
node2
[etcd]
node1
node2
node3
[kube_node]
node2
node3
node4
node5
node6
[k8s_cluster:children]
kube_node
kube_control_plane
or:
[all]
void.node.000 ansible_host=192.168.1.108
void.node.001 ansible_host=192.168.1.110
void.node.002 ansible_host=192.168.1.109
[all:vars]
ansible_user=root
ansible_password=<password>
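before going further it’s worth checking that ansible can actually reach every host; a minimal connectivity test (assuming the inventory path from above):
ansible -i inventory/<cluster.name>/inventory.ini all -m ping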
after that we can prepare the cluster inventory directory and generate the hosts file:
# Define Cluster Name
CLUSTER_NAME='<cluster.name>'
# Copy inventory/sample as inventory/$CLUSTER_NAME
cp -rfp inventory/sample inventory/$CLUSTER_NAME
# Update Ansible inventory file with inventory builder
declare -a IPS=(<node.000_IP> <node.001_IP> <node.002_IP>)
CONFIG_FILE=inventory/$CLUSTER_NAME/inventory.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
mv inventory/$CLUSTER_NAME/inventory.ini inventory/$CLUSTER_NAME/hosts.yml
check inventory/$CLUSTER_NAME/hosts.yml and add ansible_user / ansible_password / ansible_ssh_private_key_file so ansible can log in to these hosts correctly:
all:
  hosts:
    k8s.node.000:
      ansible_host: 116.202.9.70
      ip: 10.0.0.2
      access_ip: 10.0.0.2
      ansible_user: root
      ansible_ssh_private_key_file: /home/tbs093a/.ssh/hetzner_accesses
    k8s.node.001:
      ansible_host: 65.21.145.241
      ip: 10.0.0.3
      access_ip: 10.0.0.3
      ansible_user: root
      ansible_ssh_private_key_file: /home/tbs093a/.ssh/hetzner_accesses
    k8s.node.002:
      ansible_host: 128.140.1.120
      ip: 10.0.0.4
      access_ip: 10.0.0.4
      ansible_user: root
      ansible_ssh_private_key_file: /home/tbs093a/.ssh/hetzner_accesses
  children:
    kube_control_plane:
      hosts:
        k8s.node.000:
        k8s.node.001:
    kube_node:
      hosts:
        k8s.node.000:
        k8s.node.001:
        k8s.node.002:
    etcd:
      hosts:
        k8s.node.000:
        k8s.node.001:
        k8s.node.002:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
ip and access_ip are the addresses the nodes use to communicate with each other. ansible_host is the public IP used for outside communication (for exposing web GUIs like nginx / jenkins / etc.) and is the address the kubespray ansible scripts will connect to.
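you can verify that the inventory parses and that the groups look right before deploying:
ansible-inventory -i inventory/$CLUSTER_NAME/hosts.yml --list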
after that, execute the commands below:
# Review and change parameters under 'inventory/$CLUSTER_NAME/group_vars'
cat inventory/$CLUSTER_NAME/group_vars/all/all.yml
cat inventory/$CLUSTER_NAME/group_vars/k8s_cluster/k8s-cluster.yml
also check inventory/$CLUSTER_NAME/group_vars/k8s_cluster/addons.yml, because this file controls which add-ons (like ingress / helm / cert-manager) will be enabled (more information about that in ./kubernetes/cluster.addons.manifests/readme.md); a quick way to flip those flags is shown in the sketch below.
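for example, to enable a few common add-ons non-interactively (a sketch; helm_enabled, ingress_nginx_enabled and cert_manager_enabled are flags present in the sample addons.yml, assumed here to still default to false):
# flip the add-on flags in place; adjust the list to what you actually need
sed -i 's/^helm_enabled: false/helm_enabled: true/' inventory/$CLUSTER_NAME/group_vars/k8s_cluster/addons.yml
sed -i 's/^ingress_nginx_enabled: false/ingress_nginx_enabled: true/' inventory/$CLUSTER_NAME/group_vars/k8s_cluster/addons.yml
sed -i 's/^cert_manager_enabled: false/cert_manager_enabled: true/' inventory/$CLUSTER_NAME/group_vars/k8s_cluster/addons.yml
after that, just execute: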
# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option '--become' is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without '--become' the playbook will fail to run!
sudo ansible-playbook -i inventory/$CLUSTER_NAME/hosts.yml --become --become-user=root cluster.yml -vvv
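if the playbook fails on a single host, you don’t always have to rerun everything; ansible’s --limit flag retries just the named node (node2 here is only an example — note that some kubespray tasks expect facts from all hosts, so a full run is the safe default):
sudo ansible-playbook -i inventory/$CLUSTER_NAME/hosts.yml --become --become-user=root cluster.yml --limit node2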
as you can see on the node1 server, for example:
root@node1:~# kubectl get no
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   83m   v1.26.3
node2   Ready    control-plane   82m   v1.26.3
node3   Ready    <none>          80m   v1.26.3
everything works fine. now we need a kubectl connection to use this cluster from the outside.
let’s copy the config file created by the ansible scripts from one of the nodes with control-plane status:
scp root@192.168.1.250:~/.kube/config ~/.kube/$CLUSTER_NAME.config
let’s save it and execute these commands:
CLUSTER_NAME='hetzner.ubuntu.cluster';
export KUBECONFIG=~/.kube/$CLUSTER_NAME.config;
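alternatively, if you prefer a single kubeconfig instead of switching KUBECONFIG per cluster, the configs can be merged (a sketch using standard kubectl flags; ~/.kube/merged.config is an arbitrary output name):
# merge both configs into one flattened file, then replace the default config
KUBECONFIG=~/.kube/config:~/.kube/$CLUSTER_NAME.config kubectl config view --flatten > ~/.kube/merged.config
mv ~/.kube/merged.config ~/.kube/config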
the only thing you must change in $CLUSTER_NAME.config is the server address:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CA.data>
    server: https://<control.plane.node.IP>:6443 # <<<< here
  name: cluster.local
contexts:
- context:
    cluster: cluster.local
    user: kubernetes-admin
  name: kubernetes-admin@cluster.local
current-context: kubernetes-admin@cluster.local
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <CC.data>
    client-key-data: <CK.data>
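the edit can also be scripted; a hedged one-liner (assuming the API server listens on the default port 6443, with <control.plane.node.IP> as the address you want):
sed -i 's#server: https://.*:6443#server: https://<control.plane.node.IP>:6443#' ~/.kube/$CLUSTER_NAME.config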
as you can see on the local computer:
╭ tbs093a@void.node.00:~
╰ λ kubectl get no
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   99m   v1.26.3
node2   Ready    control-plane   98m   v1.26.3
node3   Ready    <none>          97m   v1.26.3
everything works fine.
# Problem Solving
## problems during installation
### no-log issue
if you get an error like this from /home/tbs093a/Projects/cloud.config/ansible/distro.configs/kubespray/roles/download/tasks/download_file.yml:85:
fatal: [node2]: FAILED! => {
    "attempts": 4,
    "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result",
    "changed": true
}
just change this parameter (the `-` line to the `+` line) in kubespray/roles/download/tasks/download_file.yml, in the task named `Download_file | Download item`:
- no_log: "{{ not (unsafe_show_logs | bool) }}"
+ no_log: false
repeat this for any other tasks that return the mentioned error
source:
https://github.com/kubernetes-sigs/kubespray/issues/9037
### checksums issues
if you have problems with calico checksums, you must change the hash value (the `-` line to the `+` line, around line 628) in `roles/download/defaults/main.yml`:
- v3.25.1: 361b0e0e6d64156f0e1b2fbfd18d13217d188eee614eec5de6b05ac0deaab372
+ v3.25.1: 4d6b6653499f24f80a85a0a7dac28d9571cabfa25356b08f3b438fd97e322e2d
sources:
https://github.com/kubernetes-sigs/kubespray/pull/9990/commits/d063a92b3acfe4dfb6528dd5e0745141e80f0644
https://github.com/kubernetes-sigs/kubespray/pull/9990
and if you have a problem like this:
fatal: [node1]: FAILED! => {"attempts": 4, "changed": true, "checksum_dest": null, "checksum_src": "d2a906b097098b3936f4614bb5a460b08687cf74", "dest": "/tmp/releases/crictl-v1.26.0-linux-arm64.tar.gz", "elapsed": 0, "msg": "The checksum for /tmp/releases/crictl-v1.26.0-linux-arm64.tar.gz did not match b632ca705a98edc8ad7806f4279feaff956ac83aa109bba8a85ed81e6b900599; it was cda5e2143bf19f6b548110ffba0fe3565e03e8743fadd625fee3d62fc4134eed.", "src": "/root/.ansible/tmp/ansible-moduletmp-1692384020.8382897-uj4i01y1/tmp48d0648t", "url": "https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz"}
just change b632ca705a98edc8ad7806f4279feaff956ac83aa109bba8a85ed81e6b900599 to cda5e2143bf19f6b548110ffba0fe3565e03e8743fadd625fee3d62fc4134eed for crictl (or do the same for whichever package appears in your error) in roles/download/defaults/main/checksums.yml
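to find the correct checksum yourself instead of copying it from the error, you can download the artifact and hash it (using the URL from the error message):
curl -fsSL https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz | sha256sum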
if some nodes have a different architecture than the others, the checksums will not work. either use the same architecture for all nodes or use a multi-architecture kubespray fork: https://github.com/yuha0/kubespray
source: https://github.com/kubernetes-sigs/kubespray/issues/7934
### different node systems (other than ubuntu)
if you have a different system than ubuntu (void linux in this case) – you must make some changes in the configuration:
– comment out all of the lines shown below in kubespray/roles/kubernetes/preinstall/tasks/0040-verify-settings.yml:
# - name: Stop if non systemd OS type
#   assert:
#     that: ansible_service_mgr == "systemd"
#   when: not ignore_assert_errors
# - name: Stop if the os does not support
#   assert:
#     that: (allow_unsupported_distribution_setup | default(false)) or ansible_distribution in supported_os_distributions
#     msg: "{{ ansible_distribution }} is not a known OS"
#   when: not ignore_assert_errors
you can get an error like this:
TASK [kubernetes/preinstall : Install packages requirements] **********************************************
task path: /home/tbs093a/Projects/cloud.config/ansible/distro.configs/kubespray/roles/kubernetes/preinstall/tasks/0070-system-packages.yml:80
fatal: [node2]: FAILED! => {
    "attempts": 4,
    "changed": false,
    "invocation": {
        "module_args": {
            "name": [
                "openssl",
                "curl",
                "rsync",
                "socat",
                "unzip",
                "e2fsprogs",
                "xfsprogs",
                "ebtables",
                "bash-completion",
                "tar",
                "ipvsadm",
                "ipset"
            ],
            "recurse": false,
            "state": "present",
            "update_cache": true,
            "upgrade": false,
            "upgrade_xbps": true
        }
    },
    "msg": "failed to install 7 packages(s)",
    "packages": [
        "rsync",
        "socat",
        "unzip",
        "ebtables",
        "bash-completion",
        "ipvsadm",
        "ipset"
    ]
}
it means some packages could not be installed from the repositories – you can try installing them from binaries instead.
### to use the ‘ssh’ connection type with passwords or pkcs11_provider, you must install the sshpass program
in this case you need to install the sshpass package via your system package manager.
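for example (the package name is the same on both distros):
sudo apt install sshpass        # debian / ubuntu
sudo xbps-install -S sshpass    # void linux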
### Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host’s fingerprint to your known_hosts file to manage this host.
this requires a change in ansible.cfg:
host_key_checking=False
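the same thing can be done per-run with an environment variable instead of editing ansible.cfg:
export ANSIBLE_HOST_KEY_CHECKING=False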
### ERROR: Ansible requires the locale encoding to be UTF-8; Detected None.
in this case, you can add the environment variables below to /etc/environment:
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8
and it will work (just restart the terminal)
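if the locale itself is missing from the system, generating it first may be needed (debian/ubuntu example):
sudo locale-gen en_US.UTF-8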
### Warning: Permanently added ‘github.com,140.82.121.4’ (ECDSA) to the list of known hosts. Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists.
add:
ForwardAgent yes
into ~/.ssh/config on the machine you run ansible from, and modify ansible.cfg:
[defaults]
transport = ssh
[ssh_connection]
ssh_args = -o ForwardAgent=yes
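agent forwarding only helps if an ssh agent is actually running with your key loaded on the machine you run ansible from; for example (the key path is just an example):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa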