Kubernetes Cluster On Ubuntu VirtualBox: A Step-by-Step Guide
Setting up a Kubernetes cluster on Ubuntu using VirtualBox is a fantastic way to learn about Kubernetes, test deployments, or even run small-scale applications locally. This guide will walk you through the entire process, step by step, ensuring you have a functional cluster by the end. Let's dive in!
Prerequisites
Before we get started, make sure you have the following:
- VirtualBox: You can download it from the VirtualBox website.
- Ubuntu ISO: Grab the latest Ubuntu Server ISO from the Ubuntu website.
- Sufficient Resources: Ensure your machine has enough RAM (at least 8GB is recommended) and CPU cores (4 or more) to comfortably run multiple virtual machines.
Step 1: Creating the Virtual Machines
First, we'll create three virtual machines: one for the Kubernetes master node and two for the worker nodes. This setup provides a basic but functional cluster for learning and experimentation.
Creating the Master Node
- Open VirtualBox and click "New".
- Name: Give your VM a descriptive name like "k8s-master".
- Type: Select "Linux".
- Version: Choose "Ubuntu (64-bit)".
- Memory size: Allocate at least 4GB of RAM.
- Hard disk: Create a virtual hard disk (VDI), dynamically allocated, with a size of at least 20GB.
- Select the created VM and click "Settings".
- Go to "Storage", under "Controller: IDE", click on "Empty", then on the right side, click the CD icon and choose your Ubuntu ISO file.
- Go to "Network", and in "Attached To:", select "Bridged Adapter". Choose your network adapter (e.g., Ethernet or Wi-Fi). This allows your VMs to access the internet and communicate with each other on your local network.
Creating the Worker Nodes
Repeat the above steps to create two more VMs, naming them "k8s-worker-1" and "k8s-worker-2". Allocate at least 2GB of RAM for each worker node. The hard disk size can be the same as the master node. Ensure you also configure the network settings for bridged networking.
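If you'd rather script the VM creation than click through the wizard, VirtualBox's VBoxManage CLI can do the same thing. Below is a minimal sketch for the master node; the ISO path, disk location, and bridged adapter name are assumptions you'll need to adjust for your machine, and the workers would repeat the same steps with less memory.

```bash
# Create and register the VM using the Ubuntu 64-bit profile
VBoxManage createvm --name k8s-master --ostype Ubuntu_64 --register

# 4 GB RAM, 2 vCPUs, bridged networking (the adapter name enp3s0 is only an example)
VBoxManage modifyvm k8s-master --memory 4096 --cpus 2 --nic1 bridged --bridgeadapter1 enp3s0

# 20 GB dynamically allocated disk on a SATA controller (paths are examples)
VBoxManage createmedium disk --filename ~/VirtualBox\ VMs/k8s-master/k8s-master.vdi --size 20480
VBoxManage storagectl k8s-master --name SATA --add sata --controller IntelAhci
VBoxManage storageattach k8s-master --storagectl SATA --port 0 --device 0 --type hdd \
  --medium ~/VirtualBox\ VMs/k8s-master/k8s-master.vdi

# IDE controller with the Ubuntu ISO attached so the VM boots into the installer
VBoxManage storagectl k8s-master --name IDE --add ide
VBoxManage storageattach k8s-master --storagectl IDE --port 0 --device 0 --type dvddrive \
  --medium ~/Downloads/ubuntu-server.iso
```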
Step 2: Installing Ubuntu on the Virtual Machines
Now, let's install Ubuntu Server on each of the virtual machines.
- Start the "k8s-master" VM. It should boot from the Ubuntu ISO.
- Follow the on-screen instructions to install Ubuntu. During the installation, you'll be prompted to configure various settings.
- Language and Keyboard: Choose your preferred language and keyboard layout.
- Network: Since you're using bridged networking, the VM should automatically obtain an IP address. Verify this during the installation.
- Storage: Use the entire virtual disk.
- Profile setup: Create a user account with a username and password. Remember these credentials, as you'll need them to log in.
- SSH setup: Crucially, install the OpenSSH server. This allows you to remotely access the VM via SSH, which is essential for managing the cluster.
- Repeat the installation process for "k8s-worker-1" and "k8s-worker-2". Use the same user account credentials for consistency.
Step 3: Configuring the Hostnames and Static IPs
To ensure stable communication within the cluster, assign static IP addresses and hostnames to each VM.
Setting Hostnames
- SSH into each VM using the username and password you created during the Ubuntu installation. You can find the IP address of each VM with the ip addr command.
- Edit the /etc/hostname file using a text editor like nano or vim:

```bash
sudo nano /etc/hostname
```

  - On k8s-master, set the hostname to k8s-master.
  - On k8s-worker-1, set the hostname to k8s-worker-1.
  - On k8s-worker-2, set the hostname to k8s-worker-2.
- Edit the /etc/hosts file and add the following lines to map the hostnames to IP addresses. Replace the IP addresses with the actual IP addresses of your VMs:

```bash
sudo nano /etc/hosts
```

```
192.168.1.10 k8s-master
192.168.1.11 k8s-worker-1
192.168.1.12 k8s-worker-2
```

- Reboot each VM for the changes to take effect:

```bash
sudo reboot
```
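If you prefer not to edit /etc/hostname by hand, hostnamectl sets the hostname in a single command; a minimal sketch (run the matching line on the corresponding VM):

```bash
# hostnamectl updates /etc/hostname and the running hostname in one step
sudo hostnamectl set-hostname k8s-master    # on the master
sudo hostnamectl set-hostname k8s-worker-1  # on worker 1
sudo hostnamectl set-hostname k8s-worker-2  # on worker 2
```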
Setting Static IPs
Setting static IPs can be done through Netplan, Ubuntu's network configuration tool. Here's how you can configure a static IP:
- Identify the network interface: Find the name of your network interface using ip link. It's usually something like enp0s3.
- Edit the Netplan configuration file: Netplan configuration files are usually located in /etc/netplan/. The file name might vary (e.g., 01-network-manager-all.yaml or 50-cloud-init.yaml). Replace your_netplan_file.yaml with the actual file name:

```bash
sudo nano /etc/netplan/your_netplan_file.yaml
```

- Modify the configuration: Add or modify the following configuration, replacing the placeholders with your actual values:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:                            # Replace with your actual interface name
      dhcp4: no
      addresses: [192.168.1.10/24]     # Replace with your desired IP address and subnet mask
      gateway4: 192.168.1.1            # Replace with your gateway IP address
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]  # Replace with your DNS server addresses
```

  - enp0s3: Replace this with your actual network interface name.
  - 192.168.1.10/24: Replace this with the desired static IP address for your master node and the appropriate subnet mask.
  - 192.168.1.1: Replace this with your network's gateway IP address.
  - 8.8.8.8 and 8.8.4.4: These are Google's public DNS servers. You can use your preferred DNS servers.

  Important: Make sure the indentation is correct. YAML is sensitive to indentation. Use spaces, not tabs.
- Apply the Netplan configuration:

```bash
sudo netplan apply
```

  If you encounter any errors, run sudo netplan try to troubleshoot.
- Repeat the process for k8s-worker-1 and k8s-worker-2, assigning them different static IP addresses (e.g., 192.168.1.11 and 192.168.1.12).
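After applying the new configuration, it's worth confirming on each VM that the address, default route, and name resolution look right before moving on. A quick set of checks, using the interface name and addresses assumed above:

```bash
ip -4 addr show enp0s3     # should list the static address, e.g. 192.168.1.10/24
ip route                   # the default route should point at your gateway
ping -c 3 k8s-worker-1     # from k8s-master: confirms /etc/hosts entries and connectivity
```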
Step 4: Installing Docker
Kubernetes needs a container runtime to run containerized applications. The Docker packages below also install containerd, which recent Kubernetes releases use directly as the runtime, while the Docker CLI remains handy for building and testing images. Let's install Docker on all three VMs.
- SSH into each VM.
- Update the package index:

```bash
sudo apt update
```

- Install required packages:

```bash
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release -y
```

- Add Docker's official GPG key:

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```

- Set up the stable Docker repository:

```bash
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```

- Update the package index again:

```bash
sudo apt update
```

- Install Docker Engine:

```bash
sudo apt install docker-ce docker-ce-cli containerd.io -y
```

- Verify the Docker installation:

```bash
sudo docker run hello-world
```

  This command downloads a test image and runs it in a container. If Docker is installed correctly, you should see a message indicating that the container ran successfully.
- Add your user to the docker group: This allows you to run Docker commands without using sudo.

```bash
sudo usermod -aG docker $USER
newgrp docker
```

  You might need to log out and log back in for this change to take effect.
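One extra step is often needed before kubeadm behaves well: recent Kubernetes releases talk to containerd directly, and the kubelet generally expects containerd to use the systemd cgroup driver. The following is a hedged sketch of the usual adjustment, run on all three VMs; check the Kubernetes and containerd documentation for your exact versions.

```bash
# Regenerate a full default containerd config (the packaged one may disable the CRI plugin)
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

# Switch the runc runtime to the systemd cgroup driver that kubeadm/kubelet expect
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd so the new configuration takes effect
sudo systemctl restart containerd
sudo systemctl enable containerd
```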
Step 5: Installing Kubernetes Components
Now, let's install the Kubernetes components: kubeadm, kubelet, and kubectl on all three VMs.
- SSH into each VM.
- Update the package index:

```bash
sudo apt update
```

- Install required packages:

```bash
sudo apt install apt-transport-https ca-certificates curl -y
```

- Download the Google Cloud public signing key:

```bash
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
```

- Add the Kubernetes apt repository:

```bash
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```

- Update the package index again:

```bash
sudo apt update
```

- Install kubeadm, kubelet, and kubectl:

```bash
sudo apt install kubeadm kubelet kubectl -y
sudo apt-mark hold kubeadm kubelet kubectl
```

  The apt-mark hold command prevents these packages from being accidentally upgraded, which could cause compatibility issues.
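Two notes before initializing the cluster. First, the apt.kubernetes.io repository shown above has since been deprecated in favor of the community-owned repositories at pkgs.k8s.io, so if the packages fail to download, follow the current install instructions in the official Kubernetes documentation. Second, kubeadm normally expects swap to be disabled and a few kernel and networking settings to be in place on every node; here is a minimal sketch of those host prerequisites, based on the standard kubeadm setup, to run on all three VMs:

```bash
# Turn swap off now and keep it off across reboots (the kubelet refuses to start with swap on by default)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

# Load the kernel modules used by container networking
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
```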
Step 6: Initializing the Kubernetes Cluster
Now it's time to initialize the Kubernetes cluster on the master node.
- SSH into the k8s-master VM.
- Initialize the Kubernetes cluster:

```bash
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```

  --pod-network-cidr specifies the IP address range for the pod network. 10.244.0.0/16 is a commonly used range; whichever range you choose must not overlap with your local network. Calico's manifest defaults to 192.168.0.0/16, but recent Calico releases detect the CIDR passed to kubeadm, so 10.244.0.0/16 generally works; if pods fail to get addresses, check Calico's CALICO_IPV4POOL_CIDR setting.

  Important: The output of this command will include a kubeadm join command. Copy this command and save it somewhere safe. You'll need it to join the worker nodes to the cluster.
- Configure kubectl:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

  These commands configure kubectl to connect to the Kubernetes cluster.
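Before moving on, it's useful to confirm that the control plane is reachable through the kubeconfig you just copied. A couple of quick checks; the master will typically report NotReady until the pod network is installed in the next step:

```bash
kubectl cluster-info              # should print the control plane and CoreDNS endpoints
kubectl get nodes                 # k8s-master appears, usually NotReady until Calico is deployed
kubectl get pods -n kube-system   # etcd, kube-apiserver, scheduler, controller-manager, coredns
```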
Step 7: Deploying a Pod Network
A pod network allows pods to communicate with each other. We'll use Calico, a popular and robust networking solution.
- SSH into the k8s-master VM.
- Apply the Calico manifest:

```bash
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```

  This command downloads and applies the Calico manifest, deploying Calico to your cluster.
- Verify that the pods are running:

```bash
kubectl get pods --all-namespaces
```

  Wait until all Calico pods are in the Running state. This may take a few minutes.
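If a Calico or CoreDNS pod stays stuck in Pending, Init, or CrashLoopBackOff, the usual first stops are the pod's events and logs. A short sketch; the pod name below is a placeholder, so use the exact name shown by kubectl get pods:

```bash
# Show scheduling events and image-pull errors (pod name is illustrative)
kubectl describe pod calico-node-xxxxx -n kube-system

# Tail the logs of that pod's main container
kubectl logs calico-node-xxxxx -n kube-system

# Watch the rollout until everything settles into Running
kubectl get pods -n kube-system -w
```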
Step 8: Joining the Worker Nodes
Now, let's join the worker nodes to the Kubernetes cluster.
- SSH into each worker node (k8s-worker-1 and k8s-worker-2).
- Run the kubeadm join command that you saved earlier. It should look something like this:

```bash
sudo kubeadm join <master-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

  Replace the placeholders with the actual values from the output of kubeadm init on the master node.
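If you didn't save the join command, you don't have to re-initialize the cluster; kubeadm can print a fresh one on the master node. A small sketch, run on k8s-master:

```bash
# Creates a new bootstrap token and prints the full join command, including the CA cert hash
sudo kubeadm token create --print-join-command
```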
Step 9: Verifying the Cluster
Finally, let's verify that the Kubernetes cluster is up and running correctly.
- SSH into the k8s-master VM.
- Check the node status:

```bash
kubectl get nodes
```

  You should see all three nodes (k8s-master, k8s-worker-1, and k8s-worker-2) listed, and their status should be Ready.
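For a little more detail, the wide output shows each node's IP address, OS, and container runtime, and describe is the go-to command when a node stays NotReady. A quick sketch:

```bash
kubectl get nodes -o wide            # adds internal IPs, kernel and container runtime versions
kubectl describe node k8s-worker-1   # conditions and events explain why a node isn't Ready
```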
Step 10: Deploying a Sample Application (Optional)
To test your cluster, you can deploy a simple application, such as the Kubernetes Nginx example.
- Create a deployment:

```bash
kubectl create deployment nginx --image=nginx
```

- Expose the deployment as a service:

```bash
kubectl expose deployment nginx --port=80 --type=NodePort
```

- Get the service details:

```bash
kubectl get service nginx
```

  Look for the NodePort. It will be a port number between 30000 and 32767.
- Access the application: Open a web browser and navigate to http://<worker-node-ip>:<nodeport>. Replace <worker-node-ip> with the IP address of one of your worker nodes and <nodeport> with the NodePort you obtained in the previous step. You should see the default Nginx welcome page.
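To take the test one step further, you can scale the deployment and fetch the page from your host machine without a browser. A short sketch; 30080 stands in for whatever NodePort your service was actually assigned, and 192.168.1.11 is the worker IP used earlier in this guide:

```bash
# Scale the Nginx deployment to three replicas and watch them spread across the workers
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide

# Fetch the welcome page from the command line (replace 30080 with your actual NodePort)
curl http://192.168.1.11:30080
```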
Conclusion
Congratulations! You've successfully set up a Kubernetes cluster on Ubuntu using VirtualBox. You can now start experimenting with Kubernetes deployments, services, and other features, and this setup gives you a solid foundation for exploring container orchestration. Consult the official Kubernetes documentation for more in-depth information and advanced configuration options. Keep exploring and happy Kubernetes-ing!