Changes between Version 1 and Version 2 of minikubedeployment2023
- Timestamp: Nov 15, 2023, 6:29:43 AM (17 months ago)
minikubedeployment2023
You may continue to install Kubernetes on the VM where you already installed Docker. If you are installing this on a different machine, make sure Docker is already installed.

==== Part 1 ====

Installing kubeadm, kubelet, and kubectl:

1. Update the apt package index:

`sudo apt-get update`

2. Install packages needed to use the Kubernetes apt repository:

`sudo apt-get install -y apt-transport-https ca-certificates curl vim git`

3. Download the public signing key for the Kubernetes package repositories:

`curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg`

4. Add the Kubernetes apt repository:

`echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list`

Update the apt package index again:

`sudo apt-get update`

5. Install kubelet, kubeadm, and kubectl:

`sudo apt-get install -y kubelet kubeadm kubectl`

6. Pin the installed versions of kubelet, kubeadm, and kubectl to prevent them from being accidentally updated:

`sudo apt-mark hold kubelet kubeadm kubectl`

7. Check the installed versions:

`kubectl version --client`

`kubeadm version`

Disable Swap Space

8. Disable all swaps from /proc/swaps:

`sudo swapoff -a`

`sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab`

9. Check that swap has been disabled by running the free command:

`free -h`

Install Container runtime

10. Configure persistent loading of modules:

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

11. Load the modules at runtime:

`sudo modprobe overlay`

`sudo modprobe br_netfilter`

12. Ensure sysctl params are set:

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
[...]
EOF
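The contents of kubernetes.conf are elided above. As a rough sketch, such a file typically contains the following settings (the exact lines on the original page may differ), which let bridged traffic be seen by iptables and enable IPv4 forwarding:

# Typical /etc/sysctl.d/kubernetes.conf contents (assumed; the original is elided)
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1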
13. Reload the configs:

`sudo sysctl --system`

14. Install the required packages:

`sudo apt install -y containerd.io`

15. Configure containerd and start the service:

`sudo mkdir -p /etc/containerd`

`sudo containerd config default | sudo tee /etc/containerd/config.toml`

16. Configure the cgroup driver.
Both the container runtime and the kubelet have a property called "cgroup driver", which is essential for the management of cgroups on Linux machines.

`sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml`

17. Restart containerd:

`sudo systemctl restart containerd`

`sudo systemctl enable containerd`

`systemctl status containerd`

Initialize control plane

18. Make sure that the br_netfilter module is loaded:

`lsmod | grep br_netfilter`

The output should be similar to:
br_netfilter 22256 0
bridge 151336 1 br_netfilter

19. Enable the kubelet service:

`sudo systemctl enable kubelet`

20. Pull the container images (this will take some time):

`sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock`

21. Bootstrap the endpoint. Here we use 10.244.0.0/16 as the pod network:

`sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket /run/containerd/containerd.sock`

You will see "Your Kubernetes control-plane has initialized successfully!"

22. To start using the cluster, run the following as a regular user (for this scenario we will only use a single host):

`mkdir -p $HOME/.kube`
`sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
`sudo chown $(id -u):$(id -g) $HOME/.kube/config`

23. Check the cluster info:

`kubectl cluster-info`

24. Install a simple network plugin:

`wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml`
`kubectl apply -f kube-flannel.yml`

25. Check that the plugin is working:

`kubectl get pods -n kube-flannel`

26. Confirm the master node is ready (if you see the status as NotReady, give it around 10 minutes):

`kubectl get nodes -o wide`

27. On a master/control node, you can query the nodes with:

`kubectl get nodes`

Scheduling Pods on Kubernetes Master Node

28. By default, a Kubernetes cluster will not schedule pods on the master/control-plane node for security reasons. It is recommended you keep it this way, but for test environments you may need to schedule Pods on the control-plane node to maximize resource usage.

`kubectl taint nodes --all node-role.kubernetes.io/control-plane-`

==== Part 2 ====

Create a file simple-pod.yaml:

apiVersion: v1
kind: Pod
[...]
    ports:
    - containerPort: 80
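The body of simple-pod.yaml is elided above. A minimal complete sketch, assuming the Pod is named nginx and runs the stock nginx image (only the apiVersion, kind, and port lines are visible in the original), would look like this:

# simple-pod.yaml (sketch; the name and image are assumptions)
apiVersion: v1
kind: Pod
metadata:
  name: nginx                # assumed name
spec:
  containers:
  - name: nginx
    image: nginx             # assumed image; the tutorial uses nginx elsewhere
    ports:
    - containerPort: 80      # visible in the original fragment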
To create the Pod shown above, run the following command:

`kubectl apply -f simple-pod.yaml`

Pods are generally not created directly; they are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.

==== Part 3 ====

Deploying a Simple Web Application on Kubernetes

1. Create a Deployment Manifest:
A Deployment ensures that a specified number of pod replicas are running at any given time. Let's create a simple Deployment for a web application using the nginx image.
Save the following YAML to a file named webapp-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
[...]
        ports:
        - containerPort: 80

2. Create a Service Manifest:
A Service is an abstraction that defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery. For our web application, we'll use a NodePort service.
Save the following YAML to a file named webapp-service.yaml:

apiVersion: v1
kind: Service
[...]
    nodePort: 30080
  type: NodePort
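Both manifests above are elided. As a sketch of what they typically contain for this exercise: the service name webapp-service and the app: webapp label are assumptions, while the name webapp-deployment, the nginx image, the 2 replicas, containerPort 80, nodePort 30080, and the NodePort type come from the surrounding text and visible fragments.

# webapp-deployment.yaml (sketch; labels are assumed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment       # name referenced later in the text
spec:
  replicas: 2                   # the text expects 2 replicas
  selector:
    matchLabels:
      app: webapp               # assumed label
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx            # the text says the nginx image is used
        ports:
        - containerPort: 80     # visible in the original fragment

# webapp-service.yaml (sketch; the name is assumed from the file name)
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp                 # must match the Deployment's pod label
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080             # visible in the original fragment
  type: NodePort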
3. Deploy the Application:
Apply the Deployment and Service manifests:

`kubectl apply -f webapp-deployment.yaml`
`kubectl apply -f webapp-service.yaml`

4. Verify the Deployment:
Check the status of the Deployment and Service:

`kubectl get deployments`
`kubectl get services`

You should see your webapp-deployment with 2 replicas. Give it some time to bring both replicas online.

5. Access the Web Application:

Since we used a NodePort service, the web application should be accessible on the node's IP at port 30080.
If you're unsure of your node IPs, you can get them with:

`kubectl get nodes -o wide`

Then, in a web browser (through an ssh tunnel if you are in the UiS cloud) or using a tool like curl, access the web application:

`curl http://<NODE_IP>:30080`

You should see the default nginx welcome page, indicating that your web application is running.

[...]

Installing dependencies:
Download the rancher.io/local-path storage class:

`kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml`

Check with `kubectl get storageclass`
Make this storage class (local-path) the default:

`kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'`

1. Create a PersistentVolumeClaim for MySQL:

[...]
    requests:
      storage: 1Gi

Apply the PVC:

`kubectl apply -f mysql-pvc.yaml`

2. Deploy MySQL:
Save the following YAML to a file named mysql-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
[...]
        persistentVolumeClaim:
          claimName: mysql-pvc

Apply the Deployment:

`kubectl apply -f mysql-deployment.yaml`

3. Create a Service for MySQL:

This will allow WordPress to communicate with MySQL. Save the following YAML to a file named mysql-service.yaml:

apiVersion: v1
kind: Service
[...]

Apply the Service:

`kubectl apply -f mysql-service.yaml`

4. Deploy WordPress:
Save the following YAML to a file named wordpress-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
[...]

Apply the Deployment:

`kubectl apply -f wordpress-deployment.yaml`

5. Create a Service for WordPress:

This will expose WordPress to external traffic. Save the following YAML to a file named wordpress-service.yaml:

apiVersion: v1
kind: Service
[...]

Apply the Service:

`kubectl apply -f wordpress-service.yaml`

6. Access WordPress:

[...]

To find the NodePort assigned to WordPress:

`kubectl get svc wordpress`

Then, in a web browser with the ssh tunnel, access WordPress:

`http://<INTERNAL-IP>:<NODE_PORT>`

Part 5