Changes between Version 1 and Version 2 of minikubedeployment2023

Timestamp: Nov 15, 2023, 6:29:43 AM
Author: deepthi

You may continue to install Kubernetes on the VM where you already installed Docker. If you are installing this on a different machine, make sure Docker is already installed.

==== Part 1 ====

Installing kubeadm, kubelet, and kubectl:

1. Update the apt package index:

`sudo apt-get update`

2. Install packages needed to use the Kubernetes apt repository:

`sudo apt-get install -y apt-transport-https ca-certificates curl vim git`

3. Download the public signing key for the Kubernetes package repositories:

`curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg`

4. Add the Kubernetes apt repository:

`echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list`

Update the apt package index again:

`sudo apt-get update`

5. Install kubelet, kubeadm, and kubectl:

`sudo apt-get install -y kubelet kubeadm kubectl`

6. Pin the installed versions of kubelet, kubeadm, and kubectl to prevent them from being accidentally updated:

`sudo apt-mark hold kubelet kubeadm kubectl`

7. Check the installed versions:

`kubectl version --client`

`kubeadm version`

Disable Swap Space

8. Disable all swaps from /proc/swaps:

`sudo swapoff -a`

`sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab`
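
As a sketch of what this sed does, here is the same substitution run against a throwaway file (the fstab entry below is a made-up example): it comments out any line mentioning swap and keeps a .bak backup of the original.

```shell
# Demo of the swap-commenting sed on a scratch copy (hypothetical fstab entry)
printf '/dev/sda2 none swap sw 0 0\n' > /tmp/demo-fstab
sed -i.bak -r 's/(.+ swap .+)/#\1/' /tmp/demo-fstab
cat /tmp/demo-fstab       # prints: #/dev/sda2 none swap sw 0 0
cat /tmp/demo-fstab.bak   # the .bak file keeps the original, uncommented line
```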

9. Check that swap has been disabled by running the free command:

`free -h`

Install Container Runtime

10. Configure persistent loading of the required kernel modules:

sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
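
The tee <<EOF heredoc above simply writes the lines between the EOF markers into the target file. A minimal sketch of the same pattern against a scratch file in /tmp:

```shell
# Same tee-heredoc pattern, writing to a scratch file instead of /etc
tee /tmp/demo-k8s.conf <<EOF >/dev/null
overlay
br_netfilter
EOF
cat /tmp/demo-k8s.conf   # prints the two module names, one per line
```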

11. Load the modules at runtime:

`sudo modprobe overlay`

`sudo modprobe br_netfilter`

12. Ensure sysctl params are set:

sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
…
EOF

13. Reload the configs:

`sudo sysctl --system`

14. Install the required packages:

`sudo apt install -y containerd.io`

15. Configure containerd and start the service:

`sudo mkdir -p /etc/containerd`

`sudo containerd config default | sudo tee /etc/containerd/config.toml`

16. Configure the cgroup driver:

Both the container runtime and the kubelet have a property called "cgroup driver", which is essential for the management of cgroups on Linux machines. Switch containerd to the systemd driver:

`sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml`
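
To sketch what this substitution does, here it is applied to a scratch file containing the one relevant line from a default config.toml:

```shell
# Demo: flip SystemdCgroup from false to true on a scratch copy of the line
printf 'SystemdCgroup = false\n' > /tmp/demo-config.toml
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /tmp/demo-config.toml
cat /tmp/demo-config.toml   # prints: SystemdCgroup = true
```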

17. Restart containerd:

`sudo systemctl restart containerd`

`sudo systemctl enable containerd`

`systemctl status containerd`

Initialize the Control Plane

18. Make sure that the br_netfilter module is loaded:

`lsmod | grep br_netfilter`

The output should be similar to:

br_netfilter           22256  0
bridge                151336  1 br_netfilter

19. Enable the kubelet service:

`sudo systemctl enable kubelet`

20. Pull the container images (this will take some time):

`sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock`

21. Bootstrap the endpoint. Here we use 10.244.0.0/16 as the pod network:

`sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket /run/containerd/containerd.sock`

You should see: "Your Kubernetes control-plane has initialized successfully!"

22. To start using the cluster, you need to run the following as a regular user (for this scenario we will only use a single host):

`mkdir -p $HOME/.kube`
`sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config`
`sudo chown $(id -u):$(id -g) $HOME/.kube/config`

23. Check the cluster info:

`kubectl cluster-info`

24. Install a simple network plugin:

`wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml`
`kubectl apply -f kube-flannel.yml`

25. Check that the plugin is working:

`kubectl get pods -n kube-flannel`

26. Confirm the master node is ready (if you see the status NotReady, give it around 10 minutes):

`kubectl get nodes -o wide`

27. On a master/control node, you can query the nodes with:

`kubectl get nodes`

Scheduling Pods on the Kubernetes Master Node

28. By default, a Kubernetes cluster will not schedule pods on the master/control-plane node for security reasons. It is recommended you keep it this way, but for test environments you need to schedule Pods on the control-plane node to maximize resource usage. The trailing - in the command below removes the taint:

`kubectl taint nodes --all node-role.kubernetes.io/control-plane-`

==== Part 2 ====
Create a file simple-pod.yaml:

apiVersion: v1
kind: Pod
…
    ports:
    - containerPort: 80

To create the Pod shown above, run the following command:

`kubectl apply -f simple-pod.yaml`

Pods are generally not created directly; they are created using workload resources. See "Working with Pods" for more information on how Pods are used with workload resources.

==== Part 3 ====

Deploying a Simple Web Application on Kubernetes

1. Create a Deployment Manifest:

A Deployment ensures that a specified number of pod replicas are running at any given time. Let's create a simple Deployment for a web application using the nginx image.

Save the following YAML to a file named webapp-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
…
        ports:
        - containerPort: 80

2. Create a Service Manifest:

A Service is an abstraction that defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery. For our web application, we'll use a NodePort service. NodePort values must fall within the cluster's NodePort range (30000-32767 by default).

Save the following YAML to a file named webapp-service.yaml:

apiVersion: v1
kind: Service
…
      nodePort: 30080
  type: NodePort

3. Deploy the Application:

Apply the Deployment and Service manifests:

`kubectl apply -f webapp-deployment.yaml`
`kubectl apply -f webapp-service.yaml`

4. Verify the Deployment:

Check the status of the Deployment and Service:

`kubectl get deployments`
`kubectl get services`

You should see your webapp-deployment with 2 replicas. Give it some time to bring both replicas online.

5. Access the Web Application:

Since we used a NodePort service, the web application should be accessible on the node's IP at port 30080. If you're unsure of your node IPs, you can get them with:

`kubectl get nodes -o wide`

Then, in a web browser (through an ssh tunnel if you are in the UiS cloud) or using a tool like curl, access the web application:

`curl http://<NODE_IP>:30080`

You should see the default nginx welcome page, indicating that your web application is running.

     
Installing dependencies:

Download the rancher.io/local-path storage class:

`kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml`

Check with `kubectl get storageclass`.

Make this storage class (local-path) the default:

`kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'`
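
If you want to sanity-check the patch payload before sending it to the API server, one option (assuming python3 is available on the host) is to pipe it through a JSON pretty-printer, which fails on invalid JSON:

```shell
# Validate the annotation patch locally; json.tool exits non-zero on bad JSON
echo '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' \
  | python3 -m json.tool
```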

1. Create a PersistentVolumeClaim for MySQL:

…
    requests:
      storage: 1Gi

Apply the PVC:

`kubectl apply -f mysql-pvc.yaml`

2. Deploy MySQL:

Save the following YAML to a file named mysql-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
…
        persistentVolumeClaim:
          claimName: mysql-pvc

Apply the Deployment:

`kubectl apply -f mysql-deployment.yaml`

3. Create a Service for MySQL:

This will allow WordPress to communicate with MySQL. Save the following YAML to a file named mysql-service.yaml:

apiVersion: v1
kind: Service
…

Apply the Service:

`kubectl apply -f mysql-service.yaml`

4. Deploy WordPress:

Save the following YAML to a file named wordpress-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
…

Apply the Deployment:

`kubectl apply -f wordpress-deployment.yaml`

5. Create a Service for WordPress:

This will expose WordPress to external traffic. Save the following YAML to a file named wordpress-service.yaml:

apiVersion: v1
kind: Service
…

Apply the Service:

`kubectl apply -f wordpress-service.yaml`

6. Access WordPress:

To find the NodePort assigned to WordPress:

`kubectl get svc wordpress`

Then, in a web browser with the ssh tunnel, access WordPress:

`http://<INTERNAL-IP>:<NODE_PORT>`

==== Part 5 ====