Changes between Initial Version and Version 1 of minikubedeployment2023

Timestamp: Nov 13, 2023, 6:55:24 AM
Author: deepthi

You may continue to install Kubernetes on the VM where you already installed Docker. If you are installing this on a different machine, make sure Docker is already installed.
Part 1
Installing kubeadm, kubelet, and kubectl:
1. Update the apt package index:
sudo apt-get update
2. Install packages needed to use the Kubernetes apt repository:
sudo apt-get install -y apt-transport-https ca-certificates curl vim git
3. Download the public signing key for the Kubernetes package repositories (if the /etc/apt/keyrings directory does not exist, create it first with sudo mkdir -p -m 755 /etc/apt/keyrings):
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
4. Add the Kubernetes apt repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update the apt package index again:
sudo apt-get update
5. Install kubelet, kubeadm, and kubectl:
sudo apt-get install -y kubelet kubeadm kubectl
6. Pin the installed versions of kubelet, kubeadm, and kubectl to prevent them from being accidentally updated:
sudo apt-mark hold kubelet kubeadm kubectl
7. Check the installed versions:
kubectl version --client

kubeadm version

Disable Swap Space
8. Disable all swaps from /proc/swaps:
sudo swapoff -a

sudo sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
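To see what the sed command does before pointing it at the real /etc/fstab, you can try the same pattern on a throwaway copy; the file contents below are purely illustrative:

```shell
# Create a sample fstab-style file (illustrative content only)
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-abcd / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same pattern as the real command: comment out any line mentioning swap,
# keeping a backup of the original as /tmp/fstab.sample.bak
sed -i.bak -r 's/(.+ swap .+)/#\1/' /tmp/fstab.sample

cat /tmp/fstab.sample
```

The root filesystem line is left untouched; only the swap line is commented out, and the original file is preserved with a .bak suffix.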
9. Check that swap has been disabled by running the free command:
free -h
 
Install Container runtime
10. Configure persistent loading of the required kernel modules:
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
11. Load the modules at runtime:
sudo modprobe overlay

sudo modprobe br_netfilter
12. Ensure the required sysctl params are set:
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
13. Reload the configs:
sudo sysctl --system
14. Install the required packages:
sudo apt install -y containerd.io
15. Configure containerd and start the service:
sudo mkdir -p /etc/containerd

sudo containerd config default | sudo tee /etc/containerd/config.toml
16. Configure the cgroup driver:
Both the container runtime and the kubelet have a property called "cgroup driver", which is essential for the management of cgroups on Linux machines. On systemd-based distributions, switch containerd to the systemd cgroup driver, which is what kubeadm configures the kubelet to use by default:
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
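After the sed edit, the runc options section of /etc/containerd/config.toml should contain SystemdCgroup = true; roughly like the excerpt below (the exact table path can vary between containerd versions):

```toml
# Excerpt of /etc/containerd/config.toml after the edit
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```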
17. Restart containerd:
sudo systemctl restart containerd

sudo systemctl enable containerd

systemctl status containerd
Initialize control plane
18. Make sure that the br_netfilter module is loaded:
lsmod | grep br_netfilter
The output should be similar to:
br_netfilter           22256  0
bridge                151336  1 br_netfilter
19. Enable the kubelet service:
sudo systemctl enable kubelet
20. Pull the container images (this will take some time):
sudo kubeadm config images pull --cri-socket /run/containerd/containerd.sock
21. Bootstrap the endpoint. Here we use 10.244.0.0/16 as the pod network:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket /run/containerd/containerd.sock
You will see "Your Kubernetes control-plane has initialized successfully!"
22. To start using the cluster, run the following as a regular user (for this scenario we will only use a single host):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
23. Check the cluster info:
kubectl cluster-info
24. Install a simple network plugin:
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
25. Check that the plugin is working:
kubectl get pods -n kube-flannel
26. Confirm the master node is ready (if you see the status as NotReady, give it around 10 minutes):
kubectl get nodes -o wide
27. On a master/control node, you can query the nodes with:
kubectl get nodes
Scheduling Pods on the Kubernetes Master Node
28. By default, a Kubernetes cluster will not schedule pods on the master/control-plane node for security reasons. It is recommended you keep it this way, but in test environments you may want to schedule Pods on the control-plane node to maximize resource usage:
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Part 2
Create a file simple-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
To create the Pod shown above, run the following command:
kubectl apply -f simple-pod.yaml
Pods are generally not created directly; they are usually created through workload resources. See "Working with Pods" for more information on how Pods are used with workload resources.
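If you prefer not to open an editor, the same manifest can be written with a heredoc, the way the config files in Part 1 were:

```shell
# Write simple-pod.yaml without opening an editor
cat > simple-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
EOF

# Sanity-check that the file was written as expected
grep 'image:' simple-pod.yaml
```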
Part 3
Deploying a Simple Web Application on Kubernetes
1. Create a Deployment Manifest:
A Deployment ensures that a specified number of pod replicas are running at any given time. Let's create a simple Deployment for a web application using the nginx image.
Save the following YAML to a file named webapp-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
2. Create a Service Manifest:
A Service is an abstraction that defines a logical set of Pods and enables external traffic exposure, load balancing, and service discovery. For our web application, we'll use a NodePort service.
Save the following YAML to a file named webapp-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
  type: NodePort
3. Deploy the Application:
Apply the Deployment and Service manifests:
kubectl apply -f webapp-deployment.yaml
kubectl apply -f webapp-service.yaml
4. Verify the Deployment:
Check the status of the Deployment and Service:
kubectl get deployments
kubectl get services
You should see your webapp-deployment with 2 replicas. Give it some time for both replicas to come online.
5. Access the Web Application:
Since we used a NodePort service, the web application should be accessible on the node's IP at port 30080.
If you're unsure of your node IPs, you can get them with:
kubectl get nodes -o wide
Then, in a web browser (through an SSH tunnel if you are in the UiS cloud) or using a tool like curl, access the web application:
curl http://<NODE_IP>:30080
You should see the default nginx welcome page, indicating that your web application is running.
 
Part 4
Deploying WordPress and MySQL on Kubernetes

Installing dependencies:
Download the rancher.io/local-path storage class:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
Check with kubectl get storageclass
Make this storage class (local-path) the default:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

1. Create a PersistentVolumeClaim for MySQL:

MySQL needs persistent storage for its data. Save the following YAML to a file named mysql-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Apply the PVC:
kubectl apply -f mysql-pvc.yaml
2. Deploy MySQL:
Save the following YAML to a file named mysql-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "password"
        - name: MYSQL_DATABASE
          value: "wordpress"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
Apply the Deployment:
kubectl apply -f mysql-deployment.yaml
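A plain-text password in the manifest is acceptable for this exercise, but in a real deployment you would normally keep it in a Secret and reference it from the container spec. A minimal sketch (mysql-secret and root-password are hypothetical names):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret        # hypothetical name
type: Opaque
stringData:
  root-password: "password"
```

The MYSQL_ROOT_PASSWORD env entry would then use valueFrom.secretKeyRef (name: mysql-secret, key: root-password) instead of a literal value.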

3. Create a Service for MySQL:

This will allow WordPress to communicate with MySQL. Save the following YAML to a file named mysql-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306

Apply the Service:
kubectl apply -f mysql-service.yaml

4. Deploy WordPress:
Save the following YAML to a file named wordpress-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:latest
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_USER
          value: "root"
        - name: WORDPRESS_DB_PASSWORD
          value: "password"
        ports:
        - containerPort: 80

Apply the Deployment:
kubectl apply -f wordpress-deployment.yaml

5. Create a Service for WordPress:

This will expose WordPress to external traffic. Save the following YAML to a file named wordpress-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort

Apply the Service:
kubectl apply -f wordpress-service.yaml

6. Access WordPress:

Since we used a NodePort service without specifying a nodePort, WordPress should be accessible on the node's IP at a dynamically allocated port in the 30000-32767 range.
To find the NodePort assigned to WordPress:

kubectl get svc wordpress

Then, in a web browser with the SSH tunnel, access WordPress:

http://<INTERNAL-IP>:<NODE_PORT>
 
Part 5
Convert your Docker deployment into a Kubernetes deployment. You may compose your own Service and Deployment manifests as needed. Use the Docker images you used previously when creating the pods/deployments.
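As a starting point, a container you previously ran with something like docker run -p maps roughly to a Deployment plus NodePort Service in the same shape as the ones in Part 3. A minimal sketch, where myapp and your-image are placeholders for your own names and images:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                      # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: your-image:latest   # one of your previously built Docker images
        ports:
        - containerPort: 80        # the port your container listens on
```

Pair it with a NodePort Service selecting app: myapp, as in Part 3, to reach it from outside the cluster.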
 
Additional ref: https://kubebyexample.com/