Certified Kubernetes Administrator Exam Preparation
language: EN
WEBVTT Hello, I'm back. Welcome back, and thanks for those confirmations. We are going to continue our discussion. We already discussed Pods and the ReplicaSet, and there is one top-level resource we are yet to discuss: the Deployment. Why do we need this resource sitting on top, when the ReplicaSet already has its duty of keeping the desired number of Pods running? That is what we discuss now. From the Deployment's perspective we get a capability called rollout. Say Evan built an application, evan-app, and the first version he released is 1.0. He created a Deployment with 100 replicas, so 100 Pods are running the 1.0 image. Then he made bug fixes and enhancements and released a new image, version 2.0. He wants to replace all 100 Pods running 1.0 with Pods running 2.0: all the old Pods deleted, 100 new Pods created. That is nothing but rolling out a new version of the application. (Just a second, someone is at my door... apologies for that, I'm back.) What we are targeting is how to perform this rollout with zero downtime, even during peak business hours. At the Deployment level we can specify a strategy, and there are two: Recreate and RollingUpdate. If you specify Recreate, all 100 old version-1.0 Pods are deleted first, and only once they are all gone are the 100 new version-2.0 Pods created. The time it takes to delete the old Pods, plus the time for all 100 new Pods to start, pass their liveness and readiness probes and become ready to accept traffic, means you will obviously have some downtime with the Recreate strategy. If you choose RollingUpdate instead, you can make use of two properties, maxUnavailable and maxSurge, for example a maxUnavailable of 20 percent. You can also view the YAML that Kubernetes stores for a Deployment in its etcd database, with all the defaults filled in, by adding -o yaml to the get command; there you can see the default strategy, RollingUpdate, with its maxSurge and maxUnavailable values included.
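A minimal sketch of a Deployment using the RollingUpdate strategy; the names, labels, image tag and percentages are illustrative, not taken from the actual course files:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: evan-app
spec:
  replicas: 100
  strategy:
    type: RollingUpdate        # the default; the alternative is Recreate
    rollingUpdate:
      maxUnavailable: 20%      # at most 20 of the 100 Pods may be down at any moment
      maxSurge: 25%            # up to 25 extra Pods may exist above the desired count
  selector:
    matchLabels:
      app: evan-app
  template:
    metadata:
      labels:
        app: evan-app
    spec:
      containers:
      - name: evan-app
        image: evan-app:1.0    # changing this to evan-app:2.0 and applying triggers a rollout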
Those RollingUpdate settings, maxSurge and maxUnavailable, control how the rolling update proceeds. That is feature number one, the strategy. The second feature the Deployment provides is the rollout itself: kubectl rollout, for one specific Deployment. For evan-app, Evan may have performed multiple rollouts in the past for this single Deployment: he initially deployed version 0.7, then 0.9, 1.0, 1.5, and now he is deploying 2.0. For each release he just updates the image version and rolls out a new version, and you can view the whole history of rollouts performed for that Deployment with the history command. To move to version 2.0 he opens the YAML file, updates the image, saves and applies it; that triggers the rollout process that replaces the old Pods with Pods running the new image. While the rollout is in progress you can view its status: how many Pods are currently being replaced, how many old and how many new Pods exist. You can even pause a rollout that is in progress: say your monitoring tool starts firing a lot of alerts and you suspect the rollout may be the cause, you pause it, and if at that point 50 percent of the Pods are replaced and 50 percent are not, they stay exactly in that state. If troubleshooting shows the alerts were caused by a different issue, you resume the rollout. And if, an hour after deploying version 2.0, user incidents pile up and you find a serious bug in that version, you can undo the rollout, which replaces all 100 Pods with the previous version; you can go back to the immediately previous revision or to any revision in the past using the undo command. So for one Deployment you can view the history of all past rollouts, view the status of the rollout currently in progress, pause and resume it, and undo to the previous or to any earlier revision. Let me quickly show those rollout commands and then we are all set for this topic.
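The rollout commands in question, shown against the illustrative evan-app Deployment (a sketch, not the exact demo session):

kubectl set image deployment/evan-app evan-app=evan-app:2.0   # one way to trigger a rollout
kubectl rollout status deployment/evan-app                    # progress of the rollout in flight
kubectl rollout history deployment/evan-app                   # all past revisions
kubectl rollout pause deployment/evan-app                     # freeze a rollout mid-way
kubectl rollout resume deployment/evan-app                    # continue it
kubectl rollout undo deployment/evan-app                      # go back to the previous revision
kubectl rollout undo deployment/evan-app --to-revision=2      # or back to a specific revision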
on 2022-10-26
language: EN
WEBVTT For the exercise, you are going to deploy two applications: say a Deployment named web1 with three Pods, and one more named web2, each creating a Deployment and a ReplicaSet. You can copy the snippets using the copy icon at the right corner of the snippet window in the document. Shahid, maybe I can help you here; you can open the browser. I think it's time for our coffee break, so you can make it a working break if you want, or we can continue afterwards. How do you all feel about the pace so far, are you able to follow all the concepts, any specific feedback? Great, thank you, Darlington. Please be back by 10:45, and thank you for listening. Welcome back, everyone; please raise your hands in Teams if you are back at your desk. Thanks, Colin; maybe the others are yet to join. There we go, good. Before our break we discussed Services and Ingress; now we continue with the next topic, a simple one: autoscaling. Under autoscaling we will talk about three kinds: HPA, horizontal pod autoscaling; VPA, vertical pod autoscaling; and CA, cluster autoscaling. We already touched on the first ingredient: if you remember, while defining a Pod specification we set resource limits and requests for both CPU and memory. Arriving at those values is a time-consuming benchmarking activity that developers and administrators have to do, and then they have to keep the values up to date. Instead of doing that yourself, if you configure VPA for your application it takes care of right-sizing it: by observing the application's past metrics and its performance it can suggest values for limits and requests, or, with the proper configuration in place, let VPA update those values for you, for both CPU and memory. It is about growing your application vertically, adding more resources to it as it consumes more. That is VPA, and remember it is mostly something you get from managed cloud Kubernetes implementations, for example Google Kubernetes Engine; the documentation there has the same definitions and some examples.
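For reference only, a VPA object looks roughly like this; it assumes the VerticalPodAutoscaler CRD and controllers from the Kubernetes autoscaler project are installed (they are not part of a plain cluster), and the target name is illustrative:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: evan-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: evan-app
  updatePolicy:
    updateMode: "Auto"    # "Off" would only produce recommendations without applying them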
HPA is the one most commonly used; almost every application team uses it in its cluster. The idea: say Evan created evan-app and right now it has three replicas, but its traffic varies throughout the day, quite busy from 6 to 8 in the morning, lighter afterwards, heavy again from 10 to 2. One way to cope with the heavy load is to scale the Deployment manually: he can execute kubectl scale on the Deployment and set replicas to 100 so that 100 Pods run instead of three, and later scale down to 50, or to 5, running the scale command by hand whenever he wants to scale up or down. To make that decision he would look at some kind of dashboard, say the average CPU utilization across those Pods: he scales up when it is high and, after a while, when it is idle, scales down. That is exactly the process we are going to automate; why do we need a human here? Creating an HPA tells the autoscaler component which metric to observe, say the CPU percentage of those three Pods: if the average CPU utilization across them exceeds the threshold, say 75 percent, that is when it scales up, spinning up additional Pods by sending a scale instruction to the Deployment resource, the same instruction you were sending manually before. When things go idle again it sends a scale-down instruction. You also give it a minimum and a maximum number of instances, say it can scale up to a maximum of 20 Pods and keep at least 2, so at any point in time you have at least two replicas. One caveat: if you manually scale a Deployment that an HPA is managing, you are trying to control the scale and the HPA is trying to control the scale, and that conflict ends up in undesired behaviour; so once an HPA is configured, better avoid scaling that Deployment on your own. You can do this imperatively or declaratively; let's quickly see the commands. Pick an existing Deployment: kubectl get deploy, then kubectl get hpa (hpa is the short name for HorizontalPodAutoscaler); there is no HPA yet. kubectl expose would create a Service, but we want an HPA, so the command is kubectl autoscale: autoscale the Deployment named my-nginx with --min=2, --max=20 and --cpu-percent=70. If you want, add --dry-run=client -o yaml to see the manifest first: kind HorizontalPodAutoscaler, maxReplicas, minReplicas, a scaleTargetRef that points at the my-nginx Deployment because you are creating it for that Deployment, and a target CPU utilization of 70. Apply it, and kubectl get hpa shows the new HPA; for now the current value is unknown, and once it gets access to the metrics it will show the current utilization against the target and the replica count will be updated automatically.
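The imperative command and, roughly, its declarative equivalent (shown here in the autoscaling/v2 form; older kubectl versions print a simpler autoscaling/v1 manifest from a dry run):

kubectl autoscale deployment my-nginx --min=2 --max=20 --cpu-percent=70
kubectl get hpa

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-nginx
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70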
None of these scale-up or scale-down operations is instantaneous; there is a cool-off period, configurable in minutes or seconds, so if there is a spike the autoscaler observes it for a while before making the scaling decision; nothing is immediate here. You can describe the HPA to see its details, and right now it also shows a warning: invalid metric, failed to get CPU metric. The HPA is created, but I don't think it is doing its duty, because it has no access to the CPU percentage metric. In minikube a metrics-server add-on ships with the installation, but it is in the disabled state, so we need to enable the metrics server first and then try again. The Kubernetes documentation has an HPA walkthrough for minikube: it creates a Deployment, a Service for that Deployment, and an HPA targeting 50 percent CPU with a minimum and maximum. minikube addons list shows metrics-server there, disabled; enable it and then give the sample a quick try to see the HPA in action. I will give you time for that; let me first complete CA, it is just a small part. The last kind of autoscaling is CA, the cluster autoscaler, and again this is applicable in the cloud. Looking at Google's documentation: the cluster autoscaler automatically resizes your cluster's node pools based on the demand of your workloads; that part we already know. While creating the cluster you enable autoscaling and provide a minimum and a maximum node count, and that is all the configuration you need to do. That is cluster autoscaling. With this, I am going to give you five to ten minutes to try the hands-on for HPA, and maybe read the Kubernetes documentation on VPA and CA. I have already included a link for HPA in your Etherpad; in your minikube you first need to run the add-ons list, make sure metrics-server is running, enable it if not, and then follow the steps in that document; the exact commands are shown below.
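The minikube commands for that, roughly (it can take a minute or two for metrics to start flowing):

minikube addons list                      # check whether metrics-server is enabled
minikube addons enable metrics-server     # enable it
kubectl top pods                          # confirms metrics are being collected
kubectl describe hpa my-nginx             # targets should change from <unknown> to real values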
Go ahead. Good, I can see Darlington was able to get it; Shahid, Frank, good job. Last two minutes to complete this activity... all right, time is up for this activity, let's move to our next section. What we just completed is autoscaling, and we even tried the hands-on for horizontal pod autoscaling. The next section is ConfigMaps and Secrets, another simple, easy section. In your Kubernetes cluster you have multiple nodes, and when Evan has an application to run it gets scheduled to one of the nodes and its container spins up there; that is the flow. For this, Evan submits an image to the cluster, an image he built from the source code, and he should not have to build a separate dev image and a separate QA image just because the configuration differs between environments. There are two ways to inject configuration: as environment variables, or as configuration files that you mount or load into a specific directory inside the container. For both of these you need ConfigMaps, and for sensitive configuration such as usernames and passwords we use Secrets; both take a similar syntax, which we will discuss. It is a two-step process. Step one is bringing those configurations into Kubernetes: if you remember, the cluster has an etcd database, and bringing the config in means taking the environment variables, the application configuration files and so on and storing them there. Step two is modifying the Pod specification so it can read from that stored configuration and use the values for environment variables or for files; in other words, referencing the configs in your Pod specification. Two steps: bring the config into Kubernetes, then reference it in the Pod spec. Let's see this in action so it is hopefully clear. kubectl get configmap, or cm for short, shows one ConfigMap already in use, but none that we created for our application. To create one we use kubectl create configmap, the same way you create any other resource: give your ConfigMap a name, say my-config, for your nginx application. You may have one file, another file, and another file; a single ConfigMap can hold multiple files required by the application. You may also have an env file, a .env file that contains only key=value pairs, and you can add literals, key-value pairs you might use to feed environment variables, say owner=evan. So the flags you will use are --from-file, --from-env-file and --from-literal, and you can repeat them for as much data as you have. There is a limitation of roughly 1 MB of data per ConfigMap; I don't think we will exceed that, but if you do you can use another ConfigMap, and in general, if your application has a very large amount of configuration, another pattern is to package the configuration files into their own container image and run it alongside the main container as a sidecar. Those are rare cases; for normal configuration a ConfigMap will do. In the demo there was a formatting issue with the data file, as expected; after fixing it, the command creates a ConfigMap named my-config from a .txt file and from two literals, extra.param=extra-value and another.param=another-value. That .txt file is just a dummy file in the working directory with a couple of key-value pairs, so the command works without issues. Executing it creates the ConfigMap with the first, second and third pieces of data, so step one is complete: we successfully brought that configuration into Kubernetes, and the data is now stored in the etcd database.
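The creation commands look roughly like this; the file names and keys are illustrative:

kubectl create configmap my-config \
  --from-file=app.properties \
  --from-env-file=app.env \
  --from-literal=extra.param=extra-value \
  --from-literal=another.param=another-value
kubectl get configmap my-config -o yaml     # shows the stored data
kubectl describe configmap my-config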
Step two is using these configurations in the Pod specification, so let's open a simple Pod specification that uses the ConfigMap. I mentioned there are two ways: as an environment variable and as a file. First, as an environment variable. Usually you set it with a name and a value, say DB_USER with a hard-coded value, but this value is going to differ per environment, so instead of hard-coding it in the Pod specification we say the value comes from a ConfigMap key reference: the name of the ConfigMap, my-config, the one we just created, and within that ConfigMap the key to look up. Kubernetes goes to that ConfigMap, looks for that key, gets its value and assigns it to the container's environment variable, and the same goes for the other variables. That is the first part: referring to the ConfigMap and setting environment variables from it. The second part of the story is loading files. To set environment variables you do not need to create volumes, you refer to the ConfigMap directly; but if you want to load some files into a directory inside the container, you create a volume of type configMap and mount it at that directory. With this understanding, let's go back and apply the YAML file; it should create the Pod. Note that in the Pod specification we are referring to the ConfigMap, and if that ConfigMap did not exist the Pod would end up in an error, CreateContainerConfigError; in our case we already created it, so the Pod picks it up and works perfectly fine. How can you verify? Get a shell into the container and print the environment variables to see that their values came from the ConfigMap, and go to the config directory to see all the ConfigMap contents loaded there as files. As this is the demo application, you can also verify from its UI: port-forward the Pod, open the browser, and all the environment variables are printed in its server tab. You can also specify a subPath: if from this ConfigMap you want to load only one particular file, then where we just gave the volume name you can add that additional specification so only a part of it is mounted. I hope that is clear; a consolidated sketch of such a Pod specification follows, and then let me continue with Secrets and give you some time afterwards to try them as well.
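A minimal sketch of such a Pod specification, assuming a ConfigMap named my-config that contains a key db.user; the Pod and image names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: kod-config
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_USER
      valueFrom:
        configMapKeyRef:
          name: my-config      # go to this ConfigMap...
          key: db.user         # ...look up this key and use its value
    volumeMounts:
    - name: config-volume
      mountPath: /config       # every key appears as a file in this directory
      # add subPath: <key> to mount just one file instead of the whole ConfigMap
  volumes:
  - name: config-volume
    configMap:
      name: my-config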
Secrets take the same format as ConfigMaps and are created the same way, and if you want to set environment variables from them the syntax is similar, with the values now coming from a Secret. So far every application we deployed, the nginx application and the demo application, used images hosted in a public repository, so we could pull them without any authentication; for private registries and for other sensitive data, the Secret resource is what you reach for. After fixing some formatting, we create the Secret from two files that already reside in your working-files folder, and if you want you can include some --from-literal entries as well, that will work the same way. Let's first see whether any Secret exists in this namespace: nothing. Let's create our first Secret, type Opaque. If I describe it, note that with the default installation the values are not shown in plain text. Now a Pod needs to use the Secret: I want to load this certificate and key inside the container so the application can serve TLS, and for that I will use the Secret as a volume. Here we are not setting any environment variables; we use the traditional way of loading files through volumes, and, as I mentioned earlier, secret is also one of the supported volume types: you give the secretName and mount that volume at the TLS directory inside the container, as simple as that. Let me apply this YAML file, and remember that if a Pod refers to a ConfigMap or a Secret, those objects must exist before you create the Pod, or else the Pod creation will fail. kubectl get pods shows it is already running, so I am going to test it with kubectl port-forward, this time using the HTTPS port, 8443, because the application should now be exposed over HTTPS. Accessing it over https gives a warning, because a self-signed certificate is used by the site; that is a common message with self-signed certificates, so accept the risk and continue. The files are loaded and we are able to access the application over HTTPS; if you want you can also exec in, go to the file system, and see the certificate and key under the TLS directory. Let me stop it there. From the Secrets perspective, what you need to know is that out of the box Kubernetes provides a Secret resource for sensitive information, and by default the values are only encoded, not strongly protected. That is all we discuss on ConfigMaps and Secrets; with this the section comes to an end, any questions? In general, in production we use systems like HashiCorp Vault and integrate them with Kubernetes to manage secrets: instead of storing the secret in etcd, the secrets are maintained in Vault and injected into your application at runtime, which provides a lot of capability compared to the built-in Secret; that is a common implementation, as you guessed. You can read more about it later; I am sure your organization has something like that. You are right, that question is a bit out of context, but I will try to cover it; we use multiple approaches and I will make sure it is covered. Any other questions? If not, please do the Secrets exercise as your working exercise along with your lunch break; I am giving you 15 minutes for it, and a sketch of what it involves is below.
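A sketch of the exercise, with illustrative names for the Secret, files and image (the demo application's actual image differs):

kubectl create secret generic app-tls --from-file=server.crt --from-file=server.key
kubectl describe secret app-tls     # values are shown only as byte counts, not in plain text

apiVersion: v1
kind: Pod
metadata:
  name: kod-tls
spec:
  containers:
  - name: app
    image: my-tls-app:latest        # illustrative; any image that serves TLS from /tls
    volumeMounts:
    - name: tls
      mountPath: /tls
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: app-tls

kubectl port-forward pod/kod-tls 8443:8443   # then browse to https://localhost:8443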
If you missed any exercise from the morning, please make use of this time to catch up as well. Thank you, and thanks for listening.
on 2022-10-26
language: EN
WEBVTT Hello, welcome back. With this change I am going to apply the YAML file, and the claim already shows the Bound state: the existing PV we created is bound to it. kubectl get pv lets you verify that; my-pvc in this namespace is bound to that PV, which means all good. If you now submit the SQL server YAML whose Pod specification names the PVC my-pvc, the SQL server will work perfectly fine and all the data it writes will go to the mounted volume. What is more important here is understanding what a PV is and that what we did is static provisioning: create a PV, with its access mode, reclaim policy and so on, which we will discuss after all of this; then, for your application's need, submit a PVC, and reference that PVC in your Pod specification. If you submit a PVC and it is in an unbound state, a Pod that uses it will fail to start, because it tries to mount the volume and no volume is mapped to the claim; so first make sure it is Bound, then run your application. All right. Now I am going to create one more claim: copy the file, rename the claim my-pvc-test, remove the labels to keep it simple, access mode ReadWriteMany, 1 Gi of storage, and this time I also remove the storageClassName; 1 Gi is the only requirement I am asking of Kubernetes through this PVC. Let's see what happens if I apply it: kubectl get pvc shows this one is also Bound, but it is bound to a volume whose name looks random, with storage class standard. kubectl get pv now lists two volumes: the one we created ourselves, and one that was dynamically provisioned by Kubernetes using the storage class named standard. kubectl get sc explains why: as part of the minikube installation there is one storage class, standard, which is set as the default and uses the minikube hostpath provisioner. I submitted a PVC without specifying any storage class name, so the default one executed, dynamically provisioned a volume and bound it to my PVC. What this means in practice: say your team is going to use storage from different providers, an Azure disk, a GCE persistent disk, and the host path. For each of these you will have a provisioner, basically installed as a plugin, and the provisioner is the component that actually creates the volume on that backend. Referring to a provisioner, you create a StorageClass: say one named azure, one named gcp, one named minikube. Then, when users submit a PVC, they can specify in their YAML which storage class they want to use; a sketch of both the static and the dynamic flavour is below.
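A sketch of both flavours, with illustrative names and a hostPath backend for the static case:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /data/my-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""       # empty string: bind only to a pre-created (static) PV
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-test
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # no storageClassName: the default class (standard on minikube) provisions a PV dynamically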
For Evan's application he wants it on the Google platform, so he can simply give the storage class gcp; the moment he submits the claim, that StorageClass and its provisioner are used, a volume is dynamically created there and bound to his PVC. If Colin submits a claim whose storage class name is azure, then the Azure provisioner is used to dynamically provision the volume. And if Darlington submits without a storage class name, then out of these three you can set one as the default, so anybody who omits the class gets a volume from that default backend, the Azure disk in this example. So you either statically provision the PV or dynamically provision it: we have already seen one sample of static provisioning, and we just saw how submitting a PVC let the standard provisioner dynamically provision a volume. By all means, the PVC must end up mapped to a PV, or else the application Pod that uses the claim will not start. With this I am going to pause here for any questions. Is this clear, guys? Resizing the volume, that was the question, right? Yes, you can resize it; the PVC binding itself is not the problem, it takes effect at the time it is bound, and mostly you end up updating the PVC with the extra size and submitting it again. Say you used dynamic provisioning and asked for 1 Gi, as I showed, and now you want more space: you make the update in the claim and the already-allocated PV gets expanded, but your backend storage provider must support that kind of expansion. It is not Kubernetes that does it; some storage providers support it and some don't, though most cloud provider solutions do. The other question was about the reclaim policy, and that is exactly about what should happen when you delete a PVC. There are three values: Retain, Recycle and Delete. For dynamically provisioned volumes, like the second one you created, the default is Delete, which means that if you delete the PVC the associated PV is automatically deleted as well. If you don't want that, because the PV is holding data and deleting a claim should not delete the volume, you specify the Retain policy: then deleting the PVC does not delete the associated PV, it moves it to the Released phase so you can manually recover or clean up the data. Recycle, let me check, means the volume is scrubbed and put back into the pool of unbound volumes once it is released from the claim: the data is deleted and the volume becomes available again, so if any new PVC comes along and matches, that PV can get bound to it. Retain is the policy we most commonly use. A sketch of where these settings live is below.
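Both of those knobs sit on the PV and on the StorageClass; a sketch of a minikube-style class with the relevant fields (values illustrative, and expansion also needs backend support):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete          # Retain keeps the PV (Released) when its PVC is deleted
allowVolumeExpansion: true     # required before a PVC created from this class can be resized

# On an individual PV the policy can be patched directly:
# kubectl patch pv my-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'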
Okay, that's it. Any other questions? If there are no questions, please give it a try; let me share the two YAML files I used here so you have them as a reference and know what to do. Last two minutes to complete this activity... all right, time is up for this activity, let's go to the next section. What we just completed is PV, PVC and then the StorageClass. The next resource we are going to discuss is pretty straightforward, a very simple resource, and it has a valid use case. In your cluster, say you have three nodes, and Evan is the administrator. He wants to see how each of these nodes is performing, so he wants to run some kind of software on every machine that gathers metrics about that node: one copy here, one on this machine, one on that machine, and then view all the metrics gathered from these machines in another tool, with some visualizations. More often than not this is an agent-like or daemon-like tool; or, similarly, he wants to run a log collector, one log collector on every node. In the Kubernetes world everything runs as a Pod, so to deploy this application Evan might go with a Deployment, setting replicas to three. The ReplicaSet will create three Pods, but will one copy be placed on every node? That is not the guarantee; it will try to distribute them, but setting the number of replicas equal to the number of nodes does not guarantee exactly one Pod on each node. For Evan's metrics application, or a logging collector, we need that guarantee: one copy of the Pod on every node, because metrics collectors and log collectors are applications of that nature. For this kind of agent-like or daemon-like process, the Kubernetes community came up with a resource called the DaemonSet. A DaemonSet will also create Pods, but the reconciliation-loop logic of the DaemonSet is: keep one Pod on every node. If you delete one of those Pods, the next moment the DaemonSet realizes that this node no longer has a copy and immediately recreates it; one per node, that is its logic, that's it. The documentation says a DaemonSet ensures that all, and in brackets, or some, nodes run a copy of your Pod. What that means is: you have three nodes, but the application you are going to run may not work on node 2, because node 2 uses some kind of legacy hardware. So you want this DaemonSet to focus only on n1 and n3 and exclude n2; by default it considers all the nodes in scope, but this time you want n2 out of scope. If that is the case, you can make use of the labels concept on the nodes.
Label node 1 with type=modern, node 2 with type=legacy, and node 3 with type=modern (kubectl label node n3 type=modern), and then, in the DaemonSet specification, specify a nodeSelector of type=modern. Now the DaemonSet ensures only that the nodes carrying that label have one copy of the Pod, so n2 is excluded; that is the reason the documentation says all or some nodes run a copy of your Pod. The use cases are exactly these: if you want to run a storage daemon, a log collection daemon or a node monitoring daemon on every node, you run those components as a DaemonSet. The documentation example runs a fluentd log collector this way: the kind is DaemonSet, and deploying that YAML creates one Pod on every node; all we have is the single minikube node, so one Pod would run there. There is already a DaemonSet in the kube-system namespace: kubectl get daemonset -n kube-system shows the kube-proxy component running as a DaemonSet, with one desired, one current, ready, up to date and available, since we have only one node; its node selector targets nodes whose operating system label is linux, which the minikube node is, so it is listed there. In your own infrastructure you may have multiple nodes, so you can easily check this. Any questions on the DaemonSet, the use case or how it works, why a DaemonSet and not a ReplicaSet or a Deployment? I am skipping the hands-on part because it is straightforward. On the rollout question: the Deployment is the resource with the full rollout workflow we discussed earlier; DaemonSets, like StatefulSets, handle updates through an updateStrategy field instead, with RollingUpdate or OnDelete, rather than through the Deployment's strategy settings. That is it for the DaemonSet; later you can try the sample, run kubectl get daemonset and explore it, for example with the sketch below.
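A sketch of such a DaemonSet; node names, labels and the image are illustrative:

kubectl label node n1 type=modern
kubectl label node n3 type=modern

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      nodeSelector:
        type: modern           # only nodes carrying this label get a copy
      containers:
      - name: fluentd
        image: fluentd:latest  # illustrative log-collector image

kubectl get daemonset -n kube-system    # kube-proxy runs this way out of the box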
Now let's move to the next object, the StatefulSet. Any idea about StatefulSets, are you already working with them? Yes, no, heard about it, okay, fine. Let me quickly explain the case for the StatefulSet: it is the resource we use for the data layer, where the Pods are not interchangeable replicas but need a stable identity, ordered creation and their own storage, and alongside it you use a headless Service, a Service with clusterIP set to None, instead of a normal ClusterIP Service. Maybe you can just try this one, it looks simple: create a StatefulSet, run kubectl get pods and see the way the Pods are created, try to delete one and see how it gets recreated, try to scale up and see whether the ordering is maintained, and try to call one Pod just by its identity. Please give it a try on the StatefulSet; that is all I had to cover, which is more than enough for the examination. Any questions? If not, please try it. While I am explaining, I would recommend keeping the Kubernetes documentation for the respective resources open on your screens and referring to it in parallel, because during the examination you will rely heavily on the official documentation alone, so you should know where to locate and find things. For the StatefulSet you can try the sample that is in the Kubernetes documentation: copy the YAML file, apply it and observe the behaviour, that will do. Use kubectl get statefulset, or get sts for short, to view StatefulSet resources; sorry if I forgot to mention that command. Zero out of three ready? Let me look at your screen; which example did you use, what application are you deploying from the site? Look at the Pods: a StatefulSet spins them up one by one, and web-0 is in Pending state, which means it is still trying to start. Describe web-0 and check the Events section; this one is about volumes, it is not able to bind them, so look at the volumeClaimTemplates section in the YAML of your StatefulSet and at the storage class it names. A working sketch of the documentation sample is below.
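A sketch close to the documentation sample, with a headless Service, three replicas and a volumeClaimTemplate that uses minikube's standard storage class:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None              # headless: gives each Pod a stable DNS identity
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # one PVC per Pod (www-web-0, www-web-1, ...)
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi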
Okay, the time is 2:32, so let's take a quick 15-minute coffee break and be back by 2:45. Thank you, and thanks for listening so far. I'm back; please raise your hands in Teams, just a quick calendar check. Perfect. We are going to discuss our next objects; I have two left to cover, and I will cover the theory part of them because, although they are not part of the curriculum, I have heard that a couple of exams asked about them, so having an idea of what these resources are and when to use them will really help you. The next two resources are Jobs and CronJobs. Any idea what these are, are you already using them for your existing applications? Okay. Let's look at the use case first and then observe the behaviour. The applications we have seen so far, the nginx application or the demo web application, are websites or web servers, long-running processes: they keep running until you stop or delete the Deployment. But there are applications which by nature are short-lived, one-off tasks: they have a set of steps to perform, and once those are executed to their entirety, that's it. To run applications of this nature in your cluster we use a resource called the Job. What is the behavioural difference; couldn't I simply run this as a Deployment or ReplicaSet, which would execute the same logic? The difference is that such an application starts, executes all its steps to the end and, having successfully completed, exits with a success code, say exit code 0. If your application's process exits with code 0, the Pod status is marked Completed, a status you never see for a long-running server. But if something goes wrong at step 9 or 10 and the process exits with a failure code other than 0, say -1, that is a failure scenario. When I create a Job it creates one Pod and tracks something like 0/1 completions, meaning it expects one successful completion. If the happy path happens it becomes 1/1, the Pod is marked Completed and the Job is marked Complete. If the failure path happens, the Pod is restarted to execute all the steps again, in the expectation that it exits with success code 0. If it fails again, it restarts again; by default there are about six tries, spaced in an exponential back-off fashion. If it still fails after those tries, the Pod is marked Failed and the Job is marked Failed, because it never met its completion count; if one of the retries somehow works, it becomes 1/1 and the Job is successful. Only a successful exit marks it completed; a failure means restart until the count is met. Now say the application you want to run is some kind of database migration script. If you deploy it as a ReplicaSet with replicas set to one, the Pod runs to its entirety and exits. What does the ReplicaSet do when it exits? Its current state is zero, so it is immediately restarted or a replacement is created, which runs and exits again, and again, which means you end up re-populating the database with the same data over and over. That is not a valid choice here, because no matter whether the Pod exits with a success code or a failure code, a ReplicaSet always restarts it; it needs to keep one copy running always, that is its nature. That is not the case with a Job: a Job also creates a Pod, and if that Pod runs to successful completion, the application must signal it by exiting with a success code from within.
Then the Pod is marked Completed and the Job is marked Complete; in the failure scenario it restarts, or creates a replacement, until it meets the successful completion count. And you can actually set that completion count while creating the Job: there are two properties, completions and parallelism. Say you want 10 successful completions: the Pods run one after the other, 2/10, 3/10, and so on up to 10/10. If you want to speed that up, you can set parallelism to 5, which means the moment you create the Job five Pods are running; at any point in time you have five Pods working towards the completion count, and by the time you have 10 completions, ten Pods in the Completed state, the Job is marked Complete. One use case where Jobs are used heavily is work queues, fetching items from a centralized queue. Say multiple producers are putting messages on a queue, and every day at 1 a.m. you want to run a job that reads the messages from the queue and processes them; that consumer application is what you run as a Job. Here you can specify parallelism, say 3, but completions will vary, won't it? Today you have 1,000 messages, yesterday 20,000, the day before only 50, so there is no fixed completion count. If you set only parallelism and leave completions empty, that is the work-queue use case: the Job keeps running until the queue becomes empty, and once it is empty everything is marked completed and the Job is complete. So I have described three scenarios for the Job. First, you create a Job that creates one Pod: initially it is 0/1, one completion expected and none received yet, and after a while, if it completes successfully, it becomes 1/1, the status is set to Completed and the Job is complete. Second, you play with both parallelism and completions: a fixed completion count to reach, with this many Pods in parallel. Third, you create a Job giving only parallelism and no completions, the use case for consuming items from a centralized work queue. Three variants of a Job; a sketch covering them is below. Later you can try this, I am leaving it to you; a Job also creates Pods, and you can watch their status move to Completed. The documentation lists the same three types of Job.
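A sketch of a Job showing both knobs; the image and command are illustrative, and for the work-queue variant you would simply omit completions:

apiVersion: batch/v1
kind: Job
metadata:
  name: queue-worker
spec:
  completions: 10      # fixed number of successful completions required
  parallelism: 5       # Pods running at the same time
  backoffLimit: 6      # retries before the Job is marked Failed (6 is the default)
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one item && exit 0"]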
A non-parallel Job starts only one Pod and is complete as soon as that Pod succeeds; a parallel Job with a fixed completion count means you specify completions; and a parallel Job for a work queue means you specify only parallelism, no completions. There are some other concepts in the documentation you can refer to later. One of them: a finished Job stays in the cluster until you delete it, unless you set the configurable time-to-live property, ttlSecondsAfterFinished, in which case a successfully completed Job is cleaned up automatically that many seconds after finishing; deleting the Job also deletes all the Pods it created. Any questions on the Job, is the use case and the behaviour clear, how it differs from the resources we already discussed? If it is clear I will skip the hands-on; if you want, I can show some. Okay, perfect, thanks; if it is clear theoretically, all good. I have hands-on in my files as well as the documentation, and I am leaving it to you to implement it. The reason I am skipping the Job hands-on is that it is not part of the CKA, but I heard from two participants that there was a question related to CronJobs, and you need to know what a Job is to understand a CronJob, so I covered that part. Try it, and if you have any questions or challenges we can discuss them tomorrow, no problem. Now the CronJob: it is the same as a Job, but it comes with a schedule. If you are coming from the Linux world you know crontabs: run something every Monday, every hour, and so on. The same way, you define a CronJob and say, for example, run this every 30 seconds, and in the CronJob specification you provide a job specification, the jobTemplate. Which means that every 30 seconds the CronJob creates a Job, then another, then another, and those Jobs create the Pods that do the short-lived work and get marked Completed. The CronJob sits on top of the Job, creating Jobs on a schedule; the Jobs create Pods; the Pods execute the short-lived activity. In the documentation example, the kind is CronJob, there is a jobTemplate, and the schedule is every minute, so a Job is created every minute that runs a container printing hello from Kubernetes. It is the same scenario we discussed: if you want a job to run every day at 1 a.m. to drain the queue, create a CronJob with that job specification, so that every day at 1 a.m. it creates the Job object that reads all the messages from the queue, processes them, and is marked Completed. Once a Job is marked Completed there is no such thing as restarting the same Job; a new Job is spun up to process the new set of items. That is Job and CronJob; a sketch of a CronJob is below.
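A sketch of a CronJob wrapping a Job template; the schedule and command are illustrative:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"      # every minute; "0 1 * * *" would be 1 a.m. daily
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "echo hello from Kubernetes"]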
The resources we discussed today, starting from the morning: we first covered the Service and its several types, then Ingress, then HPA, VPA and CA under autoscaling, then ConfigMaps and Secrets; after the lunch break we started with PV, PVC and the StorageClass, then the DaemonSet, then the StatefulSet, and we just completed Job and CronJob. Good job, everyone, we covered many resources today. With this we stop, for the moment, on the resources perspective, because that covers all the resources we use to deploy an application and manage it. Now we are going to focus a bit on the security gate. Let me repeat that: we are going to talk about the security gate, because in your cluster you have three nodes, and on your master node many components are running; one important component is the API server, which exposes your cluster as an API, and it is also the security gate. If you zoom in on the API server, every request that comes in, whether from Evan, from Colin or from me, goes through three security gates: gate one, gate two, gate three, three checks, just like passing through airport security. If your request fails at one of the gates, it is rejected and will not be processed. All three checks happen within the API server component. The first security gate is authentication: is this a valid employee of the organization at all? The second gate is authorization. Say Evan is trying to perform some operation, kubectl get secrets. His request arrives, and the first gate checks whether he is an employee of your organization and whether the supplied credentials are valid; all good, he passes the first gate. The second gate, authorization, checks whether he is entitled to perform that operation on Secrets, whether he may view them or not; maybe only managers or the operations people can view Secrets, not the developers, and checks of that kind happen at the authorization layer. If he has the permission, he moves to the third layer, the admission controllers. As an administrator, this is the layer where we have the most control: we can do mutation and validation here, and I will give examples of what that means. Three gates: authentication, authorization, admission controllers. Only after successfully passing through all three is the request considered for processing by Kubernetes. When it comes to authentication, Kubernetes has no user management or group management of its own; those things are externalized, which means you take a system you may already have and integrate it with Kubernetes: for example your Active Directory through a webhook, or client certificates, bootstrap tokens, service account tokens, OpenID Connect accounts, or your AWS or Azure IAM. In Kubernetes there is no concept of managing users or creating new users, nothing like that.
You have an existing system, and while bootstrapping the API server you specify which authentication mechanism you are going to use, along with the configuration related to it, so that whenever a request comes in, the API server uses that mechanism. The authentication provider gives a response back to the API server: whether authentication was successful, and if it was, some data about the user, like the groups he is part of, and so on. Okay. So we don't have much to discuss with respect to authentication, because you already have something and you just integrate it with Kubernetes. Okay. So in our case, let's say you integrated your organization's Active Directory, and Evan is a valid user, so his request passes through the first gate successfully. Then the request goes to the second gate, the authorization gate. Authorization. Here we check whether Evan can perform the get secrets operation or not. In Kubernetes we use a concept called RBAC, role-based access control. This came all the way from the OpenShift world; they contributed the concept to the Kubernetes community. Role-based access control. This is pretty straightforward, so I'm going to explain it and we are going to do a quick hands-on, because you will see a couple of questions on the exam with RBAC. Just list down all the subjects or resources that we discussed, and then the verbs that we discussed. For example, the verbs are get, create, logs, exec, describe. What are the verbs that we discussed? Delete. List down all the verbs. List down all the resources: pods, secrets, config maps, deployments, replica sets. List down all the resources that we discussed. That's it. With this, we are going to do a simple thing. What is that simple thing? First, we are going to create a role. role.yaml, kind as Role. In the specification you give a name for your role; let's say the name is pod-reader. And in the specification you list the allowed verbs: get, describe. That's it, two verbs, on the resource pods. So what it means is you created a role called pod-reader, so that whoever has that role can perform only get and describe operations on pods. This is just the role. The role is not yet assigned to Evan or Colin. Once you have the role defined, all you do is create one more YAML file: a role binding. You bind that role to a specific user or to a group. Group information is also not maintained in Kubernetes; your authentication provider supplies the group information, so you can also bind a role to a specific group, and if the user is part of the group, that role applies to him. So same thing, kind as RoleBinding. In the specification it will say something like: subject user Evan, role reference pod-reader. It can be a user or it can be a group; let's say Evan. This is the one that actually binds the role to the user called Evan, which means Evan can perform only get and describe on pods. If he tries to delete a deployment or do anything else, he won't be able to do it; his request will simply be rejected, because he is not authorized to do that. That's it. That's all we have. That's how we do role-based access control. This Role and RoleBinding apply at the namespace level, only in the namespace where you create them.
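A minimal sketch of that role.yaml and its binding; note that at the API level there is no "describe" verb (kubectl describe is served by get and list), and the user name is just the example from the session:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: default               # a Role is namespace-scoped
    rules:
    - apiGroups: [""]                  # "" is the core API group, where pods live
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: default
    subjects:
    - kind: User                       # could also be Group
      name: evan                       # illustrative user name
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io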
But if you want to create something that should apply at the entire cluster level, then it is ClusterRole and ClusterRoleBinding, which means the role and role binding apply at the entire cluster level. Okay. That's RBAC, people. Okay. One sample RBAC role: pod-reader, allowing only get, watch and list on pods. And another example for a ClusterRole: the namespace is skipped here because it's at the cluster level. Secrets. secret-reader: get, watch, list on secrets, only these three operations. And if I want to bind that role to a user, then in the role binding: user Evan, role reference pod-reader. The same goes for a ClusterRoleBinding, no difference; here we are binding it to a group with the name manager instead of the user Evan. It's the group manager. Okay. So that's it about role-based access control. So in our security gate, based on the RBAC permissions that Evan has, let's say he is allowed to pass because he has the permission. So his request passes the second gate and then goes to the third gate, the admission controller. Admission controller. This is just configuration of the API server itself. By default Kubernetes provides some 30-plus admission controllers. 30-plus admission controllers. Which means if you aren't enabling the proper set of admission controllers, your API server is not yet properly configured. So you need to enable some set of admission controllers for your API server to function properly. There are many features that are simply disabled, and you need to enable the corresponding admission controllers if you want them. For example, if Evan is submitting a request with a pod specification and he didn't specify resource requests and limits, he just submits it without them, then at the admission controller level we have an admission controller, if we enable it, that will simply reject the request, saying hey, include the resource requests and limits, I can't admit you into my cluster because you are not giving these required details. So you can do validation like this: if the required details are present, the request is admitted; if not, it is rejected. In some cases it will also mutate. Which means, assume he submitted a request for a pod but it doesn't have any namespace information in it. At the admission controller level, the admission controller can manipulate the request: it will just include the namespace as default, and then the request will be considered for processing. So mutation will also happen. So what you submit is not necessarily what you are going to see, because in between an administrator can mutate it at the admission controller level. Let me give you one valid example here. Let's say multiple teams are submitting applications to your cluster, and you decided to run a sidecar container along with every application, and this sidecar does some kind of helpful things for you to manage the cluster. When the application teams submit their specifications, they contain only their own container definition. But here at the admission controller level you can include one more container in their specification and then submit it for processing. So as an administrator you have more control at this layer: you can do validation or mutation. If you look at the documentation, there are many admission controllers; whichever is valid for you, you can simply enable it at the API server level. So this is how we enable or disable admission controllers by name.
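The cluster-wide variants described there look roughly like this; the names mirror the on-screen example (a secret-reader role bound to a group called manager), so treat them as illustrative:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: secret-reader              # no namespace: cluster-scoped
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: read-secrets-global
    subjects:
    - kind: Group
      name: manager                    # group names come from the authentication provider
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: secret-reader
      apiGroup: rbac.authorization.k8s.io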
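And enabling or disabling admission controllers by name is done with kube-apiserver flags along these lines; the exact plugin list here is only an example, not a recommendation:

    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,DefaultStorageClass,ResourceQuota
    --disable-admission-plugins=NamespaceExists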
If you want, you can also write your own admission controller. Let's say you observed one behavior: you submitted a PVC without a storage class name and the default storage class was assigned anyway. That behavior happens because in your cluster the DefaultStorageClass admission controller is enabled. If you disable it, then nothing will happen: if you submit the PVC, no dynamic provisioning happens, nothing gets assigned to your PVC. So it's the API server functionality itself; you need to enable the right set of admission controllers for your cluster to behave properly. You can refer to the documentation, which has a definition for every admission controller. For example NamespaceExists: this admission controller checks all requests on namespaced resources other than Namespace itself, and if the namespace referenced by a resource doesn't exist, the request is rejected. So if you try to submit something with a namespace that doesn't exist and this admission controller is enabled, it will simply reject your request. Okay. It's in the documentation, you can search for it. So, three gates at the API server: authentication, authorization, admission controller. But what is more important for you from the exam perspective is RBAC, role-based access control. So now we are going to do one quick hands-on for RBAC. You can expect two to three questions on RBAC, and you can take examples from the documentation. RBAC. Okay. Let's go with this Bitnami documentation, something that we can trust. Okay. Let's try this one. Let me put this in the URL. Click this URL. First of all, we are in Minikube, don't forget that. So in Minikube you just need to stop your Minikube (minikube stop) and then start it again with RBAC enabled. Enable RBAC in Minikube and go directly to the use case one section. Use case one. Just try only use case one; that page has many use cases, try only use case one. So what you are going to do: you are going to create a user employee, and he is going to be part of a group bitnami. And you are going to add the necessary RBAC policies so that the user named employee can manage deployments only inside a specific namespace, a namespace called office. If you have another namespace, say prod, he won't be able to do anything in that namespace. For that you are going to create a user, create a namespace, create a role, create a role binding; we are going to do it all. So it has multiple steps: creating the namespace, creating the user credentials, creating the role, creating the role binding, and then finally verifying it. Okay, some ten commands or so. Let's take some five to ten minutes and give this a try. Role-based access control. Go ahead, go ahead. The last topic of the day. Let's see who is going to get it working first. If you have already completed it, just let me know so that you can share your learnings with others. Okay, is there an error you are seeing? Is that an error that you are facing? Who is this, by the way, so that I can look into your screen? Shahid. Perfect. Thanks for that. Yes, please. Okay, okay. Yeah, just modify the version; look into the documentation and update this one. Maybe by the time they documented it, it was in a beta one version; I think right now it is in a different version. Let me check the version from the documentation. This is for the Role, right, Role or RoleBinding. It's version one.
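As a rough sketch of the shape of that use case (commands paraphrased rather than copied from the Bitnami page; certificate paths assume a default Minikube install, and newer Minikube versions enable RBAC by default):

    minikube start
    kubectl create namespace office
    # create credentials for user "employee" in group "bitnami"
    openssl genrsa -out employee.key 2048
    openssl req -new -key employee.key -out employee.csr -subj "/CN=employee/O=bitnami"
    openssl x509 -req -in employee.csr -CA ~/.minikube/ca.crt -CAkey ~/.minikube/ca.key \
      -CAcreateserial -out employee.crt -days 365
    kubectl config set-credentials employee --client-certificate=employee.crt --client-key=employee.key
    kubectl config set-context employee-context --cluster=minikube --namespace=office --user=employee
    # apply the Role and RoleBinding scoped to the office namespace, then verify
    kubectl --context=employee-context get deployments        # allowed in office
    kubectl --context=employee-context get pods -n default    # should be forbidden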
rbac.authorization.k8s.io/v1. No beta. Okay. Perfect. You completed that? Perfect. Good job. Would you like to share your learnings with everyone quickly, what you found? If they are comfortable, I will help you. Perfect. All right. Last two minutes to complete this activity. Okay. Let's cover a little more theory and then we can end the training. We already did some kind of high-level review on the resources that we covered, right? Seeing this HPA, VPA and so on. So I'm now adding to this list; we just discussed RBAC also. Correct. So there is one last thing that I would like to discuss and then we can end the training. Is that okay, or do you guys already feel it's a lot for day two? What's your feedback? Shall we end, or can we make use of the next 15 minutes? I'm leaving it up to you. Continue. Okay. So, by any chance are you using Helm for packaging your applications? Helm. For all your resources. Okay. So everybody in the group already knows how to use Helm, right? Okay, okay. No problem. Just asking for the purpose of the examination, what you need to know; I will just cover it. It may be basics for most of you, but it can help under exam pressure also. So if I want to deploy just one single service to Kubernetes, I need to create multiple YAML files. Based on the discussions that we had on day one and day two, first you need to create a YAML file for the deployment, and one YAML file for the service. I'll share my screen, I hope you are seeing it. Deployment, then service, and for ingress you need to create one YAML. And if it uses configuration, one for the config map and one for the secrets, right? So at a minimum, you need to create this many resources. Correct. And this is for version 1.1. If you are going to deploy version 1.2, and if it is using PV and PVC, you need to create those as well. If it is 1.2, maybe changes are not required in all the YAML files; maybe at least it will require changes here, and in some scenarios it can require changes somewhere in the config also. Correct. So for one single microservice, you need to maintain this many artifacts for different versions. Think of maintaining some 20 microservices. This quickly becomes a headache for you, right? So we use tools. For example, you might have used a package manager like apt or apt-get. You simply run apt-get install, give some tool name, and immediately all of the required things get installed and you can directly start using the tool. But in my case, if I ask you to install this application in Kubernetes, then first you need to create the config map and secrets, then create a PVC, then create the deployment, service and ingress; quite a lot is involved. Is there a way this can be simplified, so that I can do something as simple as a single install command that takes care of deploying all of these artifacts and the application is up and running straight away? That's where Helm shines. You are going to package your Kubernetes application as a Helm chart. The packaged output that I get is a chart, and this chart is maintained in a chart repository or registry, where all the application charts are kept. So if I execute a command like helm install for this application, it is going to download the chart from there and then install and set up all of these components in Kubernetes. If I want to remove the application, it's just a helm uninstall. That's it. Okay.
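The basic Helm flow this describes looks roughly like the following; the repository and release names are placeholders:

    # add a chart repository and refresh the index
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    # one command installs every resource the chart defines
    helm install my-release bitnami/nginx
    helm list                       # see installed releases
    helm uninstall my-release       # remove everything the release created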
So basically Helm is a package manager for Kubernetes. In the examination you will be asked to quickly create a chart for your application, and we are going to try the commands tomorrow anyway. So what do we need to do? There is a proper structure we follow. We will create a folder called templates and then move all of these files into it. Move all of the files, open them file by file, and templatize each one of them. Which means you open this file; right now you are seeing an attribute like nginx 1.2, and you are going to templatize this: you need to move out whatever is the part that will change. Similarly, you open every file and templatize it, which means at the end what you have is templates only. Correct. And alongside the templates directory you will have a values.yaml; this is the one that holds all the values. For example, in the values.yaml you will see an image node, and this will have the nginx 1.2 value or so. The values here get populated into these templates. And you will also have a Chart.yaml that has some information about the version and the name of the application and so on. In addition to that you will have a couple of other files as well. Once you have this proper folder structure, let's say you execute the helm package command against it. This is going to give you the chart. The chart will be maintained in the registry or repository, and this is the chart that you are going to search for and then install from the registry. So Helm altogether is a different tool; it is not part of the core Kubernetes distribution. You can refer to the site later, but tomorrow we will do a simple example to package it. In the exam they would expect us to package a Helm chart from a set of artifacts, so we should know the basic commands. As long as we know that, we are all good. I think that is enough for the Helm high-level overview. That's it. We successfully completed day 2, and I hope you guys learnt a lot of things today. Do you have any specific feedback that you want me to incorporate for day 3? How was the pace on day 2? Any specific feedback? Did you guys enjoy day 2? You're welcome, Evan. Good, good. Thanks, glad to hear that feedback. And François. Thank you. And Colin. Okay, so this was the question that you asked in the morning, right? About dealing with the certificates. Am I right? Okay, okay. Okay, okay. This is not that. I will check if I can share something with you tomorrow. Thank you Colin. And over to you Shahid. Perfect, perfect. Thank you. And then Shalangani. Yep. Over to you Shalangani. Yep. Thank you. Thank you so much. I'm really glad that this day really helped you guys. And keep up that excitement. Tomorrow is going to be a full day of activities. I will explain stuff, but you will have more activities to do, more from an exam perspective. So we will go through the CKA curriculum line by line, and we are going to try hands-on for that. Okay. So gear up for tomorrow, and enjoy the rest of your evening. Thank you. Bye.
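For orientation, a minimal sketch of the chart layout and the templating step being described; the file contents and names are illustrative:

    mychart/
      Chart.yaml            # chart name, chart version, app version
      values.yaml           # default values injected into the templates
      templates/            # the templatized deployment, service, ingress, etc.
        deployment.yaml
        service.yaml
        ingress.yaml

    # values.yaml
    image:
      repository: nginx
      tag: "1.2"

    # templates/deployment.yaml (fragment): the image is templatized instead of hard-coded
          containers:
          - name: {{ .Chart.Name }}
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

    # package the chart into a distributable archive
    helm package mychart/

Rolling a new version then usually means editing values.yaml (and the version fields in Chart.yaml) rather than touching every manifest.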
on 2022-10-26