Certified Kubernetes Administrator - Exam Preparation


and the pod is running.
If you want to access it,
you know how to do it.
kubectl port-forward, then the name of the pod,
a port number on the host machine,
and then the port within the container.
Go to localhost 8082.
Okay, a simple React web application
that we will use for demo purposes, right?
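For reference, the command described above would look roughly like this; the pod name and container port are placeholders:

kubectl port-forward <pod-name> 8082:<container-port>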
Fine, we deployed this application.
Now, on top of this specification,
we are going to add more configurations
or more specifications, okay?
The first one that we are going to add
is related to health check.
What does that mean?
Let's say I have a container,
and this container is running my application process.
And there are a lot of requests that are coming in,
and your application is processing it,
and it is sending some success response back to the callers.
Now, here comes one request.
While processing this request,
something went wrong in your application code,
and then the application process exited.
Because that process is the only process
running inside the container, or at least the parent process,
the container has nothing left to execute,
so the container also exits
and moves to the exited state.
So for scenarios like this, the platform,
that is, the kubelet component or Kubernetes in general,
will react to it.
Because the container status is exited,
Kubernetes will try to restart the pod,
or sometimes it will create a replacement pod.
So basically, the platform takes an action
if the container exits, okay?
But take the same scenario:
everything was working until a request came in,
and while processing that request,
your application went into some kind of stalled mode,
or into a deadlock situation.
Which means the process has not exited.
Because the process is still running,
the container status is shown as running,
and Kubernetes, of course, won't take any action,
because it's already in the running state.
But every request that comes after this one
gets some kind of error response, so it is a user-facing error.
Because this service is no longer returning any responses,
all the requests are timing out,
because one of the previous requests
put the application into this stalled mode, right?
We really want Kubernetes to take an action in this scenario.
But Kubernetes doesn't care about it,
because it only looks at the container status, which is running.
The reason it shows running
is that the process is still running.
But in reality, the application is unhealthy;
it's not in a position to serve requests.
So we want Kubernetes to take an action
if a scenario like this happens.
That's where the concept of a health check
comes into the picture.
So what we are going to do is,
instead of relying only on the container status,
we are going to take the health check
closer to the business code.
Which means if this is your application
exposing many business endpoints,
let's say a product endpoint, a search endpoint, and so on,
then in addition to those,
your developer will write code
to expose an endpoint called health,
and calling this endpoint will simply return
a success code, that's it.
Nothing other than that.
If a request is made to it,
the response will be a 200.
Okay, as simple as that.
So what Luis will do is,
while submitting the pod YAML file,
he will include a section called liveness probe,
where he will specify this information:
hey Kubernetes, this is the endpoint,
call this endpoint every 30 seconds
and expect a successful response.
If you get a failure response
three consecutive times,
that is the failure threshold,
then consider this application unhealthy,
and consider the liveness probe failed.
Which means, if a scenario like this happens,
the kubelet is already calling your health endpoint
every 30 seconds.
If a scenario like this happens,
then for the next few checks,
the kubelet will get an error response.
The threshold will be met,
and at that point Kubernetes will understand that,
okay, this particular container has become unhealthy.
So it will restart the pod,
which means if a liveness probe fails,
the action that Kubernetes takes is restarting the pod,
assuming a restart gives the application a fresh start
and everything should be working fine afterwards, okay.
So this is exactly the scenario
we want to address with the health check,
and we are going to configure it
as part of your pod specification with the liveness probe.
You will specify all these inputs,
and once you do,
Kubernetes is going to perform that probe, okay.
So if I open the file
where we have the liveness probe configured,
okay, I think it will be better
if I increase this font size.
Okay, so the only new addition compared to the previous YAML file
is this section; the rest of the specification is the same.
Liveness probe: it's an HTTP probe against the health endpoint,
and periodSeconds is 10,
which means every 10 seconds,
Kubernetes is going to call this endpoint
and expect a success response.
And the failureThreshold is three,
which means if Kubernetes receives a failure response
from this endpoint for three consecutive attempts,
that's when it's going to declare,
or assume, that this pod is unhealthy, okay.
We also have a timeoutSeconds configuration,
which means if the call to the endpoint times out,
that is, it takes more than one second,
that is also counted as a failure.
initialDelaySeconds means that,
once the container spins up,
Kubernetes is going to wait for five seconds
before it starts with the health check.
After five seconds, it's going to call
the health endpoint every 10 seconds, right.
So those are the configurations, right.
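As a rough sketch of what that liveness section might look like; the pod name, image, port, and endpoint path here are assumptions, not the exact contents of the demo file:

apiVersion: v1
kind: Pod
metadata:
  name: pod-health              # hypothetical pod name
spec:
  containers:
  - name: web
    image: my-react-app:1.0     # placeholder image for the demo app
    ports:
    - containerPort: 8080       # assumed application port
    livenessProbe:
      httpGet:
        path: /api/health       # the health endpoint the application exposes
        port: 8080
      initialDelaySeconds: 5    # wait five seconds after the container starts
      periodSeconds: 10         # probe every 10 seconds
      timeoutSeconds: 1         # a call slower than one second counts as a failure
      failureThreshold: 3       # three consecutive failures, then restart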
So now you can see this in action also.
Let's say I delete the pod
that I created earlier,
because the pod that I'm going to create now
has the same name.
I apply the pod YAML with the health check,
then kubectl get pods.
You can see it is already running.
So now I'm going to do the port forward
to access the application.
Let me refresh this.
Okay, we're already here.
Just to demonstrate,
we are going to simulate the scenario:
if the liveness probe fails,
how is Kubernetes going to act, right.
So Kubernetes is already calling this application
on the health endpoint every 10 seconds, right.
Every time it calls, every 10 seconds,
this application prints it in a table.
Okay, you can see it has already been called,
and this application sent a 200 response
back to Kubernetes, right.
Now we are going to intentionally
send a failure response to Kubernetes:
fail for the next three calls,
because three is the threshold that we set.
So if I click on three,
we are going to send a 500,
and for the next call,
again an error code or a failure code.
The third time it sends a failure,
the threshold is hit,
and the pod is going to be restarted.
Okay, as you can see,
the pod has already been restarted.
It waits for the initial delay of five seconds,
and after that, you will see
the calls from Kubernetes again,
continuing every 10 seconds.
So in this case, we simulated the behavior
where the call to the health endpoint failed.
In reality, if the application is stalled
and the business endpoints are not accessible,
then Kubernetes won't be able
to reach the health endpoint either.
Which means we are taking that liveness check
close to your application code,
instead of relying only on the container status
or the process status.
We are taking it close to the app code.
So that's the whole logic, right.
So if I go back here,
you can even see this information in the describe command.
If I do kubectl get pods,
you can see the pod has already been restarted one time
because it failed the liveness probe.
If you want to see the details,
you can look at the Events section.
I think this is the right pod.
Okay, so here you can see the liveness probe failed
with the status code 500, a warning saying unhealthy,
and then the pod is restarted.
Okay, so the liveness probe failed, so it restarted.
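If you want to check this yourself, the commands are roughly as follows; the pod name is a placeholder:

kubectl get pods
kubectl describe pod <pod-name>    # look at the Events section at the bottom of the output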
So the learning here is what a liveness probe is
and why we need it.
How to configure it?
You have already seen the YAML file.
If the liveness probe fails, the action is a restart.
That's it, no complex actions,
just a plain restart of the container.
Okay, so for this liveness probe,
as you can see here, our application
exposed an /api/health endpoint, isn't it?
What if your application is not an API?
It's some kind of worker process that doesn't expose any API.
For those purposes, we have different options.
Under the liveness probe, we have the HTTP probe,
which is the one we just saw,
where we expose an endpoint.
We also have the TCP probe, and we also have the exec probe.
TCP means you are going to probe a specific port number.
As long as the port is reachable,
the container is considered healthy.
If the port is not reachable,
then you hit the unhealthy scenario.
Exec probe means you want to execute some command,
and that command execution
should return a success or failure code.
Zero is the success code, any non-zero code is a failure code.
The command is executed every 10 seconds,
and as long as you receive the success code, the container is healthy.
If it is a failure code, then it is unhealthy.
So we'll go for these kinds of probes
where exposing an HTTP REST API endpoint is not possible.
You can see here, this one is using an exec probe.
Basically, what it is going to do is,
every five seconds, it's going to try to read this file,
/tmp/healthy.
As part of its startup command, the container creates that file,
then sleeps for 30 seconds,
then removes the file and keeps sleeping.
Right, so if you try this, you can see that
if the probe is not able to find the file,
that is the failure scenario:
cat /tmp/healthy returns a failure code,
and then the pod will be restarted.
As long as it is able to find the file,
nothing will happen,
which means the application is healthy.
So a simple command.
So this purely depends on your application.
Your application team should know
when to say the application is unhealthy
and when to say it is healthy.
Based on that, they are going to write
some exec command here.
Okay, that is the exec probe.
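A minimal sketch of such an exec probe, close to the standard Kubernetes documentation example; the image, file path, and sleep durations are assumptions, not the exact demo file:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec                      # hypothetical pod name
spec:
  containers:
  - name: app
    image: busybox                         # assumed demo image
    # create the file, sleep 30 seconds, remove it, then keep sleeping
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # exit code 0 = healthy, non-zero = unhealthy
      initialDelaySeconds: 5
      periodSeconds: 5                     # run the command every five seconds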
This one is what we just saw, the HTTP probe,
and the last one is the TCP probe,
where you just specify the port number.
It means the kubelet will attempt
to open a socket to your container on the specified port.
If it can establish a connection,
then the container is considered healthy.
If not, it is considered unhealthy.
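As a sketch, the probe section for that would sit under the container entry like this; the port number is just an example:

    livenessProbe:
      tcpSocket:
        port: 5432              # healthy as long as a TCP connection to this port can be opened
      initialDelaySeconds: 5
      periodSeconds: 10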
Okay, three types.
Based on the nature of the application,
you can configure an HTTP probe,
an exec probe, or a TCP probe.
Okay, so this is the liveness probe.
And if the liveness probe fails,
the action is restarting the container.
So similar to the liveness probe,
we have one more called the readiness probe.
This serves a different purpose.
What is the purpose?
Let's say you are deploying Nginx
with five replicas,
which means five pods will be running.
And here you have another service
that wants to call your Nginx application.
So you have five instances,
and each request needs to be load balanced to one of these five,
so that one pod will process it
and send the response back to the caller, isn't it?
So for that, we are going to create
a Service resource in Kubernetes
that we will discuss in a moment.
And this is the one that's going to load balance
across these five.
In the readiness probe,
you are going to specify a similar configuration
to what we specified for the liveness probe.
Maybe we'll expose an endpoint called ready
and specify that it should be probed every 10 seconds.
Based on that, let's say a container
passes the readiness probe,
then it will be set to the ready state.
Let's say another one fails the readiness probe,
then it will be set to the not-ready state.
So when a request comes in,
the Service component will consider only the replicas
that are in the ready state, say one, two, three, four,
and it will load balance only across those four.
The fifth one is simply excluded
because it failed the readiness probe.
Because it failed, it is in the not-ready state,
so it won't be considered for load balancing.
Maybe after a moment, on the next calls,
it will succeed the readiness probe.
Then, when another request comes in,
this pod will also be considered
because it has transitioned back to the ready state.
Okay, so you can have some logic here
to determine when your application is ready.
Maybe it's still processing the previous request,
or you don't have a sufficient thread pool
to process new requests.
So you may want to fail the readiness probe
so that you won't get any new requests,
finish processing the ones already in flight,
and once you complete them,
the readiness probe will succeed again,
and then the pod will get more requests.
So a liveness probe failure is a different case,
and there the pod will be restarted.
If a readiness probe fails,
it's just that the pod won't be considered
for load balancing, that's it.
Once it transitions back to ready, it will be considered again.
Okay, so that's what you see under the readiness probe.
I can show it, yeah, this one.
For readiness also,
you can specify an exec or HTTP probe.
Here it is an exec probe,
configured similarly to the liveness probe, okay?
The only difference is that instead of liveness,
you're using readiness.
So in our example, if you go to
the full pod file, you can see the readiness probe here.
Same configuration,
it's just that we are calling another endpoint.
That endpoint may have logic to tell whether the application
is in a position to accept new requests,
or whether it is still busy processing existing requests.
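Side by side, the two probes in one container spec might look roughly like this; the paths and port are assumptions about the demo application:

    livenessProbe:
      httpGet:
        path: /api/health       # restart the container if this keeps failing
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /api/ready        # take the pod out of load balancing if this fails
        port: 8080
      periodSeconds: 10
      failureThreshold: 3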
The good thing is that for both liveness and readiness probes,
there are frameworks
that already provide these endpoints,
or you can include libraries
that take care of exposing them
along with the logic to say
whether the application is ready or not.
Or your developers can take more control
and write their own logic
by implementing these endpoints themselves, okay?
But by all means, now we are clear on these two probes,
their purpose,
and what happens if each probe fails.
Any questions so far on this?
Any question, or is it clear?
Okay, good.
So before trying this hands-on,
in the simple pod specification,
we have now learned how to add the two probes,
that is, readiness and liveness.
Now, to the same pod specification,
we are going to add a few more configurations,
which are very, very important, right?
One is this: let's say I have Luis,
and he's a beginner, a trainee.
He wrote one application,
and he deployed the application into the Kubernetes cluster.
Now we have three nodes, N1, N2, and N3,
and his application is running somewhere here on node one.
And as administrator,
Hermos is watching this application,
and he finds out that it has a problem:
earlier it was using 0.5 CPU
and 256 MB of memory,
but over time it started to use
more resources, even though the application
is sitting idle, or processing
only a few requests. It keeps consuming
more resources, but it never releases the resources
back to the pool once it's done with the processing,
which means this application has some serious
memory leak issues.
It's not releasing the resources.
So over time,
this guy may eat all the resources,
leaving all other containers or pods
starving for resources, right?
So clearly, as administrator,
I don't want this scenario to happen.
If something like this starts to happen,
then I want Kubernetes to take some action.
For that, you need to educate Kubernetes
with some information.
So, while submitting the pod specification,
you need to provide some information to Kubernetes.
Okay, what kind of information?
You need to provide resource information: resource
limits, that is, you need to set limits for CPU
and for memory.
As the developer of this application,
you must know these values, because you should thoroughly
performance test and load test your application,
benchmark it,
and come up with values for the CPU and memory limits,
the maximum CPU and memory your application will consume,
because you already load tested and performance tested
and benchmarked that value.
Let's say my application won't consume more than 1.5 CPU
and no more than 516 MB of memory.
This is the limit,
and you need to put it in the pod specification.
If you apply it to Kubernetes,
Kubernetes now has this information
and it's going to keep an eye on your container.
So, if your container starts to consume more
than what is specified in the limit,
then Kubernetes will decide that something
is going wrong with this guy, and Kubernetes will take an action.
The action can be a restart, or sometimes it will delete
the container and create a replacement.
That way you will never get into the scenario
of one container eating all the resources,
leaving the others starving for resources.
That is the limit, the resource limit, the maximum.
So this is the case with Luis, who is a beginner.
He did something and ended up getting this advice
from Hermos, the cluster administrator.
Now he has deployed the application
with this input, all good.
Hermos is happy, Luis is happy.
Now, we have another case.
This time we have Jasper.
He is an expert,
he wrote some kind of machine learning algorithm,
and he wants to run the application in the cluster.
He submits the request
and the pod is scheduled to run on node one.
But then this pod is not starting up.
It never shows the running status;
it's in some error status.
Upon analysis, Jasper finds out that
for this container to start up and work,
it requires a minimum of 0.5 CPU and 256 MB of memory.
But this node doesn't have that many resources.
Let's say it has only 100 MB of memory,
or only 0.2 CPU.
Okay, so which means this particular application
has a minimum resource requirement for it to work.
A minimum resource requirement, on both CPU and memory.
But how will Kubernetes know?
There is no way for Kubernetes to know
unless you provide that information.
As a developer, you should know it
and provide that information to Kubernetes,
so that while scheduling,
the scheduler will look into your pod specification
and understand that, okay,
this is the minimum requirement,
and it won't consider N1
because N1 doesn't have those resources.
Maybe it will consider N2 or N3,
which have enough resources to run that container.
Okay, so that is the minimum resource requirement
for your application to work,
and you specify it under the resource requests:
0.5 CPU, 256 MB of memory.
So resource requests, resource limits.
Now you know why we need those two.
Here you can see the resource request and resource limit.
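In the pod specification, both pieces of information sit under resources; a minimal sketch, with illustrative numbers taken from the example and a placeholder image:

    containers:
    - name: app
      image: my-app:1.0          # placeholder image
      resources:
        requests:
          cpu: "0.5"             # minimum the scheduler must find on a node
          memory: "256Mi"
        limits:
          cpu: "1.5"             # maximum the container is allowed to consume
          memory: "512Mi"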
Okay, so as I mentioned,
this is a time-consuming activity.
Developers need to perform
load testing and performance testing
and come up with these values.
But if you are running the applications in the cloud,
we have something called VPA,
vertical pod autoscaling.
What it means is, if you configure a VPA
for your application pod,
then the VPA will observe the CPU
and memory utilization of your application container.
It will observe the past values,
and it will recommend the request and limit
for both CPU and memory.
This VPA component will recommend what to put here.
Instead of you performing a manual benchmarking activity,
it will observe your application over a period of time,
and based on that, say: hey, just set these limits
and requests to these values.
It also provides a capability
to set the values automatically.
It can recommend, and if you want, you can apply the update.
Or, if you configure it a bit more,
the VPA will automatically set the requests
and limits for both CPU and memory from time to time.
Both options are possible with VPA.
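If the VPA components are installed in your cluster, a VerticalPodAutoscaler object looks roughly like this; the names are placeholders, and updateMode "Off" means recommend only, while "Auto" lets it apply the values itself:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                 # the workload whose pods VPA should observe
  updatePolicy:
    updateMode: "Off"            # only produce recommendations for requests and limits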
Why do I need to do this?
Because it's very important for you
to right-size your application.
You need to right-size your application:
it should have enough resources
for it to work, and it shouldn't eat more resources
than what it requires.
So we need to right-size our applications
in the cluster.
That is what matters.
Okay, so for every specification
that is submitted to the cluster,
you can mandate that the application teams
include this information.
If not, you can simply reject the request
as a cluster administrator.
You can have more control at the admission controller
level, where you can simply reject any request
that comes to the cluster
without resource requests and limits.
Okay, and you can even automate the sizing itself.
If you are in the cloud, you can make use of the VPA,
which will recommend the values, or you can configure it
in such a way that it sets the values automatically.
How does it do it?
It has access to the metrics of your application,
and based on the historical values,
it recommends or auto-sets them, that's it.
Okay, I hope it is clear.
Please let me know if you have any questions
before we go for a lunch break.
I'm going to stop here.
If you have any questions, I will address it.
If not, then we can go for a lunch break.
Okay, after the lunch break, I will give some time
for you to try this.
No worries.
Is it clear so far?
Any questions?
Perfect.
Vertical pod autoscaling, yep.
All right, time is now 12:20.
So let's take a 45-minute lunch break
and be back by 1:05 p.m.
Thank you.
Thank you for listening.
Hello, I'm back.
Welcome back, welcome back.
Please raise your hands in Teams
if you are back at your desk.
Just a quick attendance check.
Okay, it seems we are ready to, okay.
Perfect, perfect.
Good, we got everyone back.
I hope you had a great lunch time.
So now it's time to continue our discussion, right?
So I'm going to pick up the pace of the course
a little bit, right?
So fine, what we completed before the lunch break is this:
we now know how to create a pod
using a declarative YAML file.
And we also learned about a couple of probes.
We have different ways to implement them.
One is using HTTP.
I think I'm using the wrong pen;
it should be the pen under the pods.
We learned how to create it with YAML.
Under the probes, we covered different types,
that is, the HTTP probe, the exec probe, and the TCP probe.
And there are two different probes that we discussed.
One is liveness, the second one is readiness.
If the liveness probe fails, the action is a restart.
If the readiness probe fails, the action is to not consider
that specific pod for load balancing, right?
And then from the resources perspective,
we learned the importance of setting both the limit and the request.
This applies to both CPU and memory.
And in alignment with that,
we also learned about vertical pod autoscaling, right?
To automatically set values for these,
or get a recommendation for the values to be set
for the limit and request, correct?
So these are all the things that we discussed
just before our lunch break, right?
So there are a couple of other configurations.
Let's talk about those, and then I will give some time
for the hands-on so that you can try it all at once
in one single example, right?
So we know we create containers from images;
an image acts as a template,
which means if you create a container,
it gets its own file system,
which means every container will have its own file system.
If you have container two, it has its own file system,
isn't it?
So let's say we have Velocity,
and she tries to spin up a container for a customer database.
She chooses a Postgres image,
and she's running a container, or a pod,
based on the Postgres image,
and it's running a database.
There are many applications that are writing
or reading data from this database.
So this Postgres process is going to store all the data
in its file system.
Let's say we have a directory called data.
Within the container, in that specific data folder,
you have millions of customer records,
information about the customers, let's say.
And this is working fine.
Then here we have Hermos,
and he is a crazy guy.
What he does is simply delete this container.
Let's say he deleted it by mistake.
The moment he deleted the container,
there is no undo for that,
which means if you delete a container,
then the file system associated with the container
also gets deleted,
which means you will lose all those millions of customer records
that were in the data folder,
because you deleted the container.
This goes for any container; it doesn't need to be a database.
Let's say you have a backend API
that generates some kind of XML file in a directory
for every request it processes.
If it stores all that data
in this directory,
then if someone deletes the container,
you will lose everything that is in the container.
Right?
So clearly this is not a scenario that we want to face.
Basically, the requirement here is:
I have something inside the container
that I want to persist outside of the container lifecycle.
Even if the container is deleted,
I want the data in this directory to be safe.
Okay?
So that is the use case for volumes.
This is the problem that the volume concept tries to address.
So the idea here is simple.
You have Velocity's Postgres container,
and all the data is written to the data directory.
This is the directory you want to persist.
So what I can do is choose any location,
a remote location or a local location.
Let's say this Postgres is running on a host machine,
host one.
I can choose a path from the host machine.
Let's say there is a path called lax,
or there is a network file share
with a directory in it,
or I have a GCE persistent disk,
some storage from a cloud,
or I have an Azure disk.
It can be anywhere.
So what you can do is
create a volume on a host machine path,
or create a volume on an Azure disk,
or create a volume on a GCE persistent disk,
or create a volume on a network file share.
Once the volume is created,
you can mount that volume to a specific location
inside the container.
So let's say here,
I am mounting this host machine path
to the data directory within the container.
Which means whatever the process writes
to the data directory
is actually getting stored in the lax directory
on the host machine.
And that means if this container is deleted,
the data is still safe in the lax directory, okay?
Then I can spin up one more new container
with the same volume mapping,
so that the new container can see all the data
left by the previous container.
Okay, so in this way,
we are persisting something outside of the container lifecycle.
It can be one directory,
or you may have multiple directories
that you want to persist:
this directory to an NFS share,
this directory to an Azure disk,
it doesn't matter.
Based on your requirement,
you are going to store it in some remote data center
or in cloud storage, okay?
So now we know the use case for volumes
and the problem they address, correct?
How do you specify that volume in your pod specification?
That is the next question.
In the pod, you can have multiple containers.
At the container level,
you specify all of this: the liveness probe, resource
requests and limits, and even the volume mounts, okay?
Using volumes is a two-step process.
The first step is defining the volume.
By define, I mean
you give the volume a name,
and then you specify the target destination
or location where you want to create the volume,
on a host path, an NFS share, or cloud provider storage.
So that type, the provider type, is what you specify, right?
Once the volume is defined,
you need to mount the volume inside the container.
Two steps, okay?
So in the same orientation as containers,
you can see we define the volume,
the name of the volume,
and then where you are going to create the volume.
In a network file share, this is the server name,
this is the part in that NFS.
You can even provide more configurations,
like how much space you want to block, and so on,
so it is, but keeping it simple,
this is how it's going to be.
In a network file share, you are going to create a volume
with the name as a core data.
The remaining properties are the first
that's going to get applied, right?
So step one is defining the volume.
And step two is mounting this volume
inside the container path or a directory.
So that you can see it under container specification.
Under containers, we have volume mounts.
And here you are specifying the name of the volume,
and then the mount path.
This is the path inside this container.
That is, in this container, data directory,
it will get mounted with this volume.
So which means, whatever the application
writes to the data directory,
it will go all the way to this
network file share export directory.
Okay, defining the volume is part one,
and using or mounting that volume
to a path inside the container using volume mounts
is the step number two.
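Put together, the two steps look roughly like this; the server name, paths, volume name, and image are placeholders, not the values from the demo file:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-nfs-volume
spec:
  containers:
  - name: db
    image: postgres:15               # placeholder image
    volumeMounts:
    - name: app-data                 # step two: mount the volume by name
      mountPath: /data               # path inside the container
  volumes:
  - name: app-data                   # step one: define the volume
    nfs:
      server: nfs.example.com        # placeholder NFS server
      path: /exports/app-data        # exported directory on that server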
Okay, so later on day two, we are going to see
how to refactor this logic
to move to persistent volumes and persistent volume claims,
and what the problem or challenge is
with this way of defining the volume
compared to using PVs and PVCs, right?
But pretty much this does the job,
using volumes and volumeMounts.
Okay, so if you look at a production-grade
pod specification, these are the things
that you will always see.
One is the image specification
and the application-specific settings,
then resource requests and limits,
liveness probe, readiness probe, and volume mounts.
We are yet to see Secrets and ConfigMaps;
you will also see many specifications
related to injecting configuration, right?
That's it.
I don't think I have a server with this name.
You can also replace it with a host path.
Let's say you want to mount a path from a host machine.
Then there is one type called host path,
and then to the type,
you are going to specify the path in the host machine.
Let me open this file.
Yep, as you can see here,
it uses an type as a host path,
the path in the host machine,
and then mounting that volume to the data directory.
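The hostPath variant only changes the volumes section; something along these lines, with the path as a placeholder:

  volumes:
  - name: app-data
    hostPath:
      path: /lax                     # directory on the host machine
      type: DirectoryOrCreate        # create the directory if it doesn't exist yet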
Okay, there are many supported storage provider types,
which we will discuss again
when we talk about persistent volumes, right?
But now we are clear on the workflow,
what it takes to create a volume
and then mount it inside a folder.
Okay, so with this, we now know,
in addition to resource requests and limits,
what it takes to configure volumes:
defining the volume and then mounting that volume
inside the container, those two steps, okay?
Please feel free to stop me if you have any questions.
So now I think we are getting more clarity
on all the things that we can do
at the pod level itself, isn't it?
If you still remember,
I mentioned multi-container pods.
There are some use cases for multi-container pods,
so let me talk about them a bit here, right?
So the first scenario that I'm going to talk about
is init containers.
These serve a specific purpose.
For example, this is your main container,
the container that runs your business application,
and alongside it we will also run one or more init containers.
The purpose of an init container, as the name says,
is to execute some initialization logic,
or in other terms, to set the stage
for the main container to run.
Based on your application,
you may have some complex initialization logic,
like preparing a volume
or checking some dependencies.
There is a lot of initialization logic
that you may want to run.
So instead of having that initialization logic
as part of the main container,
you can have it in the init containers.
You can have one or more init containers.
For regular containers, we define a containers section,
and then we define the first container, right?
Similarly, for the init containers,
we have a separate attribute,
at the same indentation level as containers,
where you will define the init containers.
And you will list the first init container,
the second init container, the third init container, like this.
So it's not that you put them in the containers list.
That is meant for a different pattern
that we are going to discuss, like the sidecar.
But init containers have their own lifecycle,
which means if you have
init containers defined,
they will execute in the same order
that you defined here, one by one.
First, the first init container will execute.
It will run to completion,
and it must exit with a success code.
It should execute completely and successfully.
Maybe it has some set of commands to execute.
Then the second init container
must execute successfully,
and the third init container must execute successfully.
Each init container is basically a short-lived
set of commands to perform.
If they all execute successfully,
that's when the main container starts up.
Because the stage is ready, now the hero can enter.
Right?
And if one of the init containers fails,
then the entire pod will be restarted,
and again the first, second,
and third init containers will execute.
So that's the behavior of init containers.
Let me include these links in the Etherpad
so that it will be easy for you to refer to.
Init containers.
Init containers always run to completion.
Each init container must complete successfully
before the next one starts.
And init containers
don't support liveness probes, readiness probes,
and so on, because they are not long-running processes.
They are there just to set the stage for the main guy.
And the way to define them, as you can see here,
is that, like containers, we have an initContainers section.
This application has two init containers.
Here you can see that the first one
checks whether the myservice service is already available
in the cluster.
So it's going to wait.
It's just checking whether one of the dependent services
is already up and running or not.
The other one is checking for the database,
checking for that service.
If both are already available, this one will succeed,
and this one will succeed,
and then the main container will start.
Maybe the main container,
as part of its initialization logic,
wants to communicate with myservice and the database.
So if those two are not up
and it spins up anyway, it may fail.
So in this case, they are using the init containers
to make sure those two services are available
before it does, right?
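A sketch close to the standard Kubernetes documentation example for this wait-for-dependencies pattern; the service names and images are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:                    # run one by one, each must exit successfully
  - name: wait-for-myservice
    image: busybox
    command: ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done"]
  - name: wait-for-mydb
    image: busybox
    command: ["sh", "-c", "until nslookup mydb; do echo waiting for mydb; sleep 2; done"]
  containers:                        # starts only after both init containers succeed
  - name: myapp
    image: my-app:1.0                # placeholder image for the main application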
So this happens only at startup time.
Once the init containers have executed
and the main container has started up,
the init containers won't execute again.
Maybe on the next restart, they will execute.
Which means that after the main container is up and running,
if the myservice service goes down,
the init containers are not responsible for that.
Okay, I think you get the idea, right?
So this is just for initialization logic,
as simple as that, okay?
So I also mentioned sidecar patterns, right?
Sidecar, sidecar, sidecar.
Where is the Kubernetes documentation?
Where is it?
Okay, that's fine.
I can explain it from here.
So init containers are one case for a multi-container pod.
Another case is that, in the same containers array,
you are going to define the first container
and then a second container.
Here, there is no ordering,
like this one should execute and then that one.
Both will run in parallel.
This one is going to be a long-running process,
and this one is also going to be a long-running process.
One is the specification for the main container,
and the other is the specification for a sidecar.
The sidecar is there to provide some helper functionality.
As one example,
you have a main container and you have,
let's say, a logging backend.
Your application is generating logs
in an ABC format,
but your logging backend expects the logs in an XYZ format.
So what I can do is run a sidecar.
Maybe the main application is writing the logs to a volume,
and this sidecar can read the logs from the volume,
transform the ABC format to the XYZ format,
and even forward the logs to the logging backend.
So in this case, the sidecar does two operations:
one, transforming the logs from ABC to XYZ format,
and then forwarding the logs to the logging backend.
That is just one helper functionality,
and in this case, what it implements is an adapter pattern.
This is an adapter sidecar,
because the target system expects a different format
than the one the source generates.
So it acts as a kind of adapter between the two.
Okay, that is one case where we will use a sidecar.
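A minimal sketch of that adapter idea, assuming the two containers share the log directory through an emptyDir volume; the names and images are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-adapter
spec:
  containers:
  - name: main-app
    image: my-app:1.0                # assumed to write logs in the ABC format to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-adapter                # sidecar: reads, transforms, and forwards the logs
    image: my-log-adapter:1.0        # placeholder adapter image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                     # scratch volume shared by both containers for the pod's lifetime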
And there is another case.
Let's say I have the same main container,
and here I have an external web service
from a third-party vendor.
Dealing with that external web service
means there is a separate authentication mechanism,
handling for when the service is not available,
retry patterns, circuit breaker patterns,
and implementing some kind of fallback mechanism.
There are a lot of things that I would need to handle
in the main container to call this service
and work with this external service.
So instead of having all that logic in the main container,
how about creating a sidecar
and offloading all those responsibilities to it?
Then, as the main container,
I will simply call it on localhost.
Like any local call, I will call this method,
and the sidecar is going to take care of dealing
with the external service.
All the authentication, retry,
and circuit breaker logic resides there.
So in this case, the sidecar kind of acts as a proxy;
it acts as a proxy for this external web service.
We call this pattern the ambassador pattern.
It is also providing helper functionality,
but the use cases are different for these two scenarios.
Okay.
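The ambassador case is structurally the same, just with a proxy container instead of a log adapter; everything here is a placeholder, and only the containers section is shown:

  containers:
  - name: main-app
    image: my-app:1.0
    env:
    - name: VENDOR_API_URL
      value: "http://localhost:9000"   # the main app talks to the sidecar as localhost
  - name: vendor-proxy                  # ambassador: handles auth, retries, circuit breaking
    image: my-vendor-proxy:1.0          # placeholder proxy image
    ports:
    - containerPort: 9000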
And always keep in mind
that if you are going to schedule a pod,
let's say Velocity schedules a pod
with two or three containers,
one main and two sidecars,
and in the Kubernetes cluster
this pod is scheduled to run on node number two,
then all the containers will run on that same node.
That is a characteristic of a pod.
Because all these containers can reach one another
via inter-process communication and localhost,
one container can call another as localhost
on its port number and reach it,
and vice versa.
Right, so all the containers in that pod
will always land on the same node.
It's not that C1 runs here, C2 runs there,
and C3 somewhere else.
That is not the case.
All the containers within the pod specification
will always land on the same node.
Okay, just keep that in mind.
All right.
With these discussions, our pod topic comes to an end.
We deep-dived into the pod.
We learned the pod lifecycle commands,
the different things that will be in the pod specification,
and the use cases for multi-container pods.
With this, I'm going to give you a quick pause.
You can directly try the 1.6.pod.full.yml file
that has all the concepts in it.
You can try it, or you can spend some time
going through the specifications
and the documentation.
I'm going to give you 10 to 15 minutes, let's say.
After that, we will move to the next resources,
that is, ReplicaSets and Deployments.
Is this clear, guys?
Are we good with the pace?
Is it all clear?
I need a few confirmations, please.
Okay, thank you.
Please, okay, go ahead, please.
