Managing APIs with MuleSoft Anypoint Platform Training Course


language: EN

we'll be using MuleSoft to integrate APIs.
So let's say there are two or three applications in a business.
A business has its own workflows — say, one workflow specifically for logging in users or administrators, a workflow used purely for login purposes.
And you have a product catalog, with APIs related to that catalog — add a product, delete a product, modify a product. So, workflows related to that specific product implementation.
And let's say you have delivery: a user orders a product, and you track the delivery and mark whether it has been delivered or is still on its way. So, workflows related to the delivery aspects.
So every organization has its concerns spread across different implementations — whether that's a specific product catalog, a product, a specific implementation aspect, a delivery aspect, or the management of data behind the scenes that customers don't directly interact with. I mean, customers often don't even know we're managing that data behind the scenes. For all these purposes, we have different APIs integrated with these systems in order to enable system-to-system communication — all kinds of system-to-system communication.
What I mean is: let's say you have a couple of web applications running, each doing a specific job. Some of those jobs are front-end, user-facing applications — user interface applications like shopping applications or mobile applications. The end users, the customers, interact with our system through these applications. And we have back-office applications that analytics people work on. Let's say we have a business analyst team that mostly works on the data produced by the front-end applications, gathering insights from that data. That's a backend application: the company's end users or end customers will not be interacting with it. But this backend application does need to interact with the user interface applications.
So this is a typical scenario where we'll be using MuleSoft. MuleSoft is a kind of middleman, you can say, which helps us integrate different applications that are tightly interdependent as far as data and communication are concerned. You're getting my point, right? Exactly. Yeah.
So the word "translator" is fine as far as our human understanding goes. But MuleSoft needs a specific way to encode and decode communication, right? That's where APIs come into the picture. To put it in layman's language: it takes care of the APIs that come inbound into MuleSoft, and it provides outbound capabilities — a couple of places where it can drop data, and the other applications fetch the data from there. That's where it all fits into the ecosystem.
So basically it unifies the data to deliver a single view. Let's say you want all these applications managed in a single ecosystem, a single environment, so that customers or clients get a collective, unified experience. And we don't want to hunt around for a separate tool for each and every automation process — MuleSoft has all those tools integrated within itself, so we don't need to go looking for some other tool for our end-user implementations. So that's an introduction to MuleSoft and how it fits into the whole ecosystem.
And when it comes to the technology MuleSoft is built on: right now, all MuleSoft development and deployment is done on top of Java. The latest version of MuleSoft being implemented runs on Java 17, but there is still long-term support for Java 8 — MuleSoft started its journey with Java 8 — and it is built on the JVM.
When I say JVM — to give you a brief introduction, that's the Java Virtual Machine. Whenever you create Java applications, Java enterprise servers, or anything written in Java, they need a runtime environment to run in, right? That is provided by the Java Runtime Environment, and we also have the Java Development Kit. So in order to deploy a MuleSoft application, develop one, or test one, we compulsorily need Java. It's a prerequisite, you can say.
But we won't be interacting with Java heavily — that's handled by MuleSoft itself under the hood. So you won't have much direct interaction with Java, and there's no expectation at all for you to understand Java. That said, if you're hiring people for MuleSoft work, having a little understanding of Java gives them a reference point for how the control flow happens and how things move from one point to another inside MuleSoft. So that's how it sits in the whole ecosystem, basically.
But how exactly are things built in MuleSoft — what exactly is the development environment for MuleSoft? That's what we call Anypoint Studio (formerly Mule Studio). MuleSoft was initially a kind of open-source implementation, but later Salesforce took it up — much like Java started out open source and Oracle later took it over, right? Similarly, MuleSoft started its journey as an open-source tool and Salesforce later acquired it. Initially we had Anypoint Studio as the integrated development environment — the IDE, right? But later, when Salesforce took over MuleSoft, the journey transformed into the Anypoint Platform.
It's kind of like — if you're aware of Microsoft Azure or AWS? Are you aware of them? Yes — so basically we have a portal, right, where a person goes in, spins up a virtual machine with their credentials, and bang, things get delivered to them through a web interface. That's what the Anypoint Platform is. To sum up, it provides you something very similar to what Azure provides — except that one is Microsoft Azure and this one is MuleSoft. That's pretty much the difference.
Anypoint Studio itself is the development environment where a developer comes in, creates connectors, works with the palette and the canvas, builds Mule flows, and integrates implementations with the help of Anypoint Studio. I'll walk you through Anypoint Studio in later sessions, but that's a brief introduction.
And how exactly does MuleSoft work? Say you have different developers, or you're working on a system where you want to deliver your compute capabilities in the form of APIs. Say you have a specific use case where you want rate limits on your APIs — for example, your clients are entitled to hit a given API only 2,500 times per day. You want all these complex mechanisms built on top of your application without having to maintain those custom integrations yourself. So, from the examples I gave: you want rate-limit implementations, or different API keys so you can onboard different users onto a single tenant. That's one of the use cases where you'd integrate MuleSoft — you deliver a single platform to different users, so you can bill them, or scale things much more easily, without much headache over scaling and other aspects. MuleSoft takes care of all those things built on top of your API implementations. So that's how MuleSoft sits in the whole ecosystem.
So, do you have any questions so far? Okay, cool.
So let's move our discussion to MuleSoft Anypoint. As I've already told you, the MuleSoft Anypoint Platform is a single solution where you can deliver things entirely within a web application. That's all — you don't need to install anything. Whereas, as I told you for Anypoint Studio: it's an IDE, and you need all the setup already done in your local environment. You need the Java Virtual Machine installed, the Java Development Kit installed, at least 10 gigs of RAM in your machine, and at least 500 GB of hard disk. Those are the prerequisites to run Mule Studio.
But that would be difficult for an administrator who doesn't want to develop things, yet does want control over things, right? That's where MuleSoft Anypoint comes into the picture. It gives an administrator — or a development person as well; when I say development person, I mean a developer, an integration person, or an API specialist, that is, a person who controls API deployments or implements user onboarding and those aspects of a MuleSoft implementation — a go-to solution, instead of having to do all the local setup. So it's the easy option.
And it provides — let's say you have a fully packaged MuleSoft application delivered by your developers, and you want to run that specific package and test it out. Say you have different teams inside your project: a developers team, a testers team, and a deployment team, and you want this separation of responsibilities between each of them. The developers build an implementation — a MuleSoft application, application code — and export that code as a JAR artifact. You take that JAR artifact and hand it to the testers team, who test the application; they don't need any of the implementation details the developers worked on. So you want varied responsibilities across each team while keeping the teams integrated. In those scenarios, MuleSoft Anypoint comes into the picture.
So you've got it, right? Basically: different implementation teams that want to work on the same project in parallel — in those scenarios the MuleSoft Anypoint platform comes into the picture. We don't have that luxury with Anypoint Studio. Anypoint Studio is a local runtime environment, you can say, where things are easy for local development and local testing. But that's not the way current organizations or businesses work. Things should move on to production: once something is developed and tested, we don't want big-bang releases, right? So that is the thing. And we also have a unit testing framework inside MuleSoft Anypoint. I'll be showing you all of this — feel free to stop me if you think anything is lagging or you don't understand something yet.
Now, to navigate quickly through the Anypoint Platform — we have a couple of things here. You can see, right, Anypoint Code Builder. It's a new feature inside the Anypoint Platform, currently in beta, so I don't think it's worth discussing in depth right now, but I'll give you a brief idea. As I've told you, Anypoint Studio has — not a couple, a lot of downsides right now, a lot of cons compared to the Anypoint Platform. Anypoint Code Builder is an integration tool being developed inside the Anypoint Platform itself, as a separate implementation. Right now, if you look here, the API Manager has its own route, but whatever you do inside API Manager runs on the same Anypoint Platform: the look and feel is the same, single sign-on is the same, and you don't need separate credentials to work with it. Similarly, Anypoint Code Builder is tightly integrated with the Anypoint Platform. You could say that once Code Builder's development is complete and people are good to use it, we won't need Anypoint Studio anymore: a developer can do workflow design, workflow implementation, workflow testing, and send mock requests to a workflow, all with just Code Builder.
So yeah, this is the API Manager — you're able to see my screen right now? Good, yeah. The API Manager is the go-to solution when you want to create an API and deploy it. For deployment we have Anypoint Exchange and Anypoint Runtime Manager, where we'll be deploying our APIs. But when it comes to designing an API — let's say I have this Product API already created. You can see the metrics for that specific API that's deployed over here. Say you want to look at a specific implementation: this Anypoint API platform, this API administration view, gives us tools to choose the runtime environment. Here we are using Mule 4 as our runtime — I developed this Product API with Mule 4, and I'll show how we do that inside Anypoint Studio in later classes. You can also see the version we're using, the instance we deployed the API to, the errors we got, and how many test requests it has been hit with. Right now no one has access to this. But say I push it into a production environment and things go wrong in a few scenarios — in those scenarios we'll have metrics, and policies we can enforce on this specific implementation. That's how the API Manager works; we'll go through the details in later classes.
And when it comes to API Governance: it basically provides a set of policies that we can define. Let's say you're working on European projects. In European projects you'll have strict data-management policies, where you're required to keep a specific customer's data for only a specific period of time, and then erase it completely — hard erasure, you can say, not soft erasure. To make sure a platform, or an application developed on top of the platform, complies with all the policies pushed by government organizations or agencies, you'd use this API Governance. So it's a kind of policy management platform, in simple words: you create policies and enforce those policies on the platform you're working on. That is API Governance. Basically it's used for mature projects — not for sandbox environments, development projects, or testing environments, but for projects already deployed and running under production-like load. And the people who use these projects, and where they work from, also matter for API Governance — if you want to use it, where the people using the application work from matters as well. So it's for the more mature projects.
And the Runtime Manager — as I've told you: say you have developed your application. We call applications developed inside MuleSoft "Mule applications". So your developers worked in MuleSoft and created an artifact, the result of the development effort they've put in, and you want to deploy that artifact — "you" in the sense of the person who manages the instance of the application. Basically, the job roles inside the MuleSoft ecosystem revolve around this. It's kind of a tough topic, but if you want to understand how things go in huge organizations where separation of concerns is present: developers work only on the API implementations, the testing team works only on testing, and the deployment people — site reliability engineers — work mostly on creating continuous integration and continuous deployment for the Mule applications and making sure the uptime metrics are met. So the Runtime Manager is mostly used by the deployment folks, what we call site reliability engineers.
So we have a couple of panes here; basically, each pane is responsible for its own deployment aspect. Let me show you: you have this test application, and it is deployed on CloudHub. CloudHub is a one-stop solution for your deployment effort inside the Anypoint Platform. We have a separate session on how to deploy things to CloudHub, but it's a kind of target cloud, you can say: you deploy there and share the URL with others, so that any person anywhere on the globe can just utilize and consume this API with the credentials and the URL you have provided.
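As a sketch of consuming such a deployed API, here's how a client might build an authenticated request. The URL, header names, and credentials below are placeholders invented for illustration; the actual host and the headers a given API expects depend on the policies applied to it.

```python
import urllib.request

# Hypothetical CloudHub URL and credentials -- placeholders, not a real app.
BASE_URL = "https://product-api.us-e2.cloudhub.io/api/products"
CLIENT_ID = "demo-client-id"
CLIENT_SECRET = "demo-client-secret"

def build_request(url, client_id, client_secret):
    """Build (but do not send) an authenticated GET request.

    APIs with a client-enforcement policy commonly expect the
    client id/secret as headers; header names vary per policy.
    """
    req = urllib.request.Request(url)
    req.add_header("client_id", client_id)
    req.add_header("client_secret", client_secret)
    return req

req = build_request(BASE_URL, CLIENT_ID, CLIENT_SECRET)
print(req.full_url)
print(req.get_header("Client_id"))  # urllib capitalizes stored header names
```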
So we can also define the deployment model here, and we have a couple of deployment models. The CloudHub deployment model is one kind. Or, say you have your own servers — on-premise servers — and you want to keep all your networking-related concepts there: the network proxies and the compute resources you have on-premise. Basically you want your own user-managed platform, but you want to integrate that on-premise environment with the Runtime Manager. Say you have five or six APIs being delivered to your clients, and you don't want to redeploy all of those solutions, but you do want to manage all of them inside the Anypoint Platform so you have a holistic view — you want Anypoint just for the metrics, while for deployment and storage you keep using your on-premise environments. You can just pick that deployment model, connect your on-premise servers to the Anypoint Platform, and utilize it. You got my point, right? You can just pull in the things already deployed on your on-premise servers — it's a kind of lift-and-shift mechanism, where you manage everything under a single roof. That's how it works.
And you also have Flex Gateways. Flex Gateways are like this: you have your application deployed inside CloudHub, and you don't want to hand your URL straight to the end users. Say you have a pricing model — you want to serve a specific API only to those clients who have signed up for your subscription, and you want a more secure way of delivering that API. In those scenarios, you deploy a single tenant and separate each and every user inside that tenant. You don't want one URL that you distribute to everyone, because the separation is what provides security for the application. You want separation of concerns in terms of gateways and the deployments you've done: you deploy a specific gateway — say you've deployed it in a specific Kubernetes cluster — and for each and every customer you create a new endpoint, delivering that endpoint only to that customer. And whenever, say, a customer doesn't want to continue their subscription, you have granular control over what that customer can do inside your application. So basically it's a way of limiting delivery of API access — you want to stop access to those API endpoints you created for them. For that we use these gateways, what we call Flex Gateways. This is a new concept in the Runtime Manager, inside the Anypoint Platform. So that's one of the use cases — allow lists and all. These are for production systems.
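The per-customer endpoint idea can be sketched roughly like this. Everything here — the class name, the path scheme, the upstream URL — is invented to illustrate the onboarding/revocation logic; a real Flex Gateway is configured declaratively, not in application code.

```python
import uuid

# Sketch: each subscriber gets a dedicated route behind the gateway,
# and cancelling the subscription revokes that route.
class GatewayRoutes:
    def __init__(self, upstream):
        self.upstream = upstream
        self.routes = {}     # customer_id -> dedicated endpoint path

    def onboard(self, customer_id):
        token = uuid.uuid4().hex[:8]             # per-customer path segment
        self.routes[customer_id] = f"/v1/{token}/orders"
        return self.routes[customer_id]

    def revoke(self, customer_id):
        self.routes.pop(customer_id, None)       # subscription ended

    def resolve(self, customer_id):
        # Route to the upstream target only for active subscribers.
        path = self.routes.get(customer_id)
        return (self.upstream + path) if path else None

gw = GatewayRoutes("https://internal-orders.example")
endpoint = gw.onboard("acme")
print(gw.resolve("acme") is not None)   # True while subscribed
gw.revoke("acme")
print(gw.resolve("acme"))               # None after revocation
```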
Now, you have these servers — I didn't explain this yet. Say you create a server and provision it with, say, five gigs of RAM, and there is a lot of traffic hitting your API under peak conditions. Say there's a sale going on for your application, there are lots of hits, and you haven't signed up for any auto-scaling for your servers. In those scenarios you don't want to return 500-level responses to your clients — that would mean bad implications, a bad user experience for the clients. So you can just sign up for these alerts, and these alerts fire whenever the threshold you set is crossed on your servers, notifying a specific channel. Say you've provided your mobile phone as one of the alert channels — a call, an SMS, a Slack notification, an email notification, anything. We also have a couple of third-party integrations: say your organization has its own email system and you don't want to integrate it with Salesforce for security reasons — you can integrate that for alerting as well. It's a very flexible system compared to other API platforms like Google Apigee, the platforms that are rivals for these kinds of implementations.
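To make the threshold idea concrete, here is a tiny sketch of the check the alerting system performs for you. The metric names, thresholds, and notification callback are all assumptions for illustration.

```python
# Fire a notification when a server metric crosses a threshold.
def check_thresholds(metrics, thresholds, notify):
    """metrics and thresholds are dicts like {"cpu_percent": 92}."""
    fired = []
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value >= limit:
            notify(f"ALERT: {name}={value} crossed threshold {limit}")
            fired.append(name)
    return fired

sent = []
fired = check_thresholds(
    metrics={"cpu_percent": 95, "memory_percent": 40},
    thresholds={"cpu_percent": 90, "memory_percent": 80},
    notify=sent.append,   # stand-in for an SMS/Slack/email channel
)
print(fired)  # only cpu_percent crossed its threshold
```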
And these are VPCs — those are out of scope for discussion right now. A VPC, a virtual private cloud, basically gives you more granular control over which applications are deployed and what networking mechanism is being utilized. I don't think it's much required here — but you know load balancers, right? A typical load balancer balances load between different instances. Say you've opted for horizontal auto-scaling: in those scenarios a new server gets added to your deployment, and you want them to auto-scale, but you don't have a single point of contact. Each server has its own IP address associated with it, but we don't want to hand out individual servers to different users — that would be a difficult task, and it would become very hard to maintain in the long run as well. Say you have 50 or 60 servers and you don't want to manage all of them manually — that's where load balancers come into the picture. So that is the thing.
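The load-balancing idea boils down to this kind of rotation: one stable entry point in front of many auto-scaled instances. A minimal round-robin sketch (the IP addresses are made up):

```python
import itertools

# One stable entry point rotating requests across many instances.
class RoundRobinBalancer:
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)   # endless rotation

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
picks = [lb.next_server() for _ in range(4)]
print(picks)  # wraps back to the first server on the fourth request
```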
So that's a whole overview of the Runtime Manager — do you have any questions so far? And this is the Visualizer. The Visualizer is a part of the platform where you can visualize how your network aspects are implemented. Say you've created a VPC and deployed tens of servers — say around 25 or 30 — and you want to understand how the communication takes place: which are the user-facing servers, and which are the backend servers accessible only to our own people who connect through the organization's VPN. It gives us simplified dashboards where we can see the applications we've integrated — management views, basically, is what the Visualizer provides.
And these are the monitoring tools. Say you've deployed a specific application inside MuleSoft and you want to monitor that implementation — we have metrics and alerts here. As you saw earlier, we have lots of servers, and if you want to add a server there are different ways to add one: a Kubernetes server, an OpenShift server, a Linux server, and more. Say for one of your applications you go with a Kubernetes cluster, and another application's architecture suits a Linux-based implementation. Kubernetes has its own metrics-management system — you'd use something like Prometheus for Kubernetes, and for Linux-based systems something like the ELK stack. But those are all different platforms: for the Kubernetes servers you'd go to Prometheus dashboards, for the Linux servers you'd go to different dashboards, and managing all of them in different places becomes really tiresome, right? That's where these built-in dashboards come into the picture. If we want our own tailored dashboard, we create one in these scenarios, providing the metrics we want to see. Once we've deployed a specific application — in the next sessions — we'll have everything populated here, so we can get hands-on with it. So when we want to monitor everything in a single dashboard across different deployment targets, we'll be using this Monitoring section.
So that's that — and now Access Management. This is one of the most important things in the whole Anypoint implementation. Say your organization wants to adopt Anypoint, and you have around 10 applications you want to integrate with the help of these APIs — Anypoint, specifically. And say you have around 15 developers, five testing teams, and two SRE teams. Each team has its own boundaries for how it works with the Anypoint Platform. Developers mostly need access to specific implementation tasks — running the application on a local machine, debugging things there — so we want to restrict their access to exactly those aspects. For the testing users, we also need different mock APIs, because testers mostly work against mock servers — so in those scenarios we need to grant the testers access there as well. The deployment or SRE team needs access to the physical resources, the servers we've deployed on different clouds. If you want to integrate Microsoft Azure here, you'll have an API key created by Azure, and you integrate Azure with the Anypoint Platform — or Salesforce with Anypoint, or Google Apigee with Anypoint. The SRE team works on those different scenarios, so they need access for them. So we have role-based access controls here: we have broad roles, and a specific organization can also create its own custom roles to give more granular roles and responsibilities to a single user, so they can tailor their access needs — it's not one-size-fits-all. That's where the Access Management dashboard comes into the picture.
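The role-based access control described above can be sketched as a simple permission lookup. The role and permission names here are invented for illustration; Anypoint ships its own set of built-in roles.

```python
# Broad built-in roles, plus support for organization-defined custom roles.
ROLES = {
    "developer": {"run_local", "debug"},
    "tester":    {"run_local", "use_mock_apis"},
    "sre":       {"deploy", "manage_servers", "manage_cloud_keys"},
}

def can(user_roles, permission, custom_roles=None):
    """True if any of the user's roles grants the permission."""
    roles = dict(ROLES, **(custom_roles or {}))
    return any(permission in roles.get(r, set()) for r in user_roles)

print(can(["developer"], "deploy"))            # False: devs can't deploy
print(can(["developer", "sre"], "deploy"))     # True via the SRE role
# A custom role tailored for a release manager:
custom = {"release_manager": {"deploy", "approve_release"}}
print(can(["release_manager"], "approve_release", custom))  # True
```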
And as I told you: say you want to integrate Salesforce with the Anypoint Platform. But how does the Anypoint Platform know who you are, and that it's allowed to work with your organization's cloud account? You'd provide an SSH-key-based authentication mechanism or an API-key-based authentication mechanism, and you need to store those keys somewhere. The Secrets Manager gives you that luxury: it stores all the SSH keys, SSL certificates, public keys, API keys, and so on. And the Secrets Manager is integrated with Anypoint Studio as well. As I've said, Anypoint Studio is the local development mechanism where developers work inside an IDE itself. If you're aware of Java development, you know these IDEs — the Spring Boot tooling, PyCharm-style IDEs — integrated development environments where you can do testing, debugging, and development all in a single place. If you want to connect the Anypoint Platform to those third-party applications or third-party APIs, you can just use the Secrets Manager. It has hooks associated with it: you create a hook, wire that hook into your local machine's environment variables, and then you have access to all those third-party APIs as well, with the help of the Secrets Manager.
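The "hook into an environment variable" flow can be sketched like this: the application never hard-codes the key, it only reads a variable that the secret store populates. The variable name below is an assumption for illustration.

```python
import os

# The secret lives in a manager; the app reads it at runtime via an
# environment variable instead of hard-coding it in source control.
def get_secret(name, env=os.environ):
    value = env.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} not provisioned")
    return value

# Simulated environment, as if the hook had populated the variable:
fake_env = {"AZURE_API_KEY": "s3cr3t-from-secrets-manager"}
print(get_secret("AZURE_API_KEY", env=fake_env))
```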
So that is the overview of the Anypoint Platform. And you also have Exchange over here. Exchange is like a public platform for users. Are you aware of Docker? Docker? Okay — Docker is basically a containerization and shipping platform. Say you develop code on your local machine and you want to deploy that code somewhere else, somewhere very different from your development environment. You create an artifact and ship that artifact to the places where you want to deploy — whether that's a Kubernetes cluster, a traditional Linux machine, or anything else. In the same way, you want your JAR artifact published somewhere, so that any user who wants it can just go to the marketplace, fetch that specific artifact, and use it for their own purposes. That's how Exchange works: basically it's a kind of marketplace where you publish things and other people utilize those already pre-developed APIs, connectors, or templates. We'll be going through Exchange as well. And the Design Center — we'll be going through RAML implementations, and that's where the Design Center comes into the picture.
So that's the brief introduction to how MuleSoft hangs together — it's a lot. You can stop me anywhere if you think something is unnecessary or unclear. Okay. So these are the things I've already covered. Basically, with this API Manager you can write your RAML right away, create your URLs, and test those URLs as well. We'll be doing that in the subsequent hours.
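Since a RAML spec is where the design of something like the Product API starts, here's what a minimal one might look like. This is a hypothetical sketch, not the actual API from the demo:

```raml
#%RAML 1.0
title: Product API        # hypothetical spec, for illustration only
version: v1
baseUri: https://product-api.example.com/api

/products:
  get:
    description: List products in the catalog.
    responses:
      200:
        body:
          application/json:
            example: |
              [{ "id": 1, "name": "Sample product" }]
  post:
    description: Add a product to the catalog.
  /{productId}:
    put:
      description: Modify an existing product.
    delete:
      description: Delete a product.
```

Resources map directly to the catalog workflows mentioned earlier: add, modify, and delete a product.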
And Runtime Manager, it's specifically,
let's say, for cloud implementations.
We have specific Java-based runtimes,
and for Python-based API integrations,
we have Python runtimes.
So if you want to run or integrate
different runtime implementations,
let's say JavaScript or Java or Python based code,
you can manage that there.
Basically we'll be using that with the help of DataWeave.
DataWeave is a data transformation language
that we'll be using in MuleSoft.
We'll be discussing that later.
But yeah, those DataWeave implementations
require our runtime managers so that we can define
exactly which language we're using
and we can integrate that.
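As a taste of what DataWeave looks like, here is a minimal sketch that maps an incoming payload to a smaller JSON shape — the `products` array and its field names are made up for illustration:

```dataweave
%dw 2.0
output application/json
---
// Map each incoming product to a smaller JSON object.
payload.products map ((p) -> {
  name:  p.productName,
  price: p.price
})
```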
So it's like multimodal integrations
inside the MuleSoft and MQ.
So basically this MQ is like,
if you know Apache Kafka or cloud native implementations
like Azure Service Bus or AWS SNS
or those Service Bus mechanisms.
So basically with these MQs, or messaging queues,
you'll be publishing a message,
and an API which is subscribed to this queue
will be continuously polling the queue
that it has subscribed to
and continuously listening over there,
so that we can integrate a publisher
and subscriber mechanism with the actual deliveries
that we are going to work on.
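The publish/subscribe pattern can be sketched with plain Python. This is only an illustration of the idea, not Anypoint MQ's actual API — the `Broker` class and the topic name are invented:

```python
import queue

# Minimal publish/subscribe sketch. Anypoint MQ offers this model as a
# managed service; the names here (Broker, "order.created") are illustrative.
class Broker:
    """Routes each published message to every queue subscribed to a topic."""
    def __init__(self):
        self._topics = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self._topics.setdefault(topic, []).append(q)
        return q  # the subscriber keeps polling this queue

    def publish(self, topic, message):
        for q in self._topics.get(topic, []):
            q.put(message)

broker = Broker()
deliveries = broker.subscribe("order.created")
broker.publish("order.created", {"orderId": 42, "status": "PLACED"})
print(deliveries.get_nowait())
```

Because the broker, not the caller, fans messages out, the publisher never needs to know which APIs are listening — the decoupling described above.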
So basically it's like kind of architectural decision
that we have made to opt for this publisher
and subscriber mechanism and use of integration tool.
Like this is a kind of use of integration tool itself.
We call that as messaging queue
and it is tightly coupled with MuleSoft,
in the sense that this MQ has its
cloud-native implementation inside MuleSoft.
When I say cloud native,
you don't need to go to any other specific cloud
or manage it somewhere else.
You have, I mean, you can just manage it
under the same roof of Anypoint.
So that is the thing.
So yeah, that's all about Anypoint.
As we go through the practical things
so you can just correlate things
what I've told right now much more easily.
And till now I've said API, API, API
a lot of times, right?
So what is this specific API?
How exactly does this API come into the picture
with respect to MuleSoft?
So basically this API,
do you have any like
previous experience with the development
or those scenarios or software development areas?
Okay, good, okay.
So basically API is the
short form of Application Programming Interface.
So this API is basically, I mean,
let's say you have three applications
and these three applications need to communicate
between themselves.
Right now we are using English
as our language of communication.
But our programs don't know English, right?
So they need a specific convention
through which they can communicate
with each other.
So that's what an API is.
So API has its own rules and like a hard-coded rules
where if you wanted to get some information
from a specific deployed application,
you want to query that first.
I want this information from you.
You want, I mean, you should give me this information.
That is the query you pass
and it responds to you in a specific way
as a JSON response.
JSON is JavaScript object notation.
So basically, let's put it like this:
you have this request
and you have this response provided by the API.
So the API is what works
between the request and the response.
So how do you provide request is one thing, one question.
You have to have solution for that
and how it provides the response.
We need to have information regarding that as well.
Let's say it responded to you in Arabic.
You can't even understand it, right?
So there should be a pre-agreed convention
between a caller and a provider.
So basically that pre-agreed convention
is what an API defines.
Basically you have a specific API specifications
like I'll be providing you a request in specific form
and you have to listen to this request
and you have to provide response to me
in this specific form.
So that is the way an API works basically.
So it consists of all these requests
and response based scenarios.
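That request/response contract can be sketched in Python — the endpoint behavior and every field name here (`action`, `getProduct`, the response fields) are hypothetical, just to make the convention concrete:

```python
import json

# Sketch of the pre-agreed convention: the caller sends a JSON request in
# an agreed shape, the provider answers in an agreed shape.
def handle_request(raw_request: str) -> str:
    request = json.loads(raw_request)
    if request.get("action") == "getProduct":
        body = {"id": request["id"], "name": "Sample product"}
        return json.dumps({"status": 200, "body": body})
    # Anything outside the convention is rejected, in an agreed shape too.
    return json.dumps({"status": 400, "body": {"error": "unknown action"}})

response = json.loads(handle_request('{"action": "getProduct", "id": 7}'))
print(response["status"])  # 200
```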
But how does this API, I mean, sit with MuleSoft basically?
As we have already discussed,
MuleSoft is a kind of integration platform,
first and foremost.
It provides us a way to integrate
different applications which are deployed
across different environment.
And sometimes we don't even know where,
I mean, it is kind of abstracted from us
where a specific application is deployed.
But as long as you are hitting that application,
it is providing you response
with the help of API keys and all.
So that's where, let's say three developers worked on
one of the implementations.
So let's say you have a vendor with you
and that vendor takes the responsibility
of delivering the API.
And you wanted to build an application on top of this API
and you wanted to deliver this as a kind of product
to your clients.
So you have a dependency with one of the teams.
And that team continuously deploys and delivers
everything like right now,
this week you have API version V1
and going forward on next week,
they deployed API version V2.
But right now your application mostly works with API V1
and you don't know exactly which conventions
changed in API V2.
So let's say, as you, I mean,
if you know how API convention works,
basically, let's say the version changes happens
and the predefined convention also changes
with those specific version changes.
And you have to understand what exactly changed,
are there any breaking changes over there and all.
So when it comes to version changes,
managing and maintaining those version changes
becomes a lot of work for us
when we manage them as standalone implementations.
So that's where MuleSoft sits.
So for all these API conventions,
this is like being abstracted by MuleSoft itself.
So basically this area is where you integrate the MuleSoft
so that you can have integration with the dependency teams
and the product that you're delivering to your customers
so that you have your fine-grained control
over which API version you use and all.
So that's where the upstream providers and the end consumers,
where the end product is delivered,
are connected with the help of MuleSoft.
I guess that makes sense.
So till now we discussed what an API is,
but this is a kind of broad definition.
We haven't gone till into specifications
how this API is designed,
how this API actually created and all.
So that's where RAML language comes into picture.
So as I have told,
there is a pre-agreed or predefined convention.
So what is that specific predefined convention?
That is RAML.
So basically RAML is a kind of universal language
you can say in terms of MuleSoft.
And we have other applications as well,
which are created on top of RAML language
that is out of topic right now.
So, but yeah, RAML is a language people use
to create their APIs and just publish their APIs
inside this Anypoint Exchange
so that people can just grab those RAML specifications
and build their applications
or build their integrations on top of that.
So that's where RAML comes into picture.
So whenever, I mean,
I mean, till now I'm telling implementation,
implementation and implementation.
So what is this specific implementation?
So we want to define the way an endpoint responds.
So let's say I have created an endpoint
called the products endpoint.
And basically this products endpoint
is mostly responsible for providing you
a list of products. Like, you have a catalog with you,
and this catalog says that you have product one,
product two, et cetera.
So you wanted to tell your application programmers
that I have this products with me, but how can you tell?
So you'll be implementing a product API.
And this product API is like responsible to throw
what is the catalog that you have.
So whenever let's say a developer hits this product API,
he'll get the list of products.
It doesn't matter if your implementation has changed a lot.
Let's say you have migrated from a Java based implementation
to Python based servers,
And a lot of your architecture has changed.
You don't care about any of those.
You just tell your integration team
that let's say you wanted to have list of products,
just hit this.
You don't need to care about what exactly,
how exactly it is implemented.
So that's where the RAML API specification
comes into the picture.
So it's basically you hit this, I'll give you this.
So that's where it works.
So we'll be defining what exactly things are going to be.
Like how you handle the incoming request inside your API
so that you can process the API and interact with this API
to our backend systems.
So let's say you have a backend database connected
to your implementing application.
So let's say right now this is a get request.
So if you have an idea on how HTTP request works
in software based implementations,
we call that as restful implementations.
So these RESTful implementations give you the picture
that, for a safe request like a GET, no matter what you've done,
the system's state before your request
and after your request should be the same.
So let's say I wanted to post a product to your catalog.
You can't just go to database by yourself
and push the product into that.
So you wanted to have an API with you.
We call that as a post API or put API.
So basically, POST API or PUT API,
you can stop me if you don't want to listen to this,
because this is a bit away from MuleSoft itself.
But if you want to understand
MuleSoft implementations, we have to understand this terminology.
This is a kind of prerequisite.
So POST and PUT, it's a kind of request
that you provide to the application at the API
level, not at the database level.
So for the three layer implementations,
first we'll be having user level where end user directly
interacts with the application and we have business logic
and database layer.
And in this whole ecosystem, MuleSoft comes in the business layer,
I mean, the business logic layer.
But let's say you wanted to create a catalog
or add an item to the catalog, you'll
be having post request with you.
So the user pushes a post request.
It goes to the business logic layer
and you'll be defining what exactly
you need to do in terms of how the request to respond.
The internal implementation details,
like what database you use,
whether I use PostgreSQL instances, MongoDB, or Cosmos DB,
that user doesn't need to care about.
So basically, he just wanted to push that
and it should be reflecting into database.
So that's how put and post work.
And we also have this get as well.
And we have this patch.
We'll be discussing the implementations later
with the help of code examples.
And we have these GET APIs and DELETE APIs and then POST.
So these are the primary methods
through which a user can interact
with a business logic implementation.
So that's how we'll be designing RAML implementations.
So that's the thing.
And that is one of the use case of RAML implementations.
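Put together, a minimal RAML sketch of the products endpoint described above might look like this — the title, paths, and fields are illustrative, not taken from a real course file:

```raml
#%RAML 1.0
title: Products API
version: v1

/products:
  get:
    description: Return the list of products in the catalog.
    responses:
      200:
        body:
          application/json:
            type: array
  post:
    description: Add a product to the catalog.
    body:
      application/json:
        properties:
          productName: string
  /{productId}:
    delete:
      description: Remove one product from the catalog.
```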
And when it comes to RAML management,
it's not about just providing implementation one time
and your job is done.
It's not like that.
So we have to maintain that specific implementation as well.
So let's say you have a written implementation.
You need to provide a documentation for that.
So mostly, MuleSoft developers are the middlemen.
They have upstream developers as well,
and they have downstream application users as well
who will be consuming these APIs.
So you need to have documentation with you,
which is like kind of well described documentation.
So as the RAML developers, we are
responsible for delivering this management of,
I mean, the documentation management for this as well.
And we have to manage the lifecycle of APIs.
So this is a kind of most important thing
when it comes to RAML based implementations.
So we have this expiration, or deprecation, for APIs.
So basically, APIs will be retiring some time later.
So we have this convention for an API across its whole lifecycle.
That's mostly large applications or businesses
who have growth associated with them.
Their APIs will be looking like this.
It's a kind of development tradition, you can call it.
The path might contain the catalog name,
and a segment even describes the version of API that you have.
Let's say you wanted to move to the V2 API
and wanted to deprecate all the V1 APIs that were provided
across to your clients or to the downstream implementations
or those scenarios.
You can just deploy these V1 endpoints
in a separate deployment aspect.
And again, that's an architecture decision
that needs to be decided at the starting point
of implementation.
But yeah, this is the way APIs are organized in MuleSoft.
And API here, when I say business, that might be,
let's say you have a whole lot of retail systems.
Or let's say you have maybe data warehousing,
business logic application as well.
So after `api/`,
you'll be providing your business specific application
to which these APIs are pointing.
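As a sketch, the versioned, business-scoped URL layout being described might look like this — the hostname and path segments are made up:

```text
https://api.example.com/api/retail/v1/products   # existing clients stay on V1
https://api.example.com/api/retail/v2/products   # new clients migrate to V2
```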
So you got it right.
So basically these are specifications, what we call.
This comes under a management layer.
So basically, I mean, we'll be managing the lifecycle
of the APIs as well.
After one or two years, we'll be deprecating the V1 API
and providing more sophisticated APIs
with the help of the V2 version that we release
across to all the clients.
So that's where we'll be using RAML.
So RAML has its own specification,
I mean, a mechanism in which management
can come into the picture.
So we have these different layers inside the RAML code
where we can manage our lifecycle as well.
So that is the thing.
So after this slide, we'll
be going into actual RAML code as well,
so that we can get hands-on implementation experience
as well.
And once all development is completed,
so we wanted to, I mean, we have developed the whole application
and we want to manage that right now.
We can just provide this RAML specification
to your downstream developers who
are working on top of your integration tool here,
in this case, MuleSoft.
So for them, you want to expose
this specific RAML code.
But you can't just expose that, like creating or sending
an email, something like that.
We wanted to have a sophisticated or long term
mechanism where we wanted to provide exporting or sharing
of APIs between different, different implementation
aspects.
So that's where Anypoint Exchange comes into picture.
Here, this layer itself is abstracted right now,
but this specific exchange is more specific to Anypoint
platform.
Let's say you wanted to have your own sharing mechanism.
In that scenario, you can just opt for a third party sharing
mechanism as well.
There are a lot of solutions that
are there inside the market right now
who provide solutions on the API exchanges as well, where
you wanted to publish your APIs and they have more features
on that as well, which is not exposed to the internet.
So you wanted to have closed deployments for these exchange
servers and all.
So that's where this exchange comes into picture.
But this exchange is more tightly coupled
with the API, Anypoint platform.
But it is not coupled with the RAML language specification.
RAML basically follows an open
specification approach, similar in spirit to OpenAPI.
So it's not tightly coupled with Anypoint.
And then finally, consumption.
Once you expose APIs to the end user,
it's not about just exposing them or publishing them.
And they will be using that.
So you wanted to provide them a consuming capabilities.
Like there will be different applications or systems that
needs to interact with these exposed services.
So let's say the consumers can use this API
that we have deployed with the help of the already developed
RAML code, so that they can just hit this API
and get the details.
And with the help of those details,
they can just build a UI applications and all.
So this like MuleSoft provides us different API tools
so that consumers can discover this
and consumers can integrate it with their own APIs
effectively.
So this RAML provides that specific integration.
So that's what an API is in terms of MuleSoft implementations.
So for all these definition, implementation, and management,
Anypoint and MuleSoft has its own tightly coupled solution
for them.
That's what we'll be going to discuss right now.
That is RAML.
So, as we have already covered, this is kind of a redundant one.
So if you know about YAML based implementations,
do you have any basic idea about the YAML format?
Do you have it?
So YAML is basically a data
serialization format.
So we have this JSON format.
JSON format uses curly-brace syntax.
So here in JSON, we have key value mechanisms.
We have a key and the associated value over there,
separated with a colon, basically.
And that's how almost all APIs work till now.
Even MuleSoft APIs are built with the help
of this specific JSON based data serialization format.
When I say data serialization format,
let's say you wanted to fetch some data from database
and broadcast this data to a lot of users via this API.
So that's how you'll be providing data.
The response of a specific API should be in a format, right?
That format is called JSON format.
And you have JSON inside JSON.
So let's say multi-layer key.
And you can have another layer of response over here.
And each and every layer should be separated with a comma
over here.
And you can also have arrays also.
So let's say, and you can provide comma separated values
inside the array.
So that's how it actually works.
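The nesting and arrays just described can be checked with Python's `json` module — the product fields here are invented for illustration:

```python
import json

# JSON inside JSON (multi-layer keys), plus an array of comma-separated
# values. The product fields are illustrative.
document = {
    "product": {
        "name": "Chocolate",
        "tags": ["food", "snack"],                    # a JSON array
        "price": {"amount": 2.5, "currency": "USD"},  # nested object
    }
}
text = json.dumps(document)           # serialize: dict -> JSON string
assert json.loads(text) == document   # deserialize: round-trips losslessly
print(text)
```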
So that is the JSON format, not the YAML format.
But in order to create JSON responses
or utilize the JSON responses, you
have to create a specification, an API specification,
MuleSoft API specification.
That way to create a specification
is with the help of YAML.
So when I say YAML, it's basically a format of data.
YAML basically uses a whitespace-sensitive data
mechanism, I mean, data representation mechanism.
So let's say you have a key.
It should be separated with a space after the colon,
and then the value.
You don't have the burden to maintain
all these curly braces and codes and all,
which is very tiresome to type and tiresome to maintain.
So this YAML is easy for humans to understand and act upon.
And similarly, you have this key too,
and the multi-layered YAML-based keys.
So for this multi-layered YAML-based keys,
front space should be there.
So that's how this works.
And for multi-line values, we have a separate syntax in YAML.
So let's say this is the pipe symbol, which is on the key above Enter.
With it, we can provide a multi-line value here.
JSON doesn't give you that kind of readable multi-line support.
In order to tell YAML that, OK, I do have a multi-line value over here,
and I want to provide multiple lines for it,
this pipe symbol is responsible for that.
So you will be providing line 1 and then line 2.
The whitespace is most important here,
so that YAML understands where this block stops
and what the next key is.
So you can see the difference in indentation between this block
and key 3.
And for arrays, YAML has its own syntax:
minus symbols, one per value, basically.
This is the more user-friendly one.
So that's how a basic YAML syntax works.
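Collecting those pieces, a small YAML sketch — the keys and values are invented for illustration:

```yaml
key1: value1                # key, colon, a space, then the value
parent:
  child: nested value       # multi-layer keys via indentation, no braces
tags:                       # an array: one dash per entry
  - value1
  - value2
notes: |                    # the pipe starts a multi-line string value
  first line
  second line
key3: after the block       # indentation tells YAML where the block ended
```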
Maybe while we go on while developing this YAML
or YAML-based things, it will be easy for us
to get things going on.
So that's how it uses.
So basically, it uses YAML-based mechanisms.
And apart from that, how this API design will be doing.
So if you understand REST APIs: basically, REST APIs
provide, as I have already told you, a contract.
Basically, for a safe method like GET, whatever the state of the system before calling an API
and the state of the system after calling the API,
they should be the same.
That is the hard rule for these REST APIs, basically.
So you wanted to implement APIs that
is binding to that REST API specifications.
So that's first and foremost important thing.
In those REST APIs, we have resources, methods, requests,
and response schemas.
So we'll be going over them right now.
So it requires much more deep dive
into API-based implementations.
So feel free to stop me if you don't understand.
And then the methods.
So a method is basically a REST API method.
So it has, as I've already told, these are REST API methods.
You can just use these methods to fetch things
with the help of API so that user can understand
what this API does, whether a user needs
to push data into a server with the help of post API,
or put API, or user wants to delete a specific entry
inside the database with the help of delete API.
Patch API basically does a modification of a resource
inside the database system.
So that's what a patch API is responsible for.
And get API is responsible for getting
the data from the actual system.
Behind the scenes, it's the database system,
which sits on the endmost layer of the MuleSoft implementations.
From MuleSoft, we'll be interacting with the database.
So we'll be pushing these API methods
to the business logic layer, which MuleSoft takes care of.
And then at the end, the database
is the one which persistently stores all the data
and serves us basically.
So that's how it works.
That is API method basically.
And we wanted to provide a payload to a method.
So let's say, for a GET request, you
don't need to provide a payload.
But for let's say patch request or post request,
you need to provide some kind of payload for that request.
So what is this payload exactly? Let's say, I mean,
you have created a product implementation.
And for this product, a RAML implementation,
you wanted to create a new product for your catalog.
So you don't want to go to the database,
learn database query languages,
and interact with the database directly.
You just kind of created API and provided that to your consumers.
And whatever the product catalog details regarding
product like the product name, if you can see here,
we'll be using JSON syntax over here.
So that's how API specification works basically.
It describes us what are the fields
that you have to provide.
So API specification provides what
are the fields that you need to provide inside the payload.
Here inside the payload, I provided product name.
That is the compulsory or the mandatory field
that you have to provide.
You can also design your API to provide you, I mean,
so that you can have optional fields inside your application
and all.
So optional field one.
At the end, we'll be having the closing
syntax, which is the closing curly brace.
So this is a payload.
So we'll be passing this specific payload
to the API for a POST request, or for a DELETE request, or a PATCH
request.
So that's how it works basically.
When you want to create a product,
you will be passing this, and "you" in this sense means the end user.
But what is our task over here?
We want to design how a payload should look.
So that is our end goal, to understand how this all works.
So basically, we'll be designing how a method should be,
how a request should be, how a payload should be,
how a response should be.
That's how we'll be designing API specification.
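So the payload described above, with one mandatory field and one optional field, might look like this — the field names are illustrative:

```json
{
  "productName": "Cadbury Dairy Milk",
  "optionalField1": "gift wrap"
}
```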
And when it comes to headers, so let's say
you have implemented an authorization mechanism
for your application.
And this authorization mechanism uses OAuth 2 as a standard
authorization mechanism.
And you have access control, role-based access control,
and private keys, public keys mechanisms,
and JWT tokens as one of the mechanisms.
So all these things, whenever, let's say,
a person makes a request to the client,
a client makes a request to the server,
he needs to pass his user identification
so that the server can make sure to validate him
to understand who he is basically.
So in those scenarios, we can't just
provide them in the payload, because the payload basically
has to be transmitted
over the network across different backbone layers
or backbone networks that users don't have any control over.
So headers comes into picture for that.
So basically, headers consists of all this sensitive data,
like the authentication and authorizations,
implementations, or public and private keys,
or let's say access control mechanism, like access keys,
or refresh tokens, or JWT tokens.
So these are a couple of ways to interact or ways
to authenticate between different API
implementations.
So that's how the headers work.
And a user can also create custom headers.
So what is the structure of a custom header?
You describe that with the help of your MuleSoft
RAML implementation.
So that's how you'll be defining what are the required headers
and what is the syntax of those headers
you wanted to pass and all.
So that's how it works basically.
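A hedged sketch of how required and custom headers might be declared in RAML — the header names and the resource are invented:

```raml
/products:
  get:
    headers:
      Authorization:
        description: Bearer access token, e.g. a JWT issued via OAuth 2.
        type: string
        required: true
      X-Correlation-Id:          # a custom header for request tracing
        type: string
        required: false
```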
And then the response schemas.
This is one of the most important things
that we wanted to understand about implementation.
Let's say I'll just paste a presentation example for you.
Till now I've said response, response,
response again and again, right?
So what exactly is this response?
So if you go over here, this is, by the way,
a sample RAML file.
So basically, it consists of the actual
resources that this API has.
For this, we have retail products
that we have the list of products a specific catalog has
and the retail tags on it as well.
So first, let's just go through these retail products
as to understand what exactly the product response looks
like.
So if you see over here, it is created
with the help of the RAML-based syntax
that I have described earlier.
So inside this user's key, we do have different, basically,
subkeys.
Subkeys and subkeys, we have the values, actually,
their corresponding values.
And here we are using templates.
In terms of raml, we have a lot of templates over here.
So let's say here we are using Exchange Modules
as one of the templates.
So this is the file system implementation.
And if you go here, the path will be describing,
we'll be describing a path over here, the Exchange Modules.
And inside the Exchange Modules and core.gmule templates.
And then we'll be going through this retail product.
And here, the retail product RAML.
So we'll be describing a whole rules and implementations,
where or how should the request should be, all those things.
So we are just bringing all these implementations
into a separate file so that we can just bring this own file
here itself.
So once you provided a URL, I mean, the path over here,
everything, whatever, there in that specific file,
they'll be just copied and pasted here.
It's like kind of modularizing things
so that you can just reuse those modules
or reuse those implementations wherever you want.
So that's how it works.
You will be utilizing that specification over here.
But how exactly is that being designed?
Let's just go through that.
So if you see over here, types of products.
So for retail API, we have lots of products, basically.
One of those products, there is no hard rule
that each and every product has its own type,
as it should be the same type.
They might be having different types as well.
So that's why for type of the product,
we have different identifiers.
So let's say, I mean, inside the types attribute,
we have the identifier, the ID of the type.
We'll be defining a type name or unique identifier
for a product over here and mapping this unique identifier
to a description.
So let's say for an example, it would be very easy
if we had examples, actually.
And we have different types like food, clothes, medicine.
So for a layman user, this is fine.
But for the application, you need
to have type as an attribute for a product.
And for this type, you'll be having two different attributes,
like identifier, which is a unique ID, basically.
Let's say D23 or D2345.
So this is a unique identifier.
A user can't understand this specific thing.
So we'll be defining a description as well for that.
So this description provides a one-to-one mapping
to a product to its type.
Let's say D2345 corresponds to a specific food product.
So here, I have provided you.
I mean, here, I have just provided description as a food.
But that might be anything.
That might be 12345.
But you don't want a user to provide his own things
or his own values, such as hash, hash, hash,
or dollar, dollar, dollar.
Or this might be some kind of input
that crashes the system.
So in order to restrict them, we'll
be providing rules in our API specifications.
Those will be the patterns that we
want to follow, or a minimum length or maximum length.
If you've seen sign-up forms: let's say
you have provided a dummy email without an at
symbol or any of those things,
you'll be getting an error.
So that's how these are controlled over here.
So that's how it works.
So the type corresponds to the identifier, and the identifier
maps to the description.
So that is the way, for example.
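A sketch of how such constraints might be declared on a RAML type — the pattern and lengths are invented for this example:

```raml
types:
  TypeIdentifier:
    type: string
    pattern: ^D\d{2,4}$     # allows D23 or D2345; rejects ###, $$$, etc.
    minLength: 3
    maxLength: 6
```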
And we have product characteristics, like an object.
Here you can see that I've told them multi-line strings.
Basically, these are multi-line strings in the sense,
character arrays, basically.
And for properties, we do have code as a property.
We do have a name as a property and description as a property.
So for product characteristics, let's say
I'm just translating this API specification
into user-friendly scenarios to understand it.
So let's say each and every.
So this is type is specific to, and again,
this is business-specific implementation.
Right now, we have taken an example of a product catalog.
That might be anything specific to the project.
So the product characteristics comes
with different attributes, like the code attribute,
and then a name attribute, and then a description attribute.
And over.
So we can just provide all these details over here.
And I think that subsequent, these
are the same things like we'll be configuring each
and everything.
But one of the most important thing over here
is the syntaxes that we follow.
You have to understand how exactly the syntaxes are
working.
So let's say here, if you see, this product characteristic
value is a type of object.
What exactly is this object, basically?
If you see over here, this identifier
is a kind of type of string.
So you have only a single key and the corresponding value.
That's all.
But if you see over here, identifier map is an object.
It has two different keys and the corresponding values.
Like description is a key, and then properties is a key.
And then product characteristic is also kind of type of object.
It has description over here and properties over here.
And properties has inner characteristics as well,
like code.
And it is a type of string.
This type itself is like built-in thing
for RAML specification.
If you wanted to learn more about RAML
and what is the syntax of RAML, I
would suggest you go through this documentation
to have the detailed implementation aspect.
So this is how a RAML works, basically.
I'll be sharing these links later once you're done with it.
Yeah.
So that's how it works, basically.
And then, yeah.
So this is a specification how an item should be.
So let's say there are nested syntaxes or nested
specifications where a product depends upon
another attribute of the product itself,
like that we have defined, like identify that we have defined.
So here in that scenario, we'll be using dot syntax,
basically, common.resourceId.
So basically, we have already defined this resourceId
over here.
And we'll be utilizing this resourceId down there.
So for product variant, we'll be mapping
a one attribute which is already present to the other attribute.
So let's say, to help you understand more efficiently,
so let's say you have a scenario like this.
Like for food, you have a type identifier of chocolate.
And what should the name be?
In your organization, you have a specific rule
that each and every product name should be
appended with its unique identifier.
So you have to enforce that.
So let's say in those scenarios, let's say,
for this specific use case, what you'll do is just say,
Cadbury.
And then you'll be just appending type dot identifier.
So that's how you'll be using nested syntaxes
with the help of dot syntax.
So this dot syntax is used in scenarios like these,
where you want to reuse attributes that are already present.
The first and foremost advantage here
is that you maintain a single thing in a single place,
so the maintenance becomes very easy.
If you duplicate things, and later
you want to change the identifier, you have the trouble
of modifying it each and everywhere,
which is very painful.
This way, the definition is not duplicated
or scattered anywhere.
This is a best practice, you can say,
to follow while developing APIs.
So that is the first thing.
And if you see the question mark syntax over here,
this question mark basically tells us
that this is an optional parameter.
Let's say you don't want to populate all the attributes,
and you are comfortable enough
to skip a few of them.
In that scenario, you'll just mark those attributes
as optional; that's what the question mark syntax is for.
So that's how these syntaxes work here.
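For example, an optional property in a RAML type might look like this (property names are made up for illustration):

```raml
#%RAML 1.0 DataType
type: object
properties:
  name: string          # required: no question mark
  description?: string  # optional: the trailing ? means it may be omitted
```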
And again, when it comes to the actual root file,
you'll just be importing the API specification
file that we have discussed till now
into the root specification.
And then here, we'll be providing the responses,
basically how the response works.
We'll be describing a product response
as one of the response implementations,
and then you're just using this product response.
So we have already discussed this one:
the response characteristic.
The response works like this.
If you see over here, you are using a capital P
and fetching whatever is there inside this path;
if you look at this path,
the pattern of the response should be like this.
So we'll be defining each and every property on top
and just utilizing the already defined properties
here at the bottom.
For description, for example, we have validated
what a description should be:
what its pattern should be,
what its minimum length should be.
And then we just declare that a product
should have a description, that's all.
We do not describe over here what a description should look like.
So you got it, right?
Basically, we'll be building step by step.
First, we describe the description,
and then each and every component
that should be present inside a product.
Then we frame a product over here,
and then we use that product
inside the response itself, the product response.
So that's how an API specification should
be built in the RAML language.
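Putting the steps together, a root file along these lines would import the specification and use it in a response — file names and the type name are assumptions for the sketch:

```raml
#%RAML 1.0
title: Product Catalog API   # hypothetical root file
version: v1
uses:
  prod: datatypes/product.raml   # the specification file built step by step
/product:
  get:
    responses:
      200:
        body:
          application/json:
            type: prod.ProductResponse   # reuse the type defined in the library
```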
And similarly, we have product collection response
and product variant collection response.
You can just go through these here if you have some time,
but they are mostly the same implementations.
It's just another file serving another purpose,
that's all.
Yeah.
Just a moment.
So yeah, till now we've just gone through the specification
of how an API should be:
what exactly the keys, the values,
and the patterns should be.
But we haven't gone through any route
specifically till now.
But we already discussed that we're
going to follow a REST API implementation here.
For a REST API implementation,
routes are an important point.
So what exactly is a route?
A route is the path at which we expose an API.
So let's say you have a couple of APIs,
for example a product API, which provides
a list of all the products.
But again, one of the best practices to design an API
is that you should not use plurals over here;
you should only use singulars.
That is one of the best practices
to maintain uniformity across all the routes
that you want to design.
That is one thing.
And let's say this /product route gives you a list of products.
Now suppose you want a specific product;
how does that work?
This curly-brace syntax provides you
a variable here.
So this is not hard-coded to one product:
it might be /product/clothes or /product/chocolate
or /product/biscuits or anything
specific to that implementation.
And we can also have a delete as well.
Each and every route is associated
with an API request method, basically.
So for /product, it's a GET API,
and for getting each product, we have a GET API as well.
For creating a new product,
we have a POST API on the same /product endpoint.
You can also have a DELETE API
on /product/{productId}.
And if you want to edit a specific product, which
is already there inside the application,
in that scenario you have a PUT API.
These are the different APIs that you'll
be designing inside your project.
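Sketched in RAML, those routes and methods might look like this — the resource names follow the course's singular convention, but the layout is illustrative:

```raml
#%RAML 1.0
title: Product API (sketch)
/product:
  get:          # list all products
  post:         # create a new product (payload goes in the body)
  /{productId}: # curly braces: a variable path segment, not hard-coded
    get:        # fetch one product
    put:        # edit an existing product
    delete:     # remove a product
```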
So if you go here, we'll be designing a couple of APIs.
For the categories API, let
me first walk you through the product scenario;
it will be mostly the same for the others as well.
This GET is the API method, basically,
and the next one is the route.
An API method and a route should always
be coupled with each other:
a route without an API method is meaningless.
That's how it works in general scenarios, basically.
And let's say you want to get all products:
then you hit the GET API on /product.
To get a single product, we hit GET on /product/{productId}.
And for posting a product, we'll be passing a payload over here.
If you see, for the POST request
we don't have any slash after /product, and no payload in the path.
What I mean is that the payload
is something we'll be giving inside the body.
For this, you need to understand
a few prerequisites: what headers are, what a URI is,
and what the body is.
Basically, a URI is the location
where you are hitting an API.
Headers are things that you pass along to that URI
while you are making the web request;
typically, headers mostly consist
of authentication and authorization mechanisms.
And you'll be hitting this specific API
request with the help of an API client.
As an API client, mostly you will be using Postman.
Let's keep that under tooling,
but you have to understand Postman as well.
Postman is a tool, an API client, which basically
helps us hit an API while testing it.
I have created a hands-on implementation
for Postman as well,
which is tightly coupled with MuleSoft,
and we'll be passing headers with that.
And the body is the actual payload that we want to pass:
the details of the product, like the name of the product,
the description of the product, and the price of the product;
all those things we'll be providing
inside the body itself.
If you want to delete a specific record inside
the database, then we'll be hitting this DELETE API.
And if you see over here, PUT and POST
mostly seem to do the same thing.
So what exactly is the difference between PUT
and POST?
Basically, PUT doesn't
care about what is already there inside the system.
Whatever you provide to PUT as a payload, it just takes it
and puts it into the database.
If there are resources already
present inside our database, it just
overrides them without caring about them.
But POST is not like that:
POST provides an appending operation
rather than an overriding operation.
So one has to be more careful with PUT,
since it can overwrite existing data.
So that's how this API specification
works inside Anypoint.
Do you have any questions till now or we're good to go?
OK.
Cool.
So let's come to the RAML definitions
and how these things work with RAML, basically.
We want to create a MuleSoft application which
does all these things I just described,
with the help of RAML-based syntax.
That's how we'll be designing our APIs over here;
we call these routes.
So basically, inside RAML we have this notion of parent routes
and child routes:
the child routes are appended to the parent route.
So let's say we have categories:
we have a few categories here,
and a user wants to get the categories.
In that scenario, the user hits /categories,
and he doesn't care about which categories
are already present in the database and so on;
he just wants to get the categories.
So if you see over here, we have implemented
a GET-based description,
so it should already be understandable
that this uses a GET request.
And then we are providing the description over here
of what exactly this API route does:
basically, it helps us get the categories.
This display name is shown on our Anypoint
Platform, basically.
Once we deploy it, or once we run this using Anypoint Studio,
we'll get a much better understanding
of what this display name and these query parameters are.
let's say for this URL, if you can see,
let me take a description of one URL.
If you see over here, you see a question mark over here.
And then you'll be passing a search parameter set.
So the search key and then search value.
Even though YouTube works, the URL works,
let's say if you don't pass any of these parameters,
let's say we just did not pass any value over here.
So there is a fallback page for this to handle this request.
So these are called as query parameters inside API
languages, basically.
So these query parameters are like strings.
I mean, we'll be describing query parameters over here.
So let's say for this specific implementation,
we have this name as one of the query parameter.
And it is an optional thing.
Let's say, for example, YouTube, we have this query parameter
as chalk, I mean, the search query as query parameter.
If we want to replicate the same thing inside our specific
implementation, the name can be replaced by search query.
This is like read-only editor.
But yeah, name can be replaced by search query.
You got it right?
So how it works, basically.
You'll be designing these YouTube implementations
with the help of this new software queries languages,
actually.
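A small sketch of what that queryParameters section might look like in RAML — the parameter name is the illustrative one from the discussion:

```raml
/categories:
  get:
    displayName: Get categories
    description: Returns the list of categories.
    queryParameters:
      name?:                # optional search term, like ?name=chocolate
        type: string
        description: Filter categories by name.
```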
And then, what are the responses that you want to provide?
Let's say here I did not provide any response,
and it just falls back to a no-results-found page.
But let's say I have provided a right query:
if I go to the developer tools, you again
see the status, the 200 status.
How exactly is this implemented?
It is with the help of this one, the response status 200.
You might know about the 404 status,
page not found, from navigating the internet;
that is a globally known response code.
Apart from that, we have a lot of response codes;
a couple of the response code ranges are 2xx and 3xx.
These are a prerequisite.
OK, so the 2xx series corresponds to the success responses.
So basically, whenever you hit an API
and you got a response,
you should tell the person who calls this API
that everything is done and here is the response.
That is the specific response code
that we're going to be delivering.
Since this is a GET request,
for most GET requests, it will be the 200 response code.
For a POST request, we have the 201 response code;
201 stands for resource created, basically.
It is also one of the positive response codes:
all the response codes that are in the 2xx range
are positive response codes,
and each corresponds to one or a couple of operations.
If you want to read more about them, you can.
So these are HTTP status codes.
As I told you already, 201 is the resource-created one,
and 200 is the OK one.
OK in this sense doesn't carry much meaning on its own:
for 200 response codes, we should always
be looking at the body.
What exactly is written for us?
In our scenario, we have designed our application
to provide a response for a GET request,
in such a way that for this specific endpoint,
it is the categories response.
So we'll be going over here, hitting the API,
and we'll be getting the category response.
The category response would be something like this:
we'll be having an identifier,
which is the unique identifier,
then the name of this category,
and the description of this category.
So this is the response.
But if you can see over here, this response
is described in terms of YAML.
Typically, that is not the way:
each and every response will be in JSON format only,
at least as far as OpenAPI specifications are concerned.
Right now, for our MuleSoft implementations,
we are going forward with the YAML-based approach
for all the RAML implementations we have.
But if you see here, the body type is application/json.
As I've told you, all the responses
will be in JSON itself, not YAML.
But we'll be defining everything in YAML format,
so that we can define the responses
that will be served by the API request.
So the response will be in JSON format,
but the specification that we provide
to serve those responses will be in YAML format.
So that's the way it works.
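In other words, the RAML is YAML describing a JSON response, roughly like this — the type name and example values are assumed for illustration:

```raml
get:
  responses:
    200:
      body:
        application/json:          # the wire format of the response is JSON
          type: CategoryResponse   # but the definition itself is written in YAML
          example:
            identifier: "cat-001"
            name: "Beverages"
            description: "Drinks and related products"
```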
Then we'll be defining the response,
and we'll be providing the type of the response.
Basically, there is a hard rule
that this type should be an already defined type in this file.
Let's say we have the category collection response:
if you go here, we have defined how a category collection
response should be, with the help of a property
called categories itself.
And if you go here into the taxonomy, we
have this taxonomy already defined over here,
so we'll just be using that.
This is a layer-by-layer implementation:
first you define one layer
and push that into different files,
so that you can reuse it everywhere,
even inside another project.
And once you've defined it somewhere else,
you just import it into your current implementation
and use it.
How do you use it? You just bring it in
and use the dot syntax that I've already shown:
by using this dot syntax, you
refer to the already imported implementation.
Here you are just fetching the type called category.
But what exactly is this category?
You don't have any idea right now, right?
If you want to understand what exactly this category is,
you have to go to this specific implementation
inside the Exchange modules, go over it,
understand how this category is defined,
and what the rules for this category are.
I think it consists of something like
what we've already discussed.
Let me scroll.
Something like this.
So we have a product, and each and every product
has its own properties, like identifier, category,
brand, and so on.
And the category likewise
has something like an identifier, a category ID,
the year in which this category was introduced,
or the turnover of this category.
Those are the attributes that are
present for that specific implementation.
So right now, we'll just import that and use it,
so that we get the reusability.
That is one thing.
And that covers the response and how a single API endpoint
is defined.
This is one of the REST APIs that we have defined,
I mean discussed, for the RAML implementation.
Let me just take a break.
I'll just get some water.
We'll connect after 10 minutes maybe.
Yeah, sure.
You back?
Let's just start.
So that would be one of the GET requests
that we have designed till now.
Now let's say we have a few more requests that are tightly
coupled with this GET request.
For example, on YouTube, let's say
you want to go to subscriptions.
If you look inside the feed, you have a different endpoint,
the subscriptions endpoint.
If you just load the root feed,
it provides a list of recommendations for you.
But you don't want random recommendations
from random channels;
you want to get only recommendations
from your own subscriptions.
So you just hit this subscriptions endpoint.
I want you to understand how we
are passing different parameters, or providing
different route paths, under a root path.
That's how we'll be designing a new path inside this categories
endpoint.
So that's where we'll be giving a slash inside the categories
endpoint.
If you see over here, there is indentation
between the 61st line and the 44th line.
If you compare the nesting here,
the 61st line is a child of the 44th line;
it is one of the objects of the 44th line.
Inside YAML, we call these object specifications.
If you want to understand it in layman's language:
/categories provides us a list of categories,
but if you want to fetch a specific category,
then you'll be passing a category ID.
This is a random ID that we provide,
and it is captured by the variable called categoryId.
Since this is a variable, it will change every time;
it's not a static one.
That's the reason we have provided
static paths, or static routes, as normal text,
but variable ones should always
be declared as {categoryId}.
So if you see over here, it will have this categoryId
as a parameter.
What exactly are the parameters?
This uriParameters section is a built-in RAML mechanism:
it describes the parameters
that we will be passing.
Here, the parameter is categoryId.
This is much like the query string we saw earlier,
where you have the path, a question mark,
and then the actual value; here, the actual category ID
is part of the path itself.
That's how this URI works.
And here, we have provided the configuration
for what this specific endpoint should do:
it will just get the category
for the specific category ID, which is passed
as a URI parameter.
And we also have responses here.
Since this is a GET request, we'll
mostly design this specific GET request
to provide a 200 response, similar to before.
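The nesting being described might be sketched like this — the type name is assumed to be defined elsewhere in the spec:

```raml
/categories:
  get:
    description: Get all categories.
  /{categoryId}:                  # child route nested under the parent
    uriParameters:
      categoryId:
        type: string
        description: The category to fetch.
    get:
      description: Get one category by its ID.
      responses:
        200:
          body:
            application/json:
              type: Category      # assumed to be defined elsewhere
```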
And if you see over here, there is another route path.
This route path is basically the products path.
You have to understand it like a tree, basically.
We have categories as a parent path.
Inside this categories path, we have a lot of categories,
and for each and every category, we have lots of products:
for each category ID, you have different products.
So it's a kind of catalog, you can say.
This can be translated like this:
we have /categories,
and then you'll be passing a category ID,
and you want to fetch the products for this category ID.
Then you'll be hitting this API, something like this.
When a user hits this request from his API client,
whether that may be Postman, a cURL request, or a typical
wget request, that request
will be routed over here, to this products path.
Again, this is a GET API, so we'll be providing a display name,
the description of what this API does,
the query parameters,
and the response it provides.
So that is everything till now.
The categories API is
one parent, and you have another parent called
the products parent.
Basically, this products parent is again the same.
Till now we have seen only GET requests;
we haven't seen any POST request yet.
We'll be creating a POST request right now by ourselves,
so that we have an idea of what things we
need to take care of while creating a specific API
implementation by ourselves.
Let me just create a playground over here.
You can see the screen.
Basically, it's in Design Center right now.
Yeah.
So right now we'll be creating everything from scratch,
at least for the POST API, taking into consideration
whatever we discussed till now.
First, this comes under the API designing part of MuleSoft,
basically.
So let's say you want to create a POST request
to post a product to the catalog.
The main agenda for this API
is to post a product to the database.
First we have to understand what the route is.
Typically, the best convention is to follow the same routes
that are already there.
For us, the /product route is already there,
so we'll just use that itself.
But then, won't the user be confused
about which is the GET request and which is the POST request?
We have to handle that.
While hitting the API,
the user has control over which API method he's
posting or hitting towards.
We have a hands-on session with the Postman tool,
and with that we can see how the user knows
which request he's hitting.
For this, we'll be providing the description.
This description will just say something
like "post a new product to the catalog".
And what is the API method that we have discussed?
It is the POST method.
So whenever a user comes here and
wants to see what this API does in a very concise manner,
we'll be providing a display name.
This display name is an attribute,
which is a built-in attribute again in MuleSoft.
It gives us an option
to provide a concise name, so that the user can see what exactly
this specific API does.
And then we'll just provide the description of this request.
The first and foremost thing here is the YAML syntax:
the white space and indentation that we are using right now.
If you have any mistake over here,
then the API will not compile,
and it will be throwing a lot of errors;
then we get to troubleshoot what exactly
has gone wrong.
For beginners, the errors will mostly
be coming from the white space in this YAML,
until you get familiarized with the YAML-based syntax
and get used to it.
Even a single wrong indentation over here can be a mistake.
It's fine to have similar description text in both places.
But if you have these two descriptions,
what exactly is the difference between them?
This description is concentrated
on this root-level API,
but this description is concentrated
on the POST method.
Let's say for the /product API alone, you
have two methods, a GET method and a POST method.
Then you should have two descriptions:
for the GET request, there should be one description,
and for POST, there should be another description.
So this one is an HTTP-method-level description,
and this one is an API-level description.
But the best description for the resource level here
is "product management", to avoid confusion.
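Side by side, the two levels of description might look like this — the description text is the illustrative wording from the discussion:

```raml
/product:
  description: Product management.                   # API (resource) level
  get:
    description: List all products.                  # method-level
  post:
    description: Post a new product to the catalog.  # method-level
```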
And next, query parameters.
Based on your specific application,
your specific business requirements,
and the consumer applications' requirements,
we'll be defining our query parameters over here.
There can be a lot of query parameters,
and all of them are dynamic.
So MuleSoft is not going to complain
that you have provided this specific query parameter
or something like that.
But the only thing which must stay fixed
is this queryParameters definition over here.
This is a standalone, MuleSoft-defined keyword.
So this should not be changed, not even a capital P to a small p;
you can't modify that.
But after this queryParameters key, you
can write anything you wish.
Basically, all of these names are dynamic strings.
Let's say for this specific product design,
we want to maintain the price of the product.
And that should be a compulsory query parameter,
so we will not put any question mark symbol there,
and we'll just declare that this price
should be a floating-point number.
And again, yeah, just one thing
I forgot to type:
we have data types inside this MuleSoft RAML, basically,
and there are a lot of data types.
We have strings, which are alphabetical values,
basically.
We can call them sentences, or words
with white space between them.
And we have integers:
integers consist of only whole numbers, basically.
They can't contain decimal values.
For decimal values, or pointed numbers,
we'll be having float (RAML's number type) as the data type.
And we also have Boolean.
In a few scenarios,
there might be true-or-false values,
and in those scenarios, Boolean will be the better data type
that we're going to choose.
So mostly, these will be the scalar types, and we also
have this object data type.
For custom implementations, we
have this custom object data type.
This is not a primitive data type;
it is a more complex data type, in which
all the strings, integers, and floats
can be embedded, basically.
That's how it works.
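A sketch with one property per data type mentioned above — the property names are made up, and note that in RAML the decimal scalar is spelled number, with float available as a format:

```raml
#%RAML 1.0 DataType
type: object
properties:
  name: string          # words/sentences
  quantity: integer     # whole numbers only
  price:
    type: number        # RAML's decimal type
    format: float       # float/double are formats of number
  inStock: boolean      # true or false
  details:              # object: a complex type embedding the scalars
    type: object
    properties:
      brand: string
```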
Back to the implementation.
We have this price, and it corresponds to a float.
And let's just think of another one, maybe
the name of the product.
Typically, names will be sentences,
so we can use string.
And based on the business, we want to have the expiry date
of this product as well.
That is one of the hard rules of the implementation
that people want to give.
So: expiry date.
It depends upon us how we want
to design this specific API implementation,
and the expiry date would mostly be a string.
But we also have an enum data type,
similar to the one in Java.
And we have this palette-based methodology over here
inside MuleSoft.
We'll be discussing that with the help of Anypoint Studio.
When we get there, I'll be describing
how to integrate Java-based scripts or Python-based scripts
inside MuleSoft.
So these are the query parameters.
Now let's go to the response model.
For responses, we have to handle a couple of scenarios
over here.
Because this is a POST request,
this is much more complicated when
we compare it to a GET request.
GET requests are mostly 200 responses or 404 responses,
that's all.
But for a POST request, you have to handle
a couple of other scenarios as well.
First, we'll go with the positive scenario:
201.
For this specific response,
we have two things which are embedded.
One is the response status code, which is this one,
and the other is the response body.
So basically, the response body is in JSON format,
as I've already told you.
So we'll be providing application/json
over here, so that MuleSoft understands, as a hard rule,
what exactly the response format should be.
And then the type.
We'll just be using this product response over here
as an example, since this is the product implementation.
But this should be coming from an already
developed module.
Basically, as I've told you, we'll be breaking things up into modules
and just importing the modules again and again,
so that we can achieve modularity
inside the MuleSoft implementations.
That's why we're just referencing it over here.
Otherwise, we'd have to start defining the type from here,
which would mean multiple nestings,
and the project is going to be messy
if we don't reuse things across different implementations.
So that's how it works, basically.
You've got my point.
So this is the positive scenario.
Now let's say the user is not at all authorized
to do this operation, like creating a product.
In that scenario, we have to provide an error response:
401 gives us "unauthorized".
So how are we going to handle this 401 response?
Basically, we'll first provide a description for it.
Let me just type the description.
And then we don't pass any body for the 401,
because the situation itself is suspicious:
the user did not pass any valid headers
to this specific implementation,
so we don't want to reveal any details.
That's why it's a best practice
not to provide any body; no one stops you
from providing a body,
but we should adhere to the best practices.
We should only say that the user is not authorized
to create the product.
And we don't even need to pass the username,
because that would deviate from the best practices.
Now let's say the user is authenticated to the application,
but he doesn't have enough privileges to do this.
In that scenario, we'll be passing a 403 over here.
So that's how this will work,
and mostly this will be the implementation scenario.
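Pulling the POST scenario together, a sketch might look like this — the library alias and type name are assumptions, and note the deliberately empty 401 body per the best practice just discussed:

```raml
/product:
  post:
    displayName: Create product
    description: Post a new product to the catalog.
    queryParameters:
      price:              # compulsory: no question mark
        type: number
        format: float
    responses:
      201:
        description: Product created.
        body:
          application/json:
            type: lib.ProductResponse   # assumed imported module type
      401:
        description: User is not authorized to create the product.
        # deliberately no body: reveal nothing to unauthenticated callers
      403:
        description: User is authenticated but lacks privileges.
```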
Maybe let's go for one of the exercises right now.
Feel comfortable?
Should we do this?
Sure.
Let me just paste this in the chat box right now,
and you can share your screen and start creating
one of the implementations.
I'll just paste this in the chat box
so that you have a reference.
Do you have a notepad with you?
Notepad?
Let me just paste one.
So I'll give you the scenario right now:
we want to create an API for one of the applications,
which provides a list of the services it
is going to offer.
The endpoint will be /services,
and we want to handle posting a new service
and deleting an existing service.
Thank you.
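For reference, one possible skeleton for that exercise could be along these lines — the resource and parameter names are my assumption, and it's worth attempting it yourself before looking:

```raml
#%RAML 1.0
title: Services API (exercise sketch)
/services:
  get:
    description: List all services the application provides.
  post:
    description: Add a new service.
    responses:
      201:
  /{serviceId}:
    delete:
      description: Delete an existing service.
      responses:
        204:
```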

on 2024-03-29
