Istio has become a very popular service mesh for Kubernetes, and one of its most promising features is its set of telemetry add-ons. Through these add-ons we can easily visualise how traffic flows through our Kubernetes cluster, and gather other information about our application from the metrics collected by the sidecar proxies running alongside our application pods. Kiali is one of these telemetry add-ons, and it is the one we will explore in this blog.
What is Istio?
Istio is an open source service mesh platform that provides a way to control how microservices share data with one another. It includes APIs that let Istio integrate into any logging platform, telemetry, or policy system. Istio is designed to run in a variety of environments: on-premise, cloud-hosted, in Kubernetes containers, in services running on virtual machines, and more.
Istio’s architecture is divided into the data plane and the control plane. In the data plane, Istio support is added to a service by deploying a sidecar proxy to the existing application container in a pod. This sidecar proxy sits alongside a microservice and routes requests to and from other proxies. Together, these proxies form a mesh network that intercepts network communication between microservices. The control plane manages and configures proxies to route traffic. The control plane also configures components to enforce policies and collect telemetry.
What is Kiali?
Kiali is a telemetry observation console for Istio service mesh. It provides observability for our running mesh, letting us quickly identify issues and then troubleshoot those issues. Kiali offers in-depth traffic topology, health grades, powerful dashboards, and lets us drill into component detail. Kiali offers correlated views of metrics, logs and tracing, as well as strong validations to pinpoint configuration issues. Kiali provides several wizards to help us add services to the mesh, define traffic routing, gateways, policies and more.
Now that we know what Istio is and how Kiali helps in visualising traffic flow in a Kubernetes cluster, we can go ahead and start the hands-on part.
Irrespective of where the cluster is running, i.e. GKE, EKS or Minikube, the steps for installing Istio are the same. To install Istio, we will first download the istioctl binary and then use it to install the demo profile of Istio:
curl -L https://istio.io/downloadIstio | sh -
The installation directory contains:
- Sample applications in the samples/ directory
- The istioctl client binary in the bin/ directory

Add the istioctl client to your path (Linux or macOS) and then install Istio:
istioctl install --set profile=demo -y
Next, label the default namespace so that Istio automatically injects a sidecar proxy into any pod deployed there:
kubectl label namespace default istio-injection=enabled
The steps above install just Istio. To add the telemetry add-ons, we have to apply some separate manifests present in the samples/addons directory of the installation:
kubectl apply -f samples/addons
This will install Kiali, Jaeger, Prometheus and Grafana, all of which provide telemetry features for monitoring and visualizing your Kubernetes cluster.
You can access the Kiali dashboard by running the following command:
istioctl dashboard kiali
Since there are no apps deployed in our cluster at present, there will not be anything significant on the Kiali dashboard.
Deploy the Fleet Management Application
In this demo, I will be deploying a Fleet Management microservices application in our k8s cluster. This application was developed by Richard Chesterwood, and you can find the source code in his GitHub repo. The Kubernetes manifests which I will be using for deploying the application can be found in my GitHub repo.
To deploy the application, run the following commands:
git clone https://github.com/juzer-patan/istio.git
kubectl apply -f kiali/application.yaml
The output of kubectl get pods will look something like this:
As you can see, this application is composed of 6 microservices. If you run kubectl get svc, you will see there is a fleetman-webapp service of type NodePort mapped to port 30080, which is the frontend for this application. Since I have used Minikube for this demo, I can access this service at minikube_ip:30080. If you are using GKE, EKS or any other cloud-hosted Kubernetes service, you can either change the fleetman-webapp service type to LoadBalancer or access it using the kubectl port-forward command.
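Switching the service type could look like the following sketch. The selector and port values here are assumptions and should be copied from the original fleetman-webapp manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp
spec:
  type: LoadBalancer      # was NodePort with nodePort 30080
  selector:
    app: webapp           # assumed selector; copy it from the original manifest
  ports:
    - port: 80            # assumed service port for the frontend
      targetPort: 80
```

With type LoadBalancer, the cloud provider provisions an external IP, so you no longer need the node IP and NodePort.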
Visualize the application using Kiali
Now if we go to the Kiali dashboard → Graph and select default from the Namespace dropdown, we can see the entire graph of how traffic flows between the various microservices of our application. There are several graph types to choose from: App, Versioned App, Workload and Service.
- The App graph type aggregates all versions of an app into a single graph node. Thus, if an application “demo” has 3 versions v1, v2 and v3 running in 3 different pods, this graph will aggregate all 3 versions into a single node called “demo”.
- The Versioned App graph type shows a node for each version of an app, but all versions of a particular app are grouped together.
- The Workload graph type shows a node for each workload in your service mesh. For now, you can consider a workload equivalent to a deployment. Thus, this graph shows the traffic flowing between the individual workloads or deployments.
- The Service graph type shows a high-level aggregation of service traffic in your mesh. Thus, this graph shows the traffic flowing between the different services in our service mesh.
You can see below the Service Graph and Versioned App Graph for our fleet management application
Another feature of these graphs is that if traffic has not flowed between two workloads or services for a considerable time, the edge connecting those nodes is grayed out at first and then removed after some time.
Suspend traffic to a service using Kiali
Using Kiali, we can very easily cause an HTTP error to be returned whenever a request is made to a particular microservice, and thus suspend traffic to that service. In our application, there is a fleetman-vehicle-telemetry microservice which displays the speeds of the different vehicles on the frontend web application. We will suspend traffic to fleetman-vehicle-telemetry using Kiali with the following steps:
- Right click on the fleetman-vehicle-telemetry service → Show Details. This takes us to the fleetman-vehicle-telemetry service page
- Click on Actions → Suspend Traffic and then click Create. That’s it, this will suspend traffic to the fleetman-vehicle-telemetry service
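Under the hood, suspending traffic corresponds to an Istio fault-injection rule. The VirtualService below is an illustrative sketch of such a rule, not Kiali’s exact generated output:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fleetman-vehicle-telemetry
  namespace: default
spec:
  hosts:
    - fleetman-vehicle-telemetry
  http:
    - fault:
        abort:
          httpStatus: 503    # abort every request with HTTP 503
          percentage:
            value: 100
      route:
        - destination:
            host: fleetman-vehicle-telemetry
```

The sidecar proxies short-circuit matching requests with a 503, so the telemetry pods never receive them.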
Now if we go to the Graphs tab → Service Graph, we can see that all the requests going to the fleetman-vehicle-telemetry service have resulted in HTTP errors.
Also, if we go to the frontend of the web application, we can see that the Speed column, whose data comes from the fleetman-vehicle-telemetry service, is now all blanks.
Weighted Routing between versions using Kiali
Imagine that for one of our microservices, fleetman-staff-service, a new version has been developed recently and we want to test it using a canary deployment strategy, in which 90% of the requests are routed to the pod running the older version of staff-service and 10% of the requests are routed to the pod running the new, untested version of staff-service.
Note: for the older version of staff-service I have used the image with tag 6-placeholder in the deployment staff-service-old, and for the new version I have used the image with tag 6 in the deployment staff-service-new. These Docker images, along with all the other images used for this demo, have been uploaded by Richard Chesterwood on Docker Hub.
First, let’s delete the old manifests and apply the new ones:
kubectl delete -f kiali/application.yaml
kubectl apply -f kiali/application-new.yaml
Now if we go to the frontend of our application, half of the requests will be routed to the older staff-service pod (which shows no image of the driver) and half to the new staff-service pod (which shows an image of the driver). This is the default Kubernetes behaviour.
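The even split happens because the pod templates of both Deployments carry the same app label, so the staff-service Service selects pods from both, and Kubernetes load-balances across all matching endpoints. Each Deployment also adds its own version label, which is what Istio can later route on. The label keys and values below are assumptions for illustration; check the actual manifests:

```yaml
# staff-service-old pod template labels (assumed)
metadata:
  labels:
    app: staff-service
    version: old
---
# staff-service-new pod template labels (assumed)
metadata:
  labels:
    app: staff-service
    version: new
```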
Now, to split the traffic the way we want, we need to do the following steps:
- Right click on the fleetman-staff-service service → Show Details. This takes us to the fleetman-staff-service service page
- Click on Actions → Create Weighted Routing and use the slider to adjust the traffic however you want
- Click Create
That’s it. Now 90% of the requests will be routed to the older staff-service-old pod (which shows no image of the driver) and 10% to the new staff-service-new pod (which shows an image of the driver).
We can validate this from the Versioned App Graph as well
Hence,we have achieved a weighted routing type of configuration for our application using Kiali.
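The wizard’s configuration corresponds to an Istio DestinationRule plus VirtualService roughly like the following. This is an illustrative sketch, not Kiali’s exact generated output; the subset names and version labels are assumptions:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: fleetman-staff-service
spec:
  host: fleetman-staff-service
  subsets:
    - name: old            # matches the pods of staff-service-old
      labels:
        version: old       # assumed version label on the pod template
    - name: new            # matches the pods of staff-service-new
      labels:
        version: new
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: fleetman-staff-service
spec:
  hosts:
    - fleetman-staff-service
  http:
    - route:
        - destination:
            host: fleetman-staff-service
            subset: old
          weight: 90       # 90% of requests to the older version
        - destination:
            host: fleetman-staff-service
            subset: new
          weight: 10       # 10% to the new, untested version
```

The DestinationRule defines the version subsets, and the VirtualService splits traffic between them by weight.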
Now, coming to the most important part: we did all these configurations in our application without writing a single line of YAML. How is this possible?
This is possible because, in the background, Kiali writes the YAML manifests for us. So when we suspended traffic or created a weighted routing through the Kiali UI, Kiali created the Istio configuration objects, such as VirtualServices and DestinationRules, on our behalf.
Hence, in this blog we have seen how to use Kiali alongside Istio for visualizing traffic flow between the different microservices of an application, suspending traffic to a particular microservice, and creating a weighted routing configuration for a versioned microservice.
We did all this without writing a single line of YAML, and that is the power of Kiali. Kiali can be used for multiple other purposes which cannot all be covered in a single blog.
Connect with me
LinkedIn : https://www.linkedin.com/in/juzerpatanwala