Opening Keynote: Enter The Mesh


At 9:00 a.m. on 10/22/2018, I attended the Opening Keynote session, Enter The Mesh, presented by Burr Sutter of Red Hat, at the All Things Open Conference at the Raleigh Convention Center in Raleigh, NC.


As before, Burr remains one of the fastest-talking Hawaiian-Alabamians I’ve seen. While I’m not terribly familiar with this software, I deeply appreciated his approach. Drawing on his background, Burr framed his journey through open source like a martial arts movie, moving us through a series of skill challenges, areas of growth and training, and the mastery techniques required to excel. Through that arc of self-discovery, challenges followed by growth and training, he slowly built from an allegory of a talented and motivated individual maturing into a seasoned professional, into a short training segue. From the segue, he rolled into specifics with Qpid Dispatch, Amazon Web Services, and github.com.

The term service mesh is sometimes translated as “service grid”: an infrastructure layer for inter-service communication. Buoyant CEO William Morgan explains what a service mesh is, and why cloud-native applications need one, in his article “What’s a Service Mesh? And Why Do I Need One?”

Below is William Morgan’s explanation of the service mesh:

It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application. In practice, the service mesh is typically implemented as an array of lightweight network proxies that are deployed alongside application code, without the application needing to be aware.

Service Mesh Features

A service mesh has the following characteristics:

  • An intermediate layer for inter-application communication
  • Lightweight network proxies
  • Transparent to the application (the application is unaware of it)
  • Decouples retry/timeout, monitoring, tracing, and service discovery from the application
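The last characteristic is the crux: policies like retries and timeouts move out of application code and into the proxy layer. A minimal, purely illustrative sketch (all names are hypothetical; this is not how any real mesh proxy is implemented) of that decoupling:

```python
# Illustrative sketch: a "mesh" layer wraps a service call with retry and
# timeout policy, so the application code (the handler) stays unaware of
# these concerns. All names here are hypothetical.

import time

class MeshProxy:
    def __init__(self, handler, retries=3, timeout=1.0):
        self.handler = handler      # the application's own logic, unchanged
        self.retries = retries      # retry policy lives in the proxy
        self.timeout = timeout      # per-attempt budget, in seconds

    def call(self, request):
        last_error = None
        for _attempt in range(self.retries):
            start = time.monotonic()
            try:
                response = self.handler(request)
                # Checked after the call for simplicity; a real proxy
                # would enforce the deadline while the call is in flight.
                if time.monotonic() - start > self.timeout:
                    raise TimeoutError("attempt exceeded timeout")
                return response
            except Exception as err:
                last_error = err    # record the failure and retry
        raise RuntimeError("all retries failed") from last_error

# The application itself knows nothing about retries or timeouts:
def flaky_handler(request, _state={"calls": 0}):
    _state["calls"] += 1
    if _state["calls"] < 3:
        raise ConnectionError("transient failure")
    return f"ok: {request}"

proxy = MeshProxy(flaky_handler)
print(proxy.call("GET /users"))   # succeeds on the third attempt
```

The point is simply that `flaky_handler` contains no retry logic at all, yet the call succeeds; in a real mesh the wrapper is a separate network proxy process rather than an in-process class.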

Currently, two popular open-source service mesh projects, Istio and Linkerd, can be integrated directly into Kubernetes; of the two, Linkerd has become a CNCF member project.

Understanding Service Mesh

To explain the service mesh in one sentence: it is the TCP/IP of inter-service communication, responsible for network calls, rate limiting, circuit breaking, and monitoring between services. Just as applications usually do not need to care about the TCP/IP layer (for example, RESTful applications built on HTTP), services using a service mesh no longer need to implement those cross-cutting concerns themselves through application code or frameworks such as Spring Cloud or Netflix OSS; they can simply hand them to the mesh.

Phil Calçado explains the ins and outs of the service mesh in detail in his blog post Pattern: Service Mesh:

  1. Hosts connected directly to one another with network cables
  2. The emergence of the network layer
  3. Flow control integrated into the application
  4. Flow control decomposed out of the application
  5. Service discovery and circuit breakers integrated in the application
  6. Dedicated packages/libraries for service discovery and circuit breaking, such as Twitter’s Finagle and Facebook’s Proxygen, still integrated inside the application
  7. Open-source software for service discovery and circuit breaking, such as Netflix OSS and Airbnb’s Synapse and Nerve
  8. Finally, the service mesh appears as the middle layer of the microservice architecture

The architecture of Service Mesh is shown below:

A service mesh runs as a sidecar. It is transparent to the application, and all traffic between applications passes through it, so control of application traffic can be implemented in the mesh.
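The sidecar idea can be sketched in a few lines. This is a toy in-process model, not a real deployment (in practice each sidecar is a separate proxy container in the pod), and every name below is hypothetical:

```python
# Toy model of the sidecar pattern: calls flow app -> local sidecar ->
# remote sidecar -> app, and the sidecars observe traffic transparently.
# All names are hypothetical; a real sidecar is a separate proxy process.

class Sidecar:
    def __init__(self, name, app):
        self.name = name
        self.app = app            # the wrapped application, unchanged
        self.observed = []        # telemetry the mesh collects in passing

    def receive(self, request):
        self.observed.append(("in", request))
        return self.app(request)  # hand the request to the local app

    def send(self, peer, request):
        self.observed.append(("out", request))
        return peer.receive(request)  # traffic flows sidecar-to-sidecar

# Two "applications" that know nothing about the mesh:
orders_sidecar = Sidecar("orders", lambda req: f"order created for {req}")
users_sidecar = Sidecar("users", lambda req: f"user {req}")

reply = users_sidecar.send(orders_sidecar, "alice")
print(reply)                    # order created for alice
print(users_sidecar.observed)   # [('out', 'alice')]
print(orders_sidecar.observed)  # [('in', 'alice')]
```

Because every hop crosses a sidecar, the mesh sees both sides of each request without either application changing a line of code, which is exactly what makes mesh-level traffic control possible.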

How does Service Mesh work?

Let’s take Linkerd as an example to explain how a service mesh works; Istio, the other service mesh implementation, follows basically the same principles. Subsequent articles will explain how Istio and Linkerd work in Kubernetes.

  1. Linkerd routes the service request to the destination address, determining from its parameters whether it is destined for the production, test, or staging environment (a service may be deployed in all three at once), and whether it should be routed to the local environment or to a public cloud. All of this routing information can be configured dynamically, either globally or individually.
  2. Once Linkerd has confirmed the destination address, it sends the traffic to the corresponding service discovery endpoint (in Kubernetes, a Service), which then forwards the request to a backend instance.
  3. Linkerd selects the fastest-responding of the application’s instances, based on the latency it has observed for recent requests.
  4. Linkerd sends the request to that instance, recording both the response type and the latency.
  5. If the instance is down, unresponsive, or its process is not working, Linkerd sends the request to another instance and retries.
  6. If an instance keeps returning errors, Linkerd removes it from the load-balancing pool and periodically retries it later.
  7. If a request’s deadline has passed, Linkerd proactively fails the request rather than retrying and adding more load.
  8. Linkerd captures all aspects of the above behavior as metrics and distributed traces, which are sent to a centralized metrics system.
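Steps 3 through 6 above can be sketched as a toy balancer. This is an illustration under stated assumptions, not Linkerd’s actual implementation (Linkerd uses smarter statistics, such as an exponentially weighted moving average of latencies), and the instance names are made up:

```python
# Toy sketch of latency-aware balancing with retry and error ejection
# (steps 3-6 above). Hypothetical names; not Linkerd's real algorithm.

import time

class Balancer:
    def __init__(self, instances, max_errors=2):
        self.instances = dict(instances)              # name -> backend callable
        self.latency = {n: 0.0 for n in instances}    # last observed latency
        self.errors = {n: 0 for n in instances}       # consecutive failures
        self.max_errors = max_errors

    def call(self, request):
        tried = set()
        while True:
            # Step 6: instances with too many errors leave the pool.
            pool = [n for n in self.instances
                    if self.errors[n] < self.max_errors and n not in tried]
            if not pool:
                raise RuntimeError("no healthy instance answered")
            # Step 3: prefer the instance with the lowest observed latency.
            name = min(pool, key=lambda n: self.latency[n])
            tried.add(name)
            start = time.monotonic()
            try:
                result = self.instances[name](request)
            except Exception:
                self.errors[name] += 1    # step 5: fail over to another
                continue
            # Step 4: record the latency to inform future picks.
            self.latency[name] = time.monotonic() - start
            self.errors[name] = 0
            return result

# Usage: one backend always fails, so traffic shifts to the healthy one.
def failing(req):
    raise ConnectionError("pod-a is down")

backends = {"pod-a": failing, "pod-b": lambda req: f"handled {req}"}
lb = Balancer(backends)
print(lb.call("GET /items"))   # handled GET /items
```

After `max_errors` consecutive failures, `pod-a` is skipped entirely until its counter would be reset; a real mesh would also re-admit ejected instances on a timer, per step 6.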

Why use Service Mesh?

The service mesh doesn’t bring us new features. It solves problems that other tools have already solved, but this time implemented for the cloud-native Kubernetes environment.

In the traditional MVC three-tier web application architecture, communication between services was not complicated and could be managed within the application itself. In today’s complex large websites, however, the monolithic application is decomposed into many microservices, and service dependencies and communication are complex. “Fat client” libraries such as Twitter’s Finagle, Netflix’s Hystrix, and Google’s Stubby arose to cope with this. These were the early service meshes, but each is tied to a specific environment and a specific development language, and cannot serve as a platform-level service mesh.

Under a cloud-native architecture, containers open up more possibilities for heterogeneous applications, and Kubernetes gives applications the ability to scale horizontally, letting users quickly assemble applications with complex environments and complex dependencies. Developers can focus on writing their programs without paying undue attention to the cumbersome details of application monitoring, scalability, service discovery, and distributed tracing, leaving them more room for creativity.

 

  

Additional Backup Slides To Keep Us Motivated

    

Deck available at bit.ly/ato2018