Why Are Micro-Services Important Today?

Why do we talk about micro-services today more than ever before? How are the transformation of infrastructure and the ‘… as a service’ delivery model shaping software architecture?


Software Architecture
Philippe Kruchten, Grady Booch, Kurt Bittner, and Rich Reitman derived and refined the definition of architecture based on the work of Mary Shaw and David Garlan (Shaw and Garlan 1996). Their definition is: “Software architecture encompasses the set of significant decisions about the organization of a software system including the selection of the structural elements and their interfaces by which the system is composed; behavior as specified in collaboration among those elements; composition of these structural and behavioral elements into larger subsystems; and an architectural style that guides this organization. Software architecture also involves functionality, usability, resilience, performance, reuse, comprehensibility, economic and technology constraints, tradeoffs and aesthetic concerns.”

In Patterns of Enterprise Application Architecture, Martin Fowler outlines some common recurring themes while explaining architecture. He identifies these themes as: “The highest-level breakdown of a system into its parts; the decisions that are hard to change; there are multiple architectures in a system; what is architecturally significant can change over a system's lifetime; and, in the end, architecture boils down to whatever the important stuff is.”

“Loosely coupled, highly cohesive”: this is not new; it has been the fundamental approach from the beginning of software engineering. For example, the 1979 book “Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design” states the Fundamental Theorem of Software Engineering, which basically says that we can win if we divide any task into independent subtasks.
The book emphasizes the following:

  • The cost of developing most systems is largely the cost of debugging them
  • The cost of debugging is essentially equivalent to the cost of errors committed by the programmer/analyst
  • The number of errors committed during the design, coding, and debugging of a system rises non-linearly as the complexity (which may be thought of as roughly equal to the size) of the system increases
  • Complexity can be decreased (and, thus, errors and the cost of developing the system) by breaking the problem into smaller and smaller pieces, so long as these pieces are relatively independent of each other
  • Eventually, the process of breaking pieces of the system into smaller pieces will create more complexity than it eliminates, because of inter-module dependencies, but this point does not occur as quickly as most designers would like to believe.

Separation of Concerns
In computer science, separation of concerns (SoC) is a design principle for separating a computer program into distinct sections, such that each section addresses a separate concern (Wikipedia). A concern is a set of information that affects the code of a computer program. A concern can be as general as the details of the hardware the code is being optimized for, or as specific as the name of a class to instantiate.

A program that embodies SoC well is called a modular program. Modularity, and hence separation of concerns, is achieved by encapsulating information inside a section of code that has a well-defined interface. Encapsulation is a means of information hiding. Layered designs in information systems are another embodiment of separation of concerns (e.g., presentation layer, business logic layer, data access layer, persistence layer).
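
As a rough illustration of modularity and layering (the classes and names below are hypothetical, not taken from any particular system), here is a small Python sketch in which the business-logic layer depends only on the interface of the data-access layer, so either side can change internally without affecting the other:

```python
# Illustrative sketch of separation of concerns via layers.

class OrderRepository:
    """Data-access layer: hides how orders are stored (database, file, API, ...)."""

    def __init__(self):
        self._orders = {}          # the storage detail is hidden behind the interface

    def save(self, order_id, order):
        self._orders[order_id] = order

    def find(self, order_id):
        return self._orders.get(order_id)


class OrderService:
    """Business-logic layer: talks to storage only through OrderRepository's interface."""

    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, items):
        order = {"items": items, "status": "placed"}
        self._repository.save(order_id, order)
        return order


if __name__ == "__main__":
    service = OrderService(OrderRepository())
    print(service.place_order("A-100", ["keyboard", "mouse"]))
```

Swapping the in-memory dictionary for a real database would touch only OrderRepository; OrderService, and anything above it, would not need to change.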

The value of separation of concerns is simplifying development and maintenance of computer programs. When concerns are well-separated, individual sections can be reused, as well as developed and updated independently. Of special value is the ability to later improve or modify one section of code without having to know the details of other sections, and without having to make corresponding changes to those sections. (Wikipedia)

These have been the principles behind all the major application architecture trends: N-tier architecture, Service-Oriented Architecture, and, today, “Micro-Services”.

Evolution
Software architecture has evolved over time. In the mainframe era, all code ran centrally on one big machine. In the PC era, identical copies of the application code were distributed to and ran on each individual computer. Then PC networking and the Internet connected the PCs, splitting the code between hosted servers and client PCs: the client/server architectures. Advances in networking brought the rise of data centers, allowing enterprises to rent infrastructure as needed. Software vendors invented the short-lived Application Service Provider business model, but it was Software as a Service, with its multi-tenant architecture, that enabled software vendors to quickly re-architect their applications and offer more value to their customers.

Why Are Micro-Services Important Today?

A new kind of software architecture is emerging along with new infrastructure capabilities. Cheaper and faster internet, falling hardware costs, virtualization, and software-defined networking have enabled the IaaS model, with virtually unlimited and disposable computing power. Applications now have to be re-architected to benefit from this IaaS model and consume resources on demand, automatically. Micro-services architecture is one approach that fits this well; it can be seen as SOA within an application.

Along with IaaS, PaaS offerings reduce the overhead on software producers of owning and maintaining the dependent software that is available as a service on the platform.

With a micro-services architecture, organizations can optimize beyond the operational cost of the application’s resource consumption. Treating the application as a set of smaller, independent modules changes how an organization should structure its development teams, application life cycle, deployment processes, and support and maintenance. It also brings a new challenge: new resources are introduced at runtime to accommodate higher demand; for example, creating a new VM, deploying code, applying environment configuration such as network and security settings, and adding the instance to a load balancer. This forces a high degree of automation, from resource creation to application deployment and environment configuration.
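
As a hedged sketch of what that automation might look like (every function here is a placeholder, not a real cloud provider’s API; a real system would use a cloud SDK or an infrastructure-as-code/configuration tool), each step of a scale-out event is driven by code rather than done by hand:

```python
# Hypothetical scale-out automation: provision, deploy, configure, register.

def provision_vm(image):
    # Placeholder: would call the IaaS provider's API and return the new VM's id.
    print(f"provisioning VM from image {image}")
    return "vm-123"

def deploy_module(vm_id, module):
    # Placeholder: would push the application module (artifact or container) to the VM.
    print(f"deploying {module} to {vm_id}")

def apply_environment_config(vm_id):
    # Placeholder: would apply network/security rules and runtime configuration.
    print(f"configuring network and security for {vm_id}")

def register_with_load_balancer(vm_id):
    # Placeholder: would add the instance to the load balancer's pool.
    print(f"adding {vm_id} to the load balancer")

def scale_out(module, image):
    """One fully automated scale-out event, end to end."""
    vm_id = provision_vm(image)
    deploy_module(vm_id, module)
    apply_environment_config(vm_id)
    register_with_load_balancer(vm_id)
    return vm_id

if __name__ == "__main__":
    scale_out(module="page-processing", image="base-linux-image")
```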

For example, consider a company that does medical transcription and downloads documents to be transcribed every day. The number of documents, their size (number of pages), and their content (lines per page) all vary, so the processing time for each document varies as well; how long a transcription takes also depends on the employee’s experience. Manually, only one document can be processed at a time, so the company needs more employees to transcribe more documents in parallel. The company has an SLA of 12 to 48 hours, depending on the number and size of the documents. In such a scenario the company cannot add staff on demand: training new people takes time, and it is not cost-effective if the volume drops erratically.

Let’s say the transcription is automated using a software process; with new algorithms it can produce up to 95% accuracy. Now each document can be processed in parallel, and the number of documents that can be processed depends on the computing power of the server.
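
As a minimal sketch of that monolithic setup (transcribe_document below is a stand-in for the real transcription algorithm), documents are processed in parallel, but the parallelism is capped by the CPUs of the single server the software runs on:

```python
# Parallel processing on one server: throughput is bound by this machine's cores.

from concurrent.futures import ProcessPoolExecutor
import os

def transcribe_document(path):
    # Placeholder for the actual transcription algorithm.
    return f"transcript of {path}"

def process_batch(document_paths):
    # Parallelism is limited to the CPU count of this single server.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(transcribe_document, document_paths))

if __name__ == "__main__":
    print(process_batch([f"doc_{i}.pdf" for i in range(4)]))
```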

In the pre-cloud era, with a monolithic software architecture, the servers had to be sized for the maximum number of documents the company might receive, even though that maximum capacity is needed for only about two weeks a year.

Though this software could be migrated to the cloud as-is, there might be a need for some load balancing depending on the volume of documents: adding more servers as the volume goes up and removing them as the volume goes down. Remember, in this deployment architecture each server has the complete software installed and configured to download, process, and upload the output. So this certainly enables some elasticity, perhaps with minimal code change, as a first step toward migrating to the cloud.

If we refactor the code and break this application down into high-level activities, we get the following (a minimal sketch of one such activity follows the list):

  • FTP polling: To check for new documents and download
  • Document Verification: To verify the document validity for corruption
  • Processing:
    • Page breakdown: break each document into pages
    • Page Processing: transcribe each page
    • Output Assembling: assemble the output of each page with right sequence
  • Output upload: upload the output to FTP
  • Notify
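
To make the decomposition concrete, here is an illustrative sketch of one activity, Page Processing, running as its own worker: it consumes page jobs from an inbound queue and publishes results for Output Assembling to pick up. The in-process queues below are stand-ins for a real message broker (for example RabbitMQ, SQS, or Kafka):

```python
# One activity as an independent, queue-driven worker (broker simulated in-process).

import queue
import threading

pages_to_process = queue.Queue()   # fed by the "Page breakdown" activity
processed_pages = queue.Queue()    # consumed by the "Output Assembling" activity

def transcribe_page(page_text):
    # Placeholder for the actual page transcription logic.
    return page_text.upper()

def page_processing_worker():
    """Runs independently of the other activities; scale by adding more workers."""
    while True:
        doc_id, page_number, page_text = pages_to_process.get()
        processed_pages.put((doc_id, page_number, transcribe_page(page_text)))
        pages_to_process.task_done()

if __name__ == "__main__":
    pages_to_process.put(("doc-1", 1, "first page text"))
    threading.Thread(target=page_processing_worker, daemon=True).start()
    pages_to_process.join()
    print(processed_pages.get())
```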

Now each of these activities can be a micro-service that functions independently to do the task it is designed for. To process a document, each activity may be designed to work asynchronously, so the activities do not wait for each other to complete; one activity may take more time than another. Each activity can then be configured to scale up or down independently, depending on the volume in its pipeline, and each will have different compute and memory requirements and can be provisioned accordingly. Each activity runs on its own server.
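
As a hedged sketch of that independent scaling (the backlog targets and numbers below are illustrative assumptions, not measurements from any real system), each activity can be reconciled against its own queue depth, so a CPU-heavy activity like page processing scales out long before the lighter ones do:

```python
# Per-activity scaling decision based on each activity's own backlog.

ACTIVITY_TARGETS = {
    # activity name: maximum backlog each running instance should carry (illustrative)
    "ftp-polling": 50,
    "document-verification": 100,
    "page-processing": 20,      # CPU-heavy, so keep the per-instance backlog small
    "output-assembling": 100,
    "upload": 200,
    "notify": 500,
}

def desired_instances(backlog, backlog_per_instance):
    # At least one instance, plus enough to keep the per-instance backlog under target.
    return max(1, -(-backlog // backlog_per_instance))   # ceiling division

def reconcile(activity, backlog, running_instances):
    wanted = desired_instances(backlog, ACTIVITY_TARGETS[activity])
    if wanted > running_instances:
        print(f"{activity}: scale out by {wanted - running_instances} instance(s)")
    elif wanted < running_instances:
        print(f"{activity}: scale in by {running_instances - wanted} instance(s)")

if __name__ == "__main__":
    reconcile("page-processing", backlog=240, running_instances=3)   # scale out by 9
    reconcile("upload", backlog=40, running_instances=2)             # scale in by 1
```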

It might look like we are adding more resources, such as provisioning one server instance per activity, and that it costs more to do less. Yes, but it will cost less to do more: economy of scale. The company can now handle more volume than ever before and scale the computing power up or down as needed, for only those modules that become the bottleneck.

Adopting this new approach of micro-services brings new challenges in deployment, support, and maintenance. For instance, with a limited number of servers, IT may know each server by name and which modules are deployed where, and developers may even log in to troubleshoot issues.

In this new approach, resources are created dynamically with code or templates, and a server name or IP address is assigned only for the lifetime of that server at runtime. The IT and development teams can no longer depend on a server name or a fixed IP address. From a deployment perspective, the application is deployed with configuration tools that automate the process: as each new resource (server) is created, the corresponding module is deployed depending on its purpose, and if a server or module goes down or fails, it is replaced with a new instance. For developers, the application logs, along with the system logs, should be collected periodically and stored outside the servers for troubleshooting.
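
A minimal sketch of that logging idea follows, assuming a hypothetical ship_to_external_store() in place of a real log pipeline (an agent shipping to object storage or a central log service); the log path is illustrative as well. Logs are collected from the instance periodically and pushed to storage that outlives it.

```python
# Ship application/system logs off the disposable server so they survive it.

import glob
import socket
import time

def ship_to_external_store(instance_id, log_path, contents):
    # Placeholder: would upload to durable storage keyed by instance and timestamp.
    print(f"[{instance_id}] shipped {len(contents)} bytes from {log_path}")

def collect_and_ship(log_glob="/var/log/myapp/*.log", interval_seconds=60, iterations=1):
    # The hostname is only meaningful for this server's lifetime, which is
    # exactly why the logs have to live somewhere else.
    instance_id = socket.gethostname()
    for i in range(iterations):
        if i:
            time.sleep(interval_seconds)
        for log_path in glob.glob(log_glob):
            with open(log_path, "rb") as f:
                ship_to_external_store(instance_id, log_path, f.read())

if __name__ == "__main__":
    collect_and_ship(iterations=1)
```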

 
