Auto-mount EBS using terraform

Attaching an EBS volume using Terraform is straightforward, but when the instance is started you find that the attached volume is not visible as a usable filesystem even though it is attached: the device still has to be formatted and mounted from inside the instance. The Terraform code in my main.tf attaches an EBS volume along with the root volume during instance creation; the auto-mount step is what is usually missing.
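A minimal sketch of that in-instance step, assuming a hypothetical device name /dev/xvdf and mount point /data (on Nitro instances the device may instead appear as /dev/nvme1n1), passed to the instance as user data:

#!/bin/bash
# Wait until the attached EBS volume shows up as a block device
DEVICE=/dev/xvdf
MOUNT_POINT=/data
while [ ! -b "$DEVICE" ]; do sleep 5; done

# Create a filesystem only if the device does not already have one
if ! blkid "$DEVICE"; then
  mkfs -t ext4 "$DEVICE"
fi

# Mount it now and persist the mount across reboots
mkdir -p "$MOUNT_POINT"
mount "$DEVICE" "$MOUNT_POINT"
echo "$DEVICE $MOUNT_POINT ext4 defaults,nofail 0 2" >> /etc/fstab

The nofail option keeps the instance booting even if the volume is attached late or missing.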


Chef Certifications

So, I thought of going through the Chef certification program and achieving these certifications myself, which means I need to prepare for them. I started with the “basic fluency badge” certification and achieved it on 5th Jan 2019. Aha, a good way to enter 2019. Achieving the first-level certification is much simpler: you get some objective questions to answer. Some of these questions are straightforward, and for some you need to execute a couple of commands to get an answer from the Chef server and then select the correct option from the options provided to you. My exam started at 10 AM IST and I managed to finish it in 50 minutes. There is one examiner online who will validate your identity before the exam, inspect your room before the exam and monitor you during the exam. You cannot talk or look anywhere other than your screen. I once put my hands on my mouth during the exam; the examiner quickly pointed this out and asked me to remove my hands from my face. Anyway, after the exam they say that you will get results in 2-3 days, but they deliver the result in 2-3 hours.

Now it is the 2nd badge that I have to earn, the ‘local cookbook development badge’. I thought I would share all the information that I gather during preparation here, so that all community members may also benefit from it. I have studied the required topics from the Chef documentation and Chef Rally; now I will go through the certification syllabus and write here whatever I learn on each index item. So let’s go.

Day 1
Topics to cover:

REPO STRUCTURE – MONOLITHIC VS SINGLE COOKBOOK
Candidates should understand:

  1. The pros and cons of a single repository per cookbook
  2. The pros and cons of an application repository
  3. How the Chef workflow supports monolithic vs single cookbooks
  4. How to create a repository/workspace on the workstation (see the sketch below)
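For the last item, a minimal sketch using the ChefDK chef command (the repository and cookbook names are just examples):

chef generate repo chef-repo                    # monolithic repo skeleton (cookbooks/, data_bags/, ...)
cd chef-repo
chef generate cookbook cookbooks/my_cookbook    # a cookbook living inside the monolithic repo

# Alternative: one repository per cookbook
chef generate cookbook my_cookbook
cd my_cookbook && git init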

Auto-scaling and load balancing

Let’s work on configuring a highly available and fault-tolerant web application. Before we begin, let’s try to understand the components involved in achieving this.

Elastic Load Balancer:

  • ELB is the EC2 service which automatically distributes incoming traffic across all the instances associated with the ELB.
  • An Elastic Load Balancer should be paired with auto-scaling so that high availability and fault tolerance are increased. We will get up to speed on the concept of high availability in just a few minutes.
  • An ELB has its own DNS record set, which means it can be accessed directly from the open internet.
  • An Elastic Load Balancer, when used with a VPC, can also act as an internal load balancer for EC2 instances running in private subnets.
  • The Elastic Load Balancer can check the health of the associated EC2 instances and automatically stops sending traffic to unhealthy ones (a CLI sketch follows below).
  • If we bind SSL certificates to the load balancer, we can also reduce the load on the backend EC2 instances that would otherwise be generated by SSL management (encryption/decryption).
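As a rough AWS CLI sketch (the load balancer name, ports and availability zones are example values only), creating a classic ELB and its health check might look like this:

aws elb create-load-balancer \
  --load-balancer-name my-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --availability-zones us-east-1a us-east-1b

# Health check so that unhealthy instances stop receiving traffic
aws elb configure-health-check \
  --load-balancer-name my-elb \
  --health-check Target=HTTP:80/index.html,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2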

Auto-scaling:

  • Auto-scaling is the EC2 service provided by AWS with which you can automate the process of increasing and decreasing the number of provisioned EC2 instances running for your application.
  • It can increase or decrease the number of instances based on thresholds chosen by you at configuration time.
  • When configuring auto-scaling there are two important components (a CLI sketch follows this list).
    • Launch Configurations: This acts as a template for the auto-scaling service, telling it what type of EC2 instance needs to be provisioned.
    • Auto-scaling Group: Here you define all the rules and settings that determine the scale-up and scale-down of the environment. You define
      • How many minimum and maximum instances are allowed
      • The VPC and AZs in which new instances shall be launched
      • Whether provisioned instances shall receive traffic from an ELB and, if so, from which ELB
      • Scaling policies and SNS notifications
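A rough AWS CLI sketch of these two components (all names, the AMI ID and the sizes below are example values only):

# Launch configuration: the template for new instances
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-launch-config \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --security-groups sg-0123456789abcdef0

# Auto-scaling group: min/max size, AZs and the ELB that fronts the instances
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name my-launch-config \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --availability-zones us-east-1a us-east-1b \
  --load-balancer-names my-elb

# A simple scaling policy that adds one instance when triggered
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name scale-out \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1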

Before we dig into the practical implementation, let’s also review two important types of ELBs

Classic Elastic Load Balancer:

It is designed for simple load balancing. It is best used when all of the EC2 instances serve the same content, as there are no granular routing rules and traffic is routed to all the instances evenly.

Application Load Balancer:

For complex traffic routing, the Application Load Balancer is the desired option. You can balance the traffic using content-based rules, which can be configured as:

  • Path-based rules: Traffic is routed based on the URL path in the HTTP request.
  • Host-based rules: Traffic is routed based on the host field in the HTTP header.

The Application Load Balancer also supports ECS containers, HTTPS, HTTP/2, WebSockets, access logs, sticky sessions and AWS WAF (Web Application Firewall).
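A hedged CLI sketch of such rules on an existing ALB listener (the ARNs, priorities and patterns are placeholders):

# Path-based rule: send /images/* to a dedicated target group
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=path-pattern,Values='/images/*' \
  --actions Type=forward,TargetGroupArn=<images-target-group-arn>

# Host-based rule: send api.example.com to another target group
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 20 \
  --conditions Field=host-header,Values='api.example.com' \
  --actions Type=forward,TargetGroupArn=<api-target-group-arn>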

AWS IAM

IAM

IAM (Identity and Access Management) is the service where you manage users, groups and access policies for your AWS account. You can manage:

  • Users
  • Groups
  • Roles
  • IAM Access Policies
  • API keys

You can also specify a password policy as well as manage MFA requirements on a per-user basis.

By default, a newly created IAM user does not have any access. We have to grant access explicitly.

When a new AWS root account is created, it is best practice to complete the tasks listed in IAM under “Security Status”:

  • Delete your root access keys
  • Activate MFA on your root account
  • Create individual IAM users
  • Create user groups to assign permissions
  • Apply an IAM password policy

To access the console login link for a user in AWS, go to IAM > Users and click on the username for which you want to get the console link. This will take you to the user summary page. Click on the “Security Credentials” tab in the summary and there you can see the “Console login link”.

IAM Policies

  • A policy is a document which states permissions for a particular user.
  • An explicit deny policy will always override an explicit allow policy. This is useful if a user has 10 policies attached granting him access to various resources: a single deny policy for all the resources will override them all, so we do not need to remove each allow policy individually.
  • Some predefined policy templates are provided by IAM which can be used straight away (attaching one from the CLI is sketched below):
  • Administrative access: Full access to all AWS resources.
  • Power user access: Admin access without user and group management.
  • Read-only access: View-only access to AWS resources.
  • Policies can never be attached to AWS resources themselves. We have roles for that purpose.
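As a small illustration (the user and policy names are examples), attaching one of the predefined policies and creating an explicit-deny policy from the CLI might look like this:

# Attach an AWS managed policy to a user
aws iam attach-user-policy \
  --user-name alice \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# An explicit deny in any attached policy overrides every allow
aws iam create-policy --policy-name deny-everything --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{ "Effect": "Deny", "Action": "*", "Resource": "*" }]
}'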

Users

  • Users can be created in AWS from the IAM service.
  • A newly created user has no access by default. All access needs to be granted by means of policies attached to the user.
  • Policies can be attached while creating a user, or attached after the user has been created.
  • Multiple policies can be attached to the same user.
  • User credentials shall never be stored on EC2 instances.

Groups:

  • We can create user groups in AWS using IAM. We can apply policies to groups, and these policies then take effect on the member users.
  • Groups are a better way to manage several users together. Using groups makes managing AWS resources easier (a CLI sketch follows below).
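A minimal CLI sketch (the names are examples) of creating a user and a group and attaching a policy to the group:

aws iam create-user --user-name alice
aws iam create-group --group-name developers
aws iam add-user-to-group --group-name developers --user-name alice

# A policy attached to the group takes effect for all its members
aws iam attach-group-policy \
  --group-name developers \
  --policy-arn arn:aws:iam::aws:policy/PowerUserAccess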

Roles:

  • In case you want to work with AWS services and wish to give them access to resources, you cannot do so using IAM policies alone, as policies cannot be attached to AWS services. For example, how will you give an EC2 instance access to an S3 bucket? You need roles for that.
  • A role is something that another entity can assume, and when doing so, it gets the specific permissions defined by the role.
  • You could store your access credentials on the EC2 instance and use them to access S3 from it, but that would be bad practice. You should create a role with S3 access and make the EC2 instance assume that role (see the sketch below).
  • Other (non-AWS) users can assume a role for temporary access to AWS accounts and resources, for example through Active Directory or single sign-on.
  • Using roles we can also provide cross-account access, where a user from one account can assume a role with permissions in another account.
  • Earlier, you had to attach the role to an EC2 instance during the instance creation process and could not modify it later. Now you can attach a role to an EC2 instance even after creating it, or change the existing role.
  • You can only attach one role to an EC2 instance.
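A rough CLI sketch of the EC2-to-S3 example (the role and profile names are made up for illustration):

# Trust policy that lets the EC2 service assume the role
aws iam create-role --role-name s3-reader --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}'
aws iam attach-role-policy --role-name s3-reader \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# EC2 picks up the role through an instance profile
aws iam create-instance-profile --instance-profile-name s3-reader-profile
aws iam add-role-to-instance-profile --instance-profile-name s3-reader-profile --role-name s3-reader
aws ec2 associate-iam-instance-profile \
  --instance-id <instance-id> \
  --iam-instance-profile Name=s3-reader-profile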

IAM Security Token Service(STS)

  • STS allows us to create temporary security credentials, which give trusted users access to AWS resources.
  • These temporary credentials are short-term and can be active from a few minutes to several hours, based on requirements.
  • STS is only accessible through the API and cannot be used through the AWS dashboard or console.
  • When we assign roles to resources, it is basically STS that works in the background. For example, when a role is assigned to an EC2 instance for accessing S3 storage, it is STS that lets EC2 access S3.
  • Here the role asks STS to generate temporary access for the EC2 instance on S3 as per the policy attached to the role.
  • Third-party authentication also uses the IAM Security Token Service to provide access.
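For illustration, assuming a role like the one created in the previous section, an explicit STS call from the CLI might look like this:

aws sts assume-role \
  --role-arn arn:aws:iam::<account-id>:role/s3-reader \
  --role-session-name demo-session \
  --duration-seconds 3600
# Returns a temporary AccessKeyId, SecretAccessKey and SessionToken
# that expire after the requested duration.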

IAM API Keys

  • API access keys can be used for programmatic access to AWS services and resources from:
    • AWS SDKs
    • AWS CLI
    • AWS Tools for Windows PowerShell
    • Direct HTTP calls
  • i.e. you can use IAM access keys to connect to AWS resources through the CLI from your own network.
  • Always note that API keys are only available one time:
  • When you create a new user or reissue the keys.
  • In the AWS console, if you try to check access keys, you will only find the access key ID but not the secret access key.
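A small sketch (the user name is an example):

aws iam create-access-key --user-name alice
# The SecretAccessKey in the response is shown only this one time, so store it safely.

aws configure   # store the key pair locally for CLI/SDK use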

sudo versus su

su

When a user uses the su command, he is prompted for the root credentials; once he provides them, he is given a root shell and now has the unlimited power of root. Still, if SELinux is enabled, this behavior can be controlled.

Usually, a Linux system shall be configured in such a way that only certain users have access to the su command. To disable the command for everyone, make sure there is no user in the wheel group on the system. Then go to /etc/pam.d, edit the “su” file and uncomment the following line:

auth    required        pam_wheel.so use_uid

To give a specific user permission to execute this command, add them to the wheel group (see the sketch below).
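For example, assuming a user named alice:

# Add the user to the wheel group so that she may use su
usermod -aG wheel alice

# Verify the group membership
groups alice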

Sudo

The sudo command offers another approach to giving users administrative access. When trusted users precede an administrative command with sudo, they are prompted for their own password. Then, once they have been authenticated, and assuming that the command is permitted, the administrative command is executed as if they were the root user.

The sudo command allows for a high degree of flexibility. For instance, only users listed in the /etc/sudoers configuration file are allowed to use the sudo command, and the command is executed in the user’s shell, not a root shell.

  • Each successful authentication using sudo is logged to the file /var/log/messages.
  • Commands issued with sudo are logged to the /var/log/secure file along with the name of the user who triggered the command.

Use the visudo utility to edit the sudoers file (example entries are sketched after the command below):

visudo  -f /etc/sudoers
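Example entries you might add through visudo (the user names and commands are illustrative only):

alice   ALL=(ALL)       ALL                                  # alice may run any command as root with her own password
%wheel  ALL=(ALL)       ALL                                  # everyone in the wheel group may do the same
bob     ALL=(root)      /usr/bin/systemctl restart httpd     # bob may only restart httpd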

Additional logging for sudo

Use the pam_tty_audit module to enable TTY auditing for specified users by adding the following line to your /etc/pam.d/system-auth file. The following configuration enables TTY auditing for the root user and disables it for all other users:

session required pam_tty_audit.so disable=* enable=root

Note: Configuring the pam_tty_audit PAM module for TTY auditing records only TTY input. This means that, when the audited user logs in, pam_tty_audit records the exact keystrokes the user makes into the /var/log/audit/audit.log file.

Chef Installation and setup

So today we will learn how to set up our battleground, i.e. the Chef environment. After finishing this tutorial you will end up with:

  • A fully functional Chef server
  • A configured workstation
  • Test nodes added to your Chef environment
  • Basic experience with the chef and knife tools

So let’s hit the track and spin up the environment.

Below is a brief description of the environment that I am going to configure here.

Server Type    Configuration              Hostname
Chef server    CentOS 7, 1.5 GB RAM       master.devops.com
Workstation    Ubuntu 16.04, 1.5 GB RAM   workstation.devops.com
Node           CentOS 7, 1 GB RAM         node1.devops.com

Chef-Server

The Chef server is the central location which acts as an artifact repository or “hub” that stores cookbooks, cookbook versions, facts about the node, data bags and metadata information about nodes.

All metadata of a node, when it is registered with the Chef server, is stored on the Chef server. The metadata is populated and sent to the server by chef-client, which is an application that runs on the node. (Covered in later lessons.)

Configuration enforcement is not handled by the Chef server; instead, the desired state configuration is enforced when chef-client runs and a “convergence” happens, allowing for easy scalability.

Chef Server Components

Clients: Nodes and workstations which access the Chef server for either configuration enforcement or uploading and managing cookbooks and other Chef data.

Load balancer: All requests to the Chef server are routed through the Nginx front-end load balancer.

Chef manage: Chef manage is the GUI (Rails-based) web application for managing the Chef server.

Chef server: The server engine which powers the Chef server.

Bookshelf: Used to store cookbook content such as files, recipes, templates, etc.

Message queues: Used (via RabbitMQ) to send data to the search index and data stores. (Search indexes covered in later tutorials.)

PostgreSQL: The data storage repository for the Chef server.

Chef-Server Installation

Step 1: Download the Chef server RPM

wget https://packages.chef.io/files/stable/chef-server/12.16.14/el/7/chef-server-core-12.16.14-1.el7.x86_64.rpm


Step 2: Install the rpm

rpm -Uvh chef-server-core-12.16.14-1.el7.x86_64.rpm


Step 3: Run chef reconfigure using the command

chef-server-ctl reconfigure


This will reconfigure our Chef server.


Step 4: Create a user to connect with the chef server

chef-server-ctl user-create <username> <firstname> <lastname> <emailid> '<password>' --filename <file_to_store_rsa>


Step 5: Create an organization and associate the user with it

chef-server-ctl org-create devops 'devops Batch' --association_user akash --filename devops-validator.pem

Step 6: Bring up chef GUI

chef-server-ctl  install chef-manage


Step 7: Run chef-server-ctl reconfigure again

Step 8: Reconfigure chef-manage [press enter when asked to accept the license, then press q, type yes and hit enter]

chef-manage-ctl reconfigure


Step 9: Once chef-manage is installed, access the Chef console in your browser. If it is not accessible, make sure the firewall is turned off.


Step 10: Log in with the same username and password that we created in step 4. If an error occurs during login, refresh the page.

WorkStation Installation

will be published tomorrow…

Chef Architecture

Before you start working with Chef, it’s important to understand the Chef architecture so that you have some idea about the workflow we will be following along the Chef learning path.

At a top-level view, a Chef setup can be divided into three main components, as shown in the image below.

[Diagram: Chef architecture]

Chef workstation: The Chef workstation is the location from which you will interact with the Chef server. All you need is to install the ChefDK package. It can even be your Windows laptop. On your workstation, you will write all the code to manage the configuration of the various nodes that you will be controlling. For this learning series, I would recommend you make an Ubuntu system your workstation. As of now, you do not need to worry about setting up your workstation, as later in the course we will work on configuring it.

Chef server: The Chef server acts as the controlling engine of your Chef environment and stores all the data related to your environment. When working on your workstation, once you are done writing code for configuration management (we call this developing cookbooks; we will explore it later in detail, but from now on we will use the term cookbook), you upload it to the Chef server. We will configure the Chef server later in this course.

Chef client nodes: These are the machines that are managed by Chef. We will install the chef-client package on all the nodes. The chef-client running on a node connects to the Chef server at regular intervals and pulls the cookbooks that you have uploaded to the server from your workstation. Once it has pulled them, it runs them on the node and configures it.

At this point we have not discussed how the client running on the node determines which cookbook it has to pull and run. We will discuss all that later in the course, but for now let us try to understand the Chef architecture.

Since you now have a very basic understanding of the Chef environment, let’s dig deeper into the components defined above (the workstation only; we will cover the other subcomponents later in the course) and try to identify the subcomponents they are made up of. Before moving further, have a look at the diagram below; it shows the various Chef components and subcomponents and how they relate to one another.

[Diagram: overview of Chef components]

Chef Workstation and its components :

When we install ChefDK on the workstation, it installs everything required to start with chef.

Ohai: Ohai is the utility that collects all the workstation information. Typically Ohai collects all sorts of information related to your machine, such as hostname, processor info, disk info and IP addresses.

chef utility: Used to create your cookbooks and interact with them.

Knife utility: You will use knife to interact with the nodes or work with the Chef server from your workstation.

Test Kitchen: You will use Test Kitchen to automatically test cookbooks on your workstation.

ChefSpec: A framework to write test cases for your cookbooks. You mainly perform unit testing of your cookbooks with it.

Cookstyle: A utility to validate your cookbooks for syntax errors.

Foodcritic: A utility to validate your cookbooks and check code style.

InSpec: Framework to write integration tests for your cookbooks.

Cookbooks, recipes, tests and Policyfiles are other components that you will write on the workstation and upload to the Chef server. We will do hands-on labs on all of these components and try to learn them (a few typical commands are sketched below).
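As a quick illustration, a few typical workstation commands (the cookbook name is an example):

ohai ipaddress                       # show a fact Ohai collects about this machine
chef generate cookbook my_cookbook   # scaffold a new cookbook
cookstyle my_cookbook                # lint the cookbook for syntax and style problems
foodcritic my_cookbook               # check cookbook style rules
kitchen test                         # converge and verify the cookbook in a throwaway VM (run inside the cookbook)
knife node list                      # list nodes registered with the Chef server
knife cookbook upload my_cookbook    # upload the cookbook to the Chef server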

Take some time to understand the various components used in Chef. Once done, move on to the next tutorial, Installation and setup. If you want to know more about all of the components, here is the Chef official documentation.

Understanding OSI

The OSI model, also known as the Open Systems Interconnection model, is a standard in networking used to create other networking standards. You can take it as a framework for creating networking standards.

When we talk about the OSI model, a basic diagram like the one shown below comes to mind, describing the various layers included in the OSI model.

[Diagram: OSI model layers]

Understanding the OSI model really helps in troubleshooting network issues. Each layer of the OSI model holds significant importance and defines its own set of protocols. The various devices used in networking each fall under one of the layers of the OSI model.

Knowledge of the OSI model helps to identify at which stage of network transmission an issue occurs. When dealing with layers, you usually refer to them by number, i.e. layer 1, layer 2, etc.

Layer Number   Layer Name           To Remember in Order
Layer 1        Physical Layer       Please
Layer 2        Data Link Layer      Do
Layer 3        Network Layer        Not
Layer 4        Transport Layer      Throw
Layer 5        Session Layer        Sauce
Layer 6        Presentation Layer   Pizza
Layer 7        Application Layer    Away

Today in this tutorial we are going to understand the OSI model and its layers with some practical examples. To start, consider the diagram below.

[Diagram: example network with client, routers and server]

Let’s say I am using a banking application and I want to transfer 50 bucks to some other account. Let’s see how this whole process takes place and how the different layers are involved.

Layer 7 (Application layer): All applications that run on the system and interact with the network come under layer 7. For example, if you are playing the game Counter-Strike, it comes under layer 7. In the same way, the banking app that I am using to do the transaction falls under layer 7. It interacts with the internet using some APIs, which are themselves taken care of by the OS on which the app is running.

Layer 6 (Presentation layer): Before data is sent onto the network, it shall be well formatted as per the standards defined for that data. For example, if I am trying to send an image, it may be JPEG, PNG or whatever format is defined for images. This sort of thing is handled by the presentation layer, and it is also managed by the OS on its own.

Layer 5 (Session layer): The session layer is also managed at the OS level; here user sessions are managed. In the banking application, as soon as I log in, my user session is created. This functionality is part of the session layer.

From layer 7 to layer 5, everything is handled at the OS level, but in the next layers we deal with the network. The information about transferring 50 bucks has left the system, and now we will see how this info reaches the end server.

Here comes the role of layer 4 (Transport layer): at this layer the following decisions are made.

  • Shall the transmission be reliable or non-reliable (TCP vs UDP)?
  • What are the source port and destination port?

Layer 3 (Network layer): Once the decisions above and some other homework are done, things go to the network layer, where information regarding the logical address is provided. In our banking app example, the information regarding the IP of the end server is provided at this layer. Devices like routers fall into this category.

Layer 2 (Data link layer): By now the info regarding the transfer of 50 bucks has reached Router B in the example diagram shown above. There may be several systems communicating with Router B, so Router B needs to decide where to send the packet, and work is done on retrieving the MAC addresses of the devices. So at this layer work is done with physical addresses, i.e. MAC addresses.

Layer 1 (Physical layer): Now the info has reached the server, but it needs to be read, and a computer understands only binary. So at this layer we deal with electrical signals and the information is converted into binary format. Modems are layer 1 devices which do this job. So now the info regarding the 50-bucks transfer has reached the server and is stored on the server.

To conclude, the OSI model is only a reference model and may not always be followed exactly when working in the networking domain. It is a conceptual framework that lets us better understand the complex interactions happening in networks. Using the OSI model, I can easily understand how data is transferred from the app to the server, taking into account the various complex protocols and processes involved.

Continuous deployment, micro-services and containers: three champs

At first, continuous integration, microservices and containers seem to be isolated topics. Also, with a DevOps implementation, it’s not a hard and fast rule that CI is only possible if you use microservices, or that to run microservices you need to implement container technology. All of these can work independently. But if you bring these three gems of the DevOps toolchain together, new doors of hope and new software delivery methods come to light.
With the combination of microservices and containers, we can remove several of the problems that used to exist earlier when working with microservices. The concept of immutable deployments further helps us work easily with microservices.
This implementation allows us to quickly test and develop a product, which helps us improve our CI and CD pipelines.

Difference between continuous integration, continuous delivery, and continuous deployment
Continuous integration: Continuous integration (CI) usually refers to integrating, building, and testing code within the development environment. Whenever a developer is developing code, he is required to commit his code back to the shared repository frequently. How frequently? That depends on the project requirements. As soon as the code is committed back to the shared repository, it is tested to confirm that this new piece of code is not breaking the existing software. Usually this is done by automated test suites which run against the new piece of code and validate that it is not breaking anything in the existing build. We create automated ways of testing this new code and its integration with the existing code, and we usually call all these steps together an integration pipeline.
On running the pipeline, your build will either pass or fail; in case it fails, you need to work on your changes and start all over again. This pipeline should run on every commit or push to the repository.
Even if the continuous integration pipeline is a success, you still cannot be sure of the production readiness of the build. All that you did in the CI pipeline is verify that the current changes are not breaking the existing test cases and that the running code is good, but there can still be many more things that need to be addressed before you can push your changes to production.
Below are some steps which should be there as part of CI (a rough pipeline sketch follows the list):

  • Pushing code to the repository
  • Static code analysis
  • Pre-deployment testing (unit tests, functional tests)
  • Packaging and deployment to the test environment
  • Post-deployment testing (functional, integration and performance tests)

Integration tests shall always be committed along with the code that is being implemented. To ensure that this happens, you can switch to a development strategy called TDD (test-driven development).
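As a toy illustration of the steps above (the make targets are placeholders, not a prescription for any particular tool), a CI pipeline script might look roughly like this:

#!/bin/bash
set -e                      # stop the pipeline at the first failing step

git pull origin main        # 1. latest code from the shared repository
make lint                   # 2. static code analysis
make unit-test              # 3. pre-deployment tests
make package                # 4. build the artifact
make deploy-test            # 4. deploy it to the test environment
make integration-test       # 5. post-deployment tests
echo "Build is green and ready for the CD stage"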

Continuous delivery: The continuous delivery pipeline is in most cases the same as the one we would use for CI. The major difference is in the confidence we have in the process and lack of actions to be taken after the execution of the pipeline. While CI assumes that there are (mostly manual) validations to be performed afterward, successful implementation of the CD pipeline results in packages or artifacts being ready to be deployed to production.
In simple terms, we can say that each build that passes the continuous delivery phase can be deployed to production, but actually deploying it to the production environment can be a political decision.

Continuous deployment: Continuous deployment moves one step ahead and automatically deploys the package to the production environment without manual intervention. All builds that have passed the verification are deployed into the production environment.

Microservices: For any feature introduced, the mean time to release it into production shall be as small as possible, whether that is a day, a few days or weeks. The smaller the mean time to release, the more benefit an organization gets, and it will always be ahead of its competitors in the delivery of new features to the market.
Speed can be accomplished in multiple ways. For example, we want the pipeline to be as fast as possible, both to provide quick feedback in case of a failure and to free resources for other queued jobs. We should aim at spending minutes instead of hours from checking out the code to having it deployed to production. Microservices can help accomplish this timing. Running the whole pipeline for a huge monolithic application is often slow. The same applies to testing, packaging, and deployment. On the other hand, microservices are much faster for the simple reason that they are much smaller: less code to test, less code to package and less code to deploy.

Containers: Before the onset of container technology it was really difficult to work with microservices. It was always easier to work with a monolithic application, since after setting some standards you had to build only a single package and run it. Working with a few microservices was also an easy option, but when the number of microservices rises to multiples of ten, it becomes difficult to manage them.

They might use different versions of dependencies, different frameworks, various application servers, and so on. The number of things we have to think about starts rising exponentially. After all, one of the reasons behind microservices is the ability to choose the best tool for the job.
With container technology like Docker, you can spin up containers quickly and build up environments for your app in a fraction of a minute, and inside them you can run your services. The way containers accomplish this kind of feat is through self-sufficiency and immutability.
Adding an orchestration tool along with containers makes working with microservices really simple and fast compared to monolithic applications.

Conclusion: Continuous deployment, micro-services, and containers are each great tools, and used together they open up new doors of possibilities.
With continuous deployment, we can provide continuous and automatic feedback on our application’s readiness and deploy to production, thus increasing the quality of what we deliver and decreasing the time to market.
Microservices provide us with more freedom to make better decisions, faster development and, as we’ll see very soon, easier scaling of our services.
Finally, containers provide the solution to many deployment problems, in general and especially when working with microservices. They also increase reliability due to their immutability.

Monolithic vs micro services

Microservices is a way of building a single application composed of many small services. These services are independent of one another and are also tested and deployed independently of one another.

In order to communicate with each other as parts of a single application, these microservices communicate using APIs that each microservice exposes.
Common characteristics of microservices are as follows:

  • They do one thing or are responsible for one functionality.
  • Each microservice can be built by any set of tools or languages since each is independent of others.
  • They are truly loosely coupled since each micro-service is physically separated from others.
  • Relative independence between different teams developing different micro-services (assuming that APIs they expose are defined in advance).
  • Easier testing and continuous delivery or deployment.

The biggest challenge with the use of microservices is deciding when to use them. When building a product, everything is initially quite simple, and at that time it’s hard to visualize the benefits of using microservices; one does not understand what problems a microservice implementation can resolve. So development often starts as a monolith, and when problems begin to arise, teams see the microservice way of development as the solution, but at that point implementation is difficult due to the cost of converting the monolithic app into a microservices-based application.

Monolithic vs micro services
With enough data on the internet about microservices, it seems that microservices are a better option compared to monolithic applications, but there is also a downside to using microservices, and one has to be careful when designing the architecture of an application to be developed as a group of microservices.
The disadvantage of using microservices is increased operational and deployment complexity.

Downside:
The various components in a microservice architecture communicate with each other using remote API calls, which of course are slower than the internal calls to classes and methods in a monolithic application.
This problem with microservices cannot be removed, as this is the standard way in which microservices actually work. But careful division of the application into microservices, for example based on functionality, can help avoid such problems to some extent.
Upsides:

Innovation: As the various components of the application run as microservices which usually communicate over APIs, you are given full freedom to choose whatever language you like; all you have to do is expose some APIs which the other microservices will use. Freedom promotes innovation.
Flexibility and lots of it: When building a monolith, one has to decide up front on the language, architecture and technologies to use, and they will remain for the lifetime of the application. But sometimes the same programming language may not be good for all problems, and one may want to test a new feature developed in some other language. The microservices way of development provides that flexibility, as the various microservices will in the end communicate over APIs regardless of how they are designed internally.

Size: Since microservices are small, they are much easier to understand. There is much less code to go through to see what one microservice is doing. That in itself greatly simplifies development.

Scaling: It’s always easier to scale an application based on microservices. Compared to a monolithic app, you do not need to scale up the full application; since microservice components are loosely coupled, you can easily scale up only the required component.
Deployment and rollback: A microservice, being a small piece of code, is always easier to build and maintain. Being small, it’s always easier to deploy a microservice compared to a big application. Also, in case of an issue, the pain point is already isolated, since you have already divided your application, so it’s easy to find which area is buggy. If we realize that there is a problem, that problem has a potentially limited effect and can be rolled back much more easily. With microservices, continuous delivery or deployment can be done with a speed and frequency that would not be possible with big applications.

Below is the diagram showing the possible architecture of a monolithic app.

[Diagram: possible architecture of a monolithic application]

Possible architecture of a microservice based application

[Diagram: possible architecture of a microservice-based application]