How a Record Save Happens in Ember

In Ember, when you call destroyRecord on a model, the network layer invokes the REST API DELETE method with your model's id in the params. I believe everyone knows this already.

Let's dig deeper into what happens after you call deleteRecord or destroyRecord on a model. To understand more, let me define a term that I and many others have used: internalModel. In general, internalModel is a term defined by Ember Data; it is a model class defined internally and is never meant to be used in application code. Now, why do we need to understand internalModel? Well, whenever a record is created, Ember generates an internalModel for it, a basic underlying structure of the object.

When a model's deleteRecord is called, the first operation invoked is the deleteRecord function of the internalModel, which changes the state of the model to deleted.uncommitted. After deleting the internalModel (that is, changing its state as mentioned above), the next step is usually to persist the new state of the model to the backend layer.
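The flow so far can be sketched as a tiny state machine. The names below are illustrative stand-ins, not Ember Data's real internals:

```javascript
// Simplified sketch of how an internal model's state moves when
// deleteRecord is called. Nothing touches the network here; deleting
// only flips the state flag until save() is called later.
function makeInternalModel(attributes) {
  return {
    attributes,
    currentState: 'loaded.saved',
    deleteRecord() {
      this.currentState = 'deleted.uncommitted';
    },
  };
}

const record = makeInternalModel({ id: 1, name: 'demo' });
record.deleteRecord();
console.log(record.currentState); // 'deleted.uncommitted'
```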

Here is something interesting: the save method is only a dispatcher for your model; it hands off to the next set of actions and cannot do anything by itself. So what is this dispatcher, and what are the actions it calls? When you invoke save on a model, how can Ember know whether to create, update, or delete a record? It is not viable to have the save function do all these jobs; it is best if it passes the baton on to something related to the model.

Based on the state stored in the internalModel (it is worth looking into the different internalModel states), the save function calls the update, delete, or create function related to the model.
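As a rough sketch, that dispatch step could look like the following. The state strings and function names are simplified stand-ins for Ember Data's actual internals:

```javascript
// Hedged sketch of how save() could route to an adapter operation
// based on the internal model's state.
function operationFor(state) {
  if (state.startsWith('deleted')) return 'deleteRecord';
  if (state === 'created.uncommitted') return 'createRecord';
  return 'updateRecord';
}

console.log(operationFor('deleted.uncommitted')); // 'deleteRecord'
console.log(operationFor('created.uncommitted')); // 'createRecord'
console.log(operationFor('updated.uncommitted')); // 'updateRecord'
```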

You might now be thinking that these three functions are common for every model and that each is just an AJAX request. I thought the same, but no, and why not is another interesting question. I think the answer is conventions: Ember's design follows convention over configuration. Whenever one of these functions is called, the model's name is used to look up a file in the adapters folder, load it, and call its corresponding function. Yes, it is the adapter that is our house gate. I believe most of you know what an adapter is, so I will not cover that.

Now you might ask: in my entire time at Navyug I have never written an adapter, or have only specified an application adapter, so how is my application working fine? It should break when it tries to load the adapter for a particular model, but it doesn't. Well, Ember is smart: if it doesn't find the model's own adapter, it takes the application adapter, and if it doesn't find an application adapter, it uses the default adapter defined by Ember Data.

Generally, people don't write an individual adapter per model; they use a common adapter. An individual model adapter is normally used when you want to override the existing operations: create, update, and delete.
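The fallback chain described above can be sketched as a simple lookup. The registry shape here is a toy stand-in for Ember's resolver, not its real API:

```javascript
// Toy sketch of the adapter fallback chain:
// model-specific adapter -> application adapter -> framework default.
function adapterFor(modelName, registry) {
  return registry[modelName]      // 1. model-specific adapter
      || registry['application']  // 2. application adapter
      || 'ember-data-default';    // 3. Ember Data's built-in adapter
}

const registry = { application: 'application-adapter' };
console.log(adapterFor('user', registry)); // 'application-adapter'
```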

Below is an example of overriding the deleteRecord function:

# adapters/user-category-activity-mapping-report.coffee
import ApplicationAdapter from 'frontend-upgrade/adapters/application'

userCategoryActivityMappingAdapter = ApplicationAdapter.extend(
  deleteRecord: (store, type, snapshot) ->
    # Send the activity hash instead of the record's own id, so the
    # backend deletes every record sharing that hash.
    id = snapshot._attributes.activityHash
    @ajax(@buildURL(type.modelName, id, snapshot, 'deleteRecord'), 'DELETE')
)

export default userCategoryActivityMappingAdapter

In the above example, I have a model that is a JSON object built by clubbing together all backend objects that share a common activity_hash; for me, deleting this single object is equivalent to deleting all records with the same activity_hash in the backend.
Instead of sending the randomly generated id, I need to send the activity_hash so that the whole bunch of objects is deleted in the backend; after deletion, the success callback is called as before. Similarly, you can modify your model's create and update functions to accommodate extra variables.

Automating Registration by OTP from SMS

There is an Android app and website, mysms.com, which gives us a free SMS push service: you can read your text messages directly on a website.

Prerequisites for using this method:

1) You should have an Android phone.
2) The phone should be connected to the Internet while the script is running.
3) The phone should have the MySMS app installed.
4) The user should have an account on mysms.com with that phone number.

How does it work?

Whenever a message arrives in your inbox, it is pushed to your web account within a fraction of a second; your automation scripts can then retrieve the message (which contains your OTP) from the webpage.
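Once your script has fetched the message text from the web account (with whatever HTTP client your automation framework provides), extracting the OTP is a one-line regex. The message wording and the 4-6 digit length below are assumptions; adjust the pattern to your actual SMS template:

```javascript
// Pull the OTP out of a fetched SMS body; returns null if no
// 4-6 digit code is present.
function extractOtp(smsText) {
  const match = smsText.match(/\b(\d{4,6})\b/);
  return match ? match[1] : null;
}

console.log(extractOtp('Your OTP for registration is 482913')); // '482913'
```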

Set it up once, and use it many times.

Advantages of using this method:

1) The physical presence of your phone is not needed.
2) You don't need any extra Appium script to automate your phone's messaging app.
3) You get your OTP by hitting a simple web URL.
4) It is easy to implement and fast for sure.

Limitations of this method:

1) It depends on the mobile network; if there is no cellular signal, you won't get any OTP.
2) You need a dedicated Android device/phone number in order to use it (the app is not free for iPhone devices).

I have tested this on my Android phone and it works fine. There are other apps on the market that provide the same kind of SMS push service; you can pick any of them.

Also, those who want to automate OTPs for mobile numbers outside India can use this website: http://www.receive-sms-online.info/

I am looking forward to more efficient methods for solving this problem. Please reply if you have any.

Cordova Android/iOS Google login using the google-plus Cordova plugin

As everyone knows, Google authentication through web views is being deprecated, and Google suggests that native Android developers use the Google Sign-In API instead. In Cordova-based applications we can use the google-plus Cordova plugin, which wraps Google Sign-In and makes it accessible from JavaScript.
I will first describe the general app flow I implemented with InAppBrowser, to explain what is needed, and then how to implement google-plus for Android and iOS.

General App Flow using InAppBrowser

1) Open https://accounts.google.com/o/oauth2/auth?client_id=…&response_type=code&scope=…&redirect_uri=… in InAppBrowser. This opens Google's login page; on login, it redirects to our redirect URL with an auth code. (This step is no longer supported: Google is going to block requests made to this URL.)
2) Bind the 'loadstart' event, copy the auth code from the window's URL when login completes and the redirect occurs, then close the InAppBrowser.
3) Make a POST request via AJAX to Google's token endpoint to exchange the auth code for an access token.
4) Send the access token to the backend to register/sign in the user.
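Step 2 above boils down to parsing the `code` query parameter out of the redirect URL captured in the 'loadstart' event. A minimal sketch (the redirect URL is hypothetical):

```javascript
// Extract the OAuth auth code from a redirect URL; returns null
// if no code parameter is present.
function authCodeFromUrl(url) {
  const match = url.match(/[?&]code=([^&]+)/);
  return match ? decodeURIComponent(match[1]) : null;
}

console.log(authCodeFromUrl('https://example.com/cb?code=4%2FabcXYZ')); // '4/abcXYZ'
```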

In my app I had to remove steps 1 and 2, since InAppBrowser can no longer make the request to get the auth code.
To get the auth code I used the cordova google-plus plugin instead. I took the following steps to implement Google login on Android and iOS.

Android

1) Open the developer console and go to the Credentials page. Under Create credentials, create a new OAuth client ID and select the application type Android.
2) Fill in the signing-certificate fingerprint value as per the instructions (click Learn more for how to create the fingerprint for the development/debug app and the production/release app). Add your package name as per the instructions.
3) Copy the client ID (xxxxxxxxxxxxx.apps.googleusercontent.com) created for Android.
4) In config.xml, add the plugin entry for google-plus; its reversed-client-id value is the reverse of the client ID you copied in step 3. Run cordova build android in the Cordova directory. The google-plus plugin is now installed in your application.

After installing google-plus plugin

The google-plus plugin provides the following method to make the request to Google and get the auth code. The webClientId is the client ID generated from an OAuth client ID for web; it is completely different from the client ID generated in step 1.

window.plugins.googleplus.login(
  {
    'scopes': '…', // optional, space-separated list of scopes; if not included or empty, defaults to `profile` and `email`
    'webClientId': 'client id of the web app/server side', // optional clientId of your web application from the Credentials settings of your project; on Android this MUST be included to get an idToken, on iOS it is not required
    'offline': true, // optional, but requires the webClientId; if set to true the plugin will also return a serverAuthCode, which can be used to grant offline access to a non-Google server
  },
  function (obj) {
    alert(JSON.stringify(obj)); // do something useful instead of alerting
  },
  function (msg) {
    alert('error: ' + msg);
  }
);

The success callback provides the Google auth object, which contains the following values:

obj.email           // 'eddyverbruggen@gmail.com'
obj.userId          // user id
obj.displayName     // 'Eddy Verbruggen'
obj.familyName      // 'Verbruggen'
obj.givenName       // 'Eddy'
obj.imageUrl        // 'http://link-to-my-profilepic.google.com'
obj.idToken         // idToken that can be exchanged to verify user identity
obj.serverAuthCode  // auth code that can be exchanged for an access token and refresh token for offline access

The auth object in the success callback contains obj.serverAuthCode, which is used to generate the access token. To generate the access token I made the following request in the frontend.

$.post('https://accounts.google.com/o/oauth2/token', {
  code: code
  client_id: constants.GOOGLE_CLIENT_ID
  client_secret: constants.GOOGLE_SECRET
  grant_type: 'authorization_code'
  redirect_uri: ''
}).done(@onSuccess.bind(this))

When the user clicks the Google login button, bind an action to the button that calls the google-plus plugin's login method shown above and then generates the access token; all other steps, registering/signing in from the access token, are already implemented in the boilerplate code.

iOS

Similar to Android, we need to create an OAuth client ID for iOS. The bundle ID on the OAuth credentials page for an iOS application is the application ID you set in the config.xml file (e.g. com.LungeSystems). After creating the OAuth client ID, set the reversed client ID value in the config.xml file. All other steps are the same as for the Android application.
On iOS, please make sure that the REVERSE_CLIENT_ID value is com.googleusercontent.apps.xxxxxxxxxxx, i.e. the reverse of the client ID generated for iOS. If it is not the reversed client ID, it will throw an 'invalid redirect uri' error when trying to get the auth object (which contains the auth code value, as described for Android). I have also seen in the web docs that the webClientId option in the login function is not required for generating an idToken on iOS, but I am not sure whether the auth code can be generated without the web client ID; that still needs testing.

Hardshell Featured on Clutch!

Since day one, the team of software professionals at Navyug Infosolutions has been dedicated to our founding mission and vision: providing value to customers and society as a whole through wisdom, integrity, and technology. We offer a diverse array of services to meet our customers' needs, including web application development, mobile app development, testing services, ERP implementation, consultancy, and technology migration. We've also worked continuously to ensure that the process governing our projects is clear and well-defined. We emphasize process as much as we do product because products alone do not deliver outcomes. That said, we have recently partnered with the B2B market research company Clutch to ensure that we remain flexible and responsive to our clients.

The companies featured on Clutch come from many different segments and span various areas of expertise. All of them, however, are evaluated on the basis of qualitative and quantitative factors, including market presence, portfolio, and the clients they serve. Another valuable and central aspect of their business model is the verified client reviews. An analyst directly interviews a company's clients, who answer questions related to the scope, cost, and management of the relevant project. The end result is comprehensive and insightful reviews of a company's expertise and work ethic in practice.

We’re grateful to be a participant on this platform and to be amongst so many reputable companies. Our profile effectively captures the spirit and focus of our business. We also want to thank our clients who have set aside time to share their experiences. Without their diligence, we would not now have evidence of our strengths, as well as feedback that reflects on how we can improve. We recognize the value of these reviews to the growth of our company, and we are already very excited by some of the insights revealed by them.

The Founder and CEO of Crosscues commented,
"Hardshell goes the extra mile in doing the right thing for the customer."

After helping to develop Patient Next Door, an app that acts as a network for patients and doctors, the founder also shared with Clutch,
"Hardshell didn't view my product from a vendor's standpoint, but rather as something they were building for themselves."

In a world of increasing complexity and emerging technologies, it’s more important than ever to make sure that the customer and their overall business objectives are not lost in this capricious social milieu. Partnering with Clutch builds trust and confidence with current clients, as well as prospective buyers. You can confirm this is true by reading some of our other reviews. In the end, if you decide you are looking to the future of your business and prepared to entrust Hardshell to achieve your vision, contact us today.

Revolutionizing Healthcare Through Internet of Things

An often-quoted Gartner figure says that "6.4 billion connected 'things' will be in use in 2016," and the use cases are far too many to be outlined in any one document.

The idea of devices connecting directly with each other is, as Kevin Ashton, who coined the term Internet of Things, puts it, "a big deal."

The rapid digitization of business processes and the attendant innovations spur new business models based on outcomes, reduced risk, and metered usage. As industries across segments adopt IoT, business models are changing fundamentally as a result.

Some developments happening in this domain are:

Home and building automation

Tools like Nest and Amazon Alexa interact with us, take data about the home environment, and program themselves to operate efficiently within the context of that information. This technical framework also gives energy providers the connectivity to better manage the energy grid.

Automotive design and manufacturing

The automotive industry is being disrupted by developments in sensors; traditional car manufacturers are no longer competing only among themselves, as entities like Google, Apple, and Uber bring disruption to the industry.

In the traditional paradigm, designing automated applications into vehicles to provide maintenance monitoring, fuel and mileage management, driver security, and other capabilities costs little to integrate but has significant earning potential. The addition of a cloud-based server to analyze the data and automatically act on it (scheduling a maintenance appointment at the appropriate time, for example) would move this further in the direction of the IoT. The already existing use case of telematics is another example of this.

Public transportation/smart cities

In the State of Victoria in Australia, the PTV mobile app allows passengers to view service times, use the journey planner, and set their favourite stops across the state for faster access to public transport information on the go. The application lets commuters monitor real-time information such as metropolitan train information, bus information, cancellation information, and platform information. According to a report in eWeek about a Cisco conference call with journalists, "…as more connections are established, the value to businesses and the global economy will only go up."

Unlike the use cases mentioned above, one domain where IoT has the potential to transform lives is healthcare: enabling healthcare providers remote access to patient data, building platforms that can generate alerts on a patient's condition, and so on.

IoT and Healthcare

The implications of IoT in healthcare are opening uncharted territories, empowering individuals and healthcare service providers to give "the right care for the right person at the right time," which leads to better outcomes and improved satisfaction, making healthcare cost-effective.

Three distinct drivers for the digitization of the medical devices industry seem to have emerged:

The need to increase operational efficiency: i.e., preventive maintenance of devices, remote diagnostics and software upgrades, etc.
The ability to innovate digitally: i.e., the need to digitally communicate vitals and device information.
The creation of industry ecosystems: i.e., the ability to link devices and systems together: implants, wearables, diagnostics, monitoring devices, health records, etc.

The emergence of the IoT: How is it happening?

Integration of Data

Advances in sensor and connectivity technology are allowing devices to collect, record, and analyze data in ways that were not possible before. In healthcare, this means being able to collect patient data over time, which can be used to enable preventive care, allow prompt diagnosis of acute complications, and promote understanding of how a therapy (usually pharmacological) is improving a patient's parameters.

Automation of data acquisition

The ability of devices to capture data predictively removes the limitations of human intervention, automatically capturing the data healthcare professionals need, at the time and in the way they need it.

Examples of IoT and its potential in healthcare:

Clinical care

Hospitalized patients requiring close attention can be constantly monitored using IoT-driven, non-invasive monitoring. This type of solution employs sensors to collect comprehensive physiological information, uses gateways to share the data with a cloud-based solution that analyzes and stores it, and then triggers alerts to caregivers for further analysis and review. It replaces the process of having a health professional come by at regular intervals to check the patient's vital signs, providing instead a continuous automated flow of information.

Remote monitoring

There are people all over the world whose health may suffer because they don't have ready access to effective health monitoring. But small, powerful wireless solutions connected through the IoT are now making it possible for monitoring to come to these patients instead of vice versa. Telemedicine solutions can securely capture patient health data from a variety of sensors, apply complex algorithms to analyse the data, and then share it through wireless connectivity with medical professionals who can make appropriate health recommendations.

Essential Capabilities required for IoT

Low-power operation is essential to keep the device footprint small and extend battery life. These characteristics help make IoT devices as usable as possible. Graphical user interfaces (GUIs) improve usability by enabling display devices to deliver a great deal of information in vivid detail and by making it easy to access that information.

Reliability of sensors plays an important part in healthcare IoT. Unlike other industries, which have a degree of fault tolerance, in the healthcare domain there is no room for error. Manufacturers like Honeywell and Freescale have developed some robust solutions in this area, but for smaller-scale entities further work needs to be done.

Security and sanctity of information is another aspect that plays a critical role. When patient data is transferred through multiple systems, it is important that this information remains secure. The FDA and similar norms help ensure this, but these solutions should be made still more secure, reliable, and fault tolerant; the recent "WannaCry" malware attack on UK hospitals is an instance of this vulnerability.

Looking Forward

The complexity of IoT in healthcare comes from a wide spectrum of medical devices using different data communication protocols. The recent example of Huawei and Ericsson being unable to agree on NB-IoT standards is just one indication. We expect interoperability between devices to improve as integration standards become stronger.

Privacy in the changed paradigm also needs to be addressed, and entities like the U.S. Food and Drug Administration (FDA) are leading the way in providing guidance for managing cybersecurity in medical devices.

How security implications affect the Internet of Things

As every player with a stake in IoT is aware, security is paramount for the safe and reliable operation of IoT connected devices. It is, in fact, the foundational enabler of IoT. Where there is less consensus is how best to implement security in IoT at the device, network, and system levels.

Network firewalls, Intrusion Prevention Systems, and protocols can manage the internet traffic coursing through the network, but how do we secure deeply embedded endpoint devices that usually have a very specific, defined mission with limited resources available to accomplish it?
These are critical concerns that must be addressed to enable many current and future applications. Existing solutions are often not integrated into the entire system, and sometimes they violate the criteria that designers have taken into consideration from the beginning.

BUILDING SECURITY IN FROM THE FOUNDATION BLOCKS

Knowing that no single control is going to adequately protect a device, how do we apply what we have learned to implement security in this scenario? We do so through a layered approach to security: one that starts at inception, when power is applied, establishes a trusted computing baseline via a handshake, and anchors that trust in something immutable that cannot be tampered with.
Embedded security refers to building security in from the start, i.e. security features built into the device itself. Some of the major building blocks of embedded security for IoT are:

Cryptographic Algorithms
This is the essential foundation of a strong security solution within IoT. The design constraints placed on IoT sensors require a lightweight, highly optimized, easily deployable cryptography scheme that provides high levels of security while minimizing memory usage and power requirements.
Secure Storage
Cryptographic algorithms involve keys at the root of their operation. Since the algorithms are published and known to all, including potential attackers, defending the secrecy of the key is a significant security issue. Secure storage fundamentally deals with shielding access to keys and other pieces of data.
Secure Boot
The idea of Secure Boot is to bring the system to a recognized and trusted state. The Secure Boot routine is ROM-based, so an attacker cannot interrupt the process. Extra features are essential to provide a complete Secure Boot solution.
Secure JTAG
Most embedded devices have a JTAG interface for debugging. However, if it is not properly secured, this interface risks becoming an attack vector within the solution. Some solutions allow the regulation of JTAG access using One Time Programmable eFuses:

1) Disabled JTAG: This mode provides the highest level of security. All critical JTAG features are permanently blocked. This mode is not always recommended, as sometimes boundary scan is required, and blocking it could affect the RMA procedure.
2) Disabled debugging: This mode prevents debugging but allows the boundary scan functionality to be enabled. This is the recommended mode for ensuring the maximum level of security.
3) Enabled: This mode provides the lowest level of security and is the default setting of devices.
4) Secure: This mode provides high security. JTAG use is regulated by a 56-bit secret-key-based challenge/response authentication mechanism.

Secure Execution Environment (SEE)
This refers to a processing unit capable of executing applications in a protected manner. A Secure Execution Environment comprises a plurality of distinct virtual machines that are created and operate simultaneously and distinctly from one another: at least one virtual machine runs trusted guest software in a secured memory area, while another runs a non-trusted guest operating system (OS) in parallel in an unsecured memory area.

END-TO-END SECURITY SOLUTION: The Way Ahead

Security at both the device and network levels is critical to the operation of IoT. The same intelligence that enables devices to perform their tasks must also enable them to recognize and counteract threats. Fortunately, this does not require a revolutionary approach, but rather an evolution of measures that have proven successful in IT networks, adapted to the challenges of IoT and to the constraints of connected devices.

Top Web Developer in Los Angeles: Hstpl Recognized

Developing strong and reliable software while providing quality service to clients is what Navyug Infosolutions strives to achieve. We work tirelessly to ensure the projects we work on align with the desires and specifications of our clients. This is why we are pleased to share that our hard work has been recognized by Clutch: we have been named a top web developer in Los Angeles!

Microservices, also frequently called "microservices architecture," have started gaining traction in the last few years. The term describes a way of designing software applications as suites of independently deployable services. There is no precise definition of this architectural style, but there are common characteristics: organization around business capability, intelligence in the endpoints, decentralized control of languages and data, and automated deployment. Microservices enable organizations to achieve continuous delivery/deployment of large, complex applications.

Microservices architecture breaks large software stacks into small, independent, and loosely coupled services. Each service has a separate codebase, which can be managed by a relatively small development team. Thus, services can be independently deployed by small teams who can work efficiently on the project. At the same time, these small teams can update an existing service without rebuilding or redeploying the whole application.

Although separating an application into smaller parts is not a new concept (other programming models, such as Service-Oriented Architecture (SOA), cater to the same notion), new tools and techniques have made microservices architecture a success.

Microservices architecture has clear benefits for enterprises:

Compact and uncomplicated code base:

Since each microservice is responsible for only one thing, it tends to require less code. Thus, it is easy to understand and reason about, and carries a tremendously low risk from changes.

The effortless process of scaling:

With a large, monolithic application, one has to scale everything together. For example, suppose an application has two parts: registration and login subcomponents. One realizes that the problem lies with the registration process, but in a traditional architecture it cannot be scaled alone; instead, the whole application must be scaled, which is complex and resource-intensive. With the recent evolution in technology infrastructure (for example, AWS), elastic scalability makes it quite simple to scale out a microservice when the demand for a particular service temporarily increases.

Easy to discard:

Solutions keep evolving at a fast pace. What was cutting-edge yesterday might be considered outdated and slow today. Or maybe a vendor product one relied on no longer fits the bill, or one wants to move to an open-source alternative. Most of the time, it is easier to start from scratch using new, modern tools and languages than to reuse old, outdated technology. This is where microservices come to the rescue: they facilitate the whole process.

Easy to Deploy:

With monolithic applications, changing even one line of code requires redeployment of the whole application, at least on a platform like the JVM. This can cause complications for many organizations, bringing risk and disruption. Using microservices makes the whole process easier, as the scope of each deployment is much smaller. Also, if a problem arises, it is contained to a single service and easier to address.

Usage of different technology stack:

The approach with microservices is to use the best available tool and language for the job instead of one size fits all. The same applies to databases. It is also much easier and more convenient to work with small teams: each team can look after one microservice and access other services through high-level APIs.

Increased Resilience:

If a monolithic application stops working, a lot of functionality stops with it. On the other hand, if one microservice stops working, the other functionalities continue. Thus, it is easy and simple to build some resilience around each smaller service.

Challenges around microservices for organisations:

One of the biggest challenges an organisation may face with a microservice architecture is providing a means to troubleshoot a user workflow that cuts across multiple services. This is an issue because there are no stack traces that span services.

Interservice Communication

Microservices have to communicate closely and rely on each other. Thus, a common communication channel has to be established, using HTTP, an ESB, etc.
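One common pattern on such a channel is to wrap each call to a peer service in a small retry helper, since transient failures are the norm in a distributed system. A hedged sketch, where callFn stands in for any promise-returning call to another service:

```javascript
// Retry a failing interservice call a few times before surfacing
// the error to the caller.
async function withRetry(callFn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await callFn();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}

// Usage with a stub that fails twice and then succeeds:
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error('service unavailable');
  return 'ok';
};
withRetry(flaky).then((result) => console.log(result)); // 'ok'
```

In production this is usually combined with backoff and a circuit breaker, but the core idea is the same.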

Monitoring Health

In a microservices system, every module relies on its own code, platform, and APIs, and multiple teams sometimes work concurrently on various modules. This requires strong monitoring to effectively track and operate the entire infrastructure; if a service disruption is not identified immediately, it becomes difficult to track down issues when they arise.

There may be many services to monitor, each possibly written in a different programming language, so additional monitoring services may have to be created.

Distributed Logging

The logging mechanism for different services will be different, resulting in gigabytes of distributed, unstructured data.
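A common mitigation is to have every service emit structured JSON log lines carrying a shared correlation ID, so a log aggregator can stitch one request's trail back together across services. A sketch with illustrative field names (not a standard):

```javascript
// Build one structured log line; correlationId is propagated with
// the request so all services tag their logs with the same value.
function logLine(service, level, message, correlationId) {
  return JSON.stringify({
    ts: new Date().toISOString(),
    service,
    level,
    correlationId,
    message,
  });
}

console.log(logLine('orders', 'info', 'order created', 'req-42'));
```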

Spanning of Transactions

There is a high chance that microservices result in transactions spanning multiple databases and services. This can lead to an issue caused in one place surfacing as a problem somewhere else.

Determining Root Cause

Microservices require extra effort to find the root cause of a problem, due to distributed logic and data. Performance-related root causes can still be worked out and managed using tools like New Relic and Dynatrace.

Overlapping dependencies between services

It is very difficult to reproduce a problem that disappears in one version and comes back in a newer one.

Testing can be complex

Testing in a microservices architecture is not simple. Each service has its own dependencies, and as features are built, additional dependencies emerge. Monitoring these changes becomes difficult, and as the number of services grows, so does the complexity. A fault can be an error in a database, latency within the network, or unavailability of a service. A microservices architecture should be resilient enough to handle such faults; as a result, resiliency testing is a must.

The way Ahead

For Enterprise applications, one worth serious consideration is MicroServices architectural style. A Monolithic architecture is considered to be useful for simple and lightweight applications. Complex applications maintenance will cause a nightmare for any organizations. Despite the drawbacks and implementation challenges of microservices pattern, it is anytime a better choice for complex and evolving application.

Why you should switch to Microservices and are you ready for it?

Microservices have been around for a long time, but their popularity has grown noticeably in recent years. Several factors can be credited for this trend, scalability probably being the most important one. The adoption of microservices by “big guys” like Amazon, Netflix, eBay, and others provides enough confidence that this architectural style is here to stay.

Microservices, also frequently called “Microservices Architecture”, have started gaining traction in the last few years. The term describes a way of designing a software application as a suite of independently deployable services. There is no precise definition of this architectural style, but there are common characteristics: organization around business capabilities, intelligence in the endpoints, decentralized control of languages and data, and automated deployment. Microservices enable organizations to achieve continuous delivery and deployment of large, complex applications.

Microservices architecture breaks a large software stack into small, independent, and loosely coupled services. Each service has a separate codebase that can be managed by a relatively small development team. The services can thus be deployed independently by small teams that work efficiently on their own part of the project. At the same time, these small teams can update an existing service without rebuilding or redeploying the whole application.

The separation of an application into smaller parts is not a new concept; other programming models, such as Service-Oriented Architecture (SOA), cater to the same notion. What has made microservices architecture a success is the new generation of tools and techniques around it.

Microservices architecture has clear benefits for enterprises:

Compact and uncomplicated code base:

Since each microservice is responsible for only one thing, it tends to require less code. It is therefore easier to understand and reason about, and changes to it carry a much lower risk.

The effortless process of scaling:

In a large, monolithic application, everything has to be scaled together. For example, consider an application with two parts: registration and login subcomponents. If the problem lies with the registration process, in a traditional architecture it cannot be scaled alone; instead, the whole application has to be scaled, which is a complex and resource-intensive process. With recent evolution in technology infrastructure, for example AWS and its elastic scalability, it is quite simple to scale out a single microservice when the demand for that particular service temporarily increases.
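
To make the contrast concrete, here is a toy Python sketch (the addresses and instance counts are hypothetical) in which only the registration service has been scaled out to three instances, while login keeps a single one; a simple round-robin dispatcher spreads the registration load:

```python
from itertools import cycle

# Hypothetical instance addresses: registration has been scaled out
# independently of login, which a monolith could not do.
REGISTRATION_INSTANCES = ["10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000"]
LOGIN_INSTANCES = ["10.0.1.1:8000"]

class RoundRobinBalancer:
    """Cycles through a service's instances, spreading load evenly."""
    def __init__(self, instances):
        self._next = cycle(instances)

    def pick(self):
        return next(self._next)

registration = RoundRobinBalancer(REGISTRATION_INSTANCES)
login = RoundRobinBalancer(LOGIN_INSTANCES)
```

In a real deployment the balancer would be a load balancer or service mesh, but the principle is the same: only the hot service pays for extra capacity.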

Easy to discard:

Solutions keep evolving at a fast pace. What was cutting edge yesterday might be considered outdated and slow today. Or maybe a vendor product that one relied on earlier does not fit the bill anymore, or one wants to move to an open source alternative. Most of the time it is easier to start from the beginning with new, modern tools and languages than to reuse old and outdated technology. This is where microservices come to the rescue: rewriting one small service is far cheaper than rewriting a whole application.

Easy to Deploy:

In a monolithic application, changing even one line of code requires redeploying the whole application, at least on a platform like the JVM. For many organizations this adds risk and disruption. Microservices make the whole process easier because the scope of each deployment is much smaller. Also, if a problem arises, it is contained within a single service and easier to pinpoint.

Usage of different technology stacks:

The approach with microservices is to use the best available tool and language for each job instead of one size fits all. The same applies to databases. It is also much easier and more convenient to work in small teams: each team can look after one microservice and access the other services through their high-level APIs.

Increased Resilience:

If a monolithic application stops working, a lot of functionality stops with it. If a single microservice stops working, the other functionalities keep running. It is thus easy and simple to build some resilience around a smaller service.
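
A minimal sketch of that containment, assuming a hypothetical recommendation service: if the call fails, the caller returns a degraded fallback instead of failing entirely.

```python
# Sketch: if the (hypothetical) recommendation service is down, the
# caller degrades gracefully instead of taking the whole page down.
def fetch_recommendations(service_call, fallback=()):
    try:
        return service_call()
    except Exception:           # network error, timeout, 5xx, ...
        return list(fallback)   # degraded but functional response

# Two stand-ins for the remote service, for demonstration.
def healthy():
    return ["item-1", "item-2"]

def broken():
    raise ConnectionError("recommendation service unreachable")
```

Real systems often wrap this idea in a circuit breaker so that a failing dependency is not hammered with requests, but the core principle is the same: one failing service yields a reduced experience, not an outage.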

Challenges around microservices for organisations:

One of the biggest challenges an organisation may face with a microservice architecture is troubleshooting a user workflow that cuts across multiple services. It is an issue because there are no stack traces that span services.

Interservice Communication

Microservices have to communicate closely and rely on each other. Thus, a common communication channel has to be established, for example over HTTP or an enterprise service bus (ESB).
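
The bus style can be illustrated with a toy in-process publish/subscribe sketch: services publish events to a topic rather than calling each other directly, which keeps them decoupled. This is only a sketch; a real deployment would use a broker such as RabbitMQ or Kafka:

```python
from collections import defaultdict

class MessageBus:
    """Toy in-process stand-in for a message bus / ESB."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical wiring: the email service reacts to registration events
# without the registration service knowing the email service exists.
bus = MessageBus()
welcome_emails = []
bus.subscribe("user.registered", lambda e: welcome_emails.append(e["email"]))
bus.publish("user.registered", {"email": "ada@example.com"})
```

With direct HTTP calls the registration service would instead invoke the email service's endpoint explicitly; the trade-off is simplicity versus coupling.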

Monitoring Health

Within a microservices system, every module relies on its own code, platform, and APIs, which often requires multiple teams working concurrently on different modules. This demands strong monitoring to effectively track and operate the entire infrastructure; if a service disruption is not identified immediately, it becomes difficult to track down issues when they arise.

There may be many services to monitor, each possibly written in a different programming language, so the monitoring effort multiplies with every new microservice.

Distributed Logging

The logging mechanism differs between services, which results in gigabytes of distributed, unstructured log data.
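
One common way to tame this (a mitigation on my part, not something the pattern mandates) is structured logging: each service emits one JSON object per log line, tagged with the service name, so a log aggregator can query all services uniformly. A Python sketch:

```python
import datetime
import json

def structured_log(service, level, message, **fields):
    """Emit one machine-parseable JSON object per log line."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "service": service,   # lets the aggregator attribute every line
        "level": level,
        "message": message,
        **fields,             # arbitrary context, e.g. user_id, request path
    }
    line = json.dumps(record)
    print(line)
    return line
```

However each team implements its service, agreeing on this one line format turns "gigabytes of unstructured data" into something a tool like the ELK stack can index and search.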

Spanning of Transactions

There is a high chance that a transaction spans multiple databases and services. An issue caused in one place can then lead to a problem somewhere else.
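
A commonly cited mitigation is the saga pattern: each step of a cross-service transaction carries a compensating action, and a failure part-way through triggers the compensations for the steps that already succeeded. A minimal Python sketch (the step/compensation pairs are hypothetical):

```python
def run_saga(steps):
    """steps: list of (action, compensation) callables.

    Runs actions in order; on any failure, runs the compensations of the
    already-completed steps in reverse order and reports failure.
    """
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):
            compensation()
        return False
    return True
```

This gives eventual consistency rather than the atomicity of a single-database transaction, which is exactly the trade-off the challenge above describes.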

Determining Root Cause

Microservices require extra effort to find the root cause of a problem, because logic and data are distributed. Performance-related root causes can still be worked out and managed using tools like New Relic and Dynatrace.

Overlapping dependencies between services

It is very difficult to reproduce a problem that disappears in one version and comes back in a newer one.

Testing can be complex

Testing in a microservices architecture is not simple. Each service has its own dependencies, and as features are built, additional dependencies emerge. Monitoring these changes becomes difficult, and as the number of services grows, so does the complexity. A fault can be an error in a database, latency within the network, or the unavailability of a service. A microservices architecture should be resilient enough to handle such faults; as a result, resiliency testing is a must.
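
As a small taste of resiliency testing, the sketch below (all names hypothetical) exercises a retry wrapper against a test double that is briefly unavailable, the kind of transient fault a resilient caller must survive:

```python
import time

def call_with_retries(fn, attempts=3, delay=0.0):
    """Retry a flaky call a few times before giving up."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error

class FlakyService:
    """Fails the first `failures` calls, then succeeds - a test double
    standing in for a briefly unavailable dependency."""
    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    def __call__(self):
        self.calls += 1
        if self.calls <= self.failures:
            raise ConnectionError("service temporarily unavailable")
        return "ok"
```

Resiliency test suites (chaos engineering tools such as Chaos Monkey take this to production scale) assert both sides: the caller survives transient faults, and it fails cleanly when a dependency stays down.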

The way Ahead

For enterprise applications, the microservices architectural style is one worth serious consideration. A monolithic architecture is useful for simple and lightweight applications, but maintaining a complex monolithic application becomes a nightmare for any organization. Despite the drawbacks and implementation challenges of the microservices pattern, it is the better choice for complex and evolving applications.