Serverless microservices aren't exactly new.
But they are becoming more and more relevant, especially as technology leaders look for ways to make their software more performant, more cost-effective, and easier to release.
If you're considering using them for your application but you're still on the fence, that's understandable. These architectures shine in a particular set of circumstances, and they come with pitfalls and due considerations of their own.
At Lineate, we've worked with serverless microservices for years, and we've developed a pretty good handle on when they should be used, when they shouldn't, and what kind of things to look out for if you're actually going to implement them — which more and more companies are choosing to do. So, in this 5-6 minute read, we'll dig into all of the above.
For the uninitiated: Serverless microservices are fundamentally similar to traditional microservices. The same trade-offs and architectural considerations apply — but being serverless, they're decoupled from the constraints on rapid scalability and infrastructure maintenance that usually come from deploying large numbers of traditional microservices.
Microservices allow each of your teams to update their subset of the larger application without conflicting with one another during testing and deployment. Serverless makes it easy to automate and right-size the underlying compute, which can add up to tremendous cost savings.
Plus, you combine the on-demand horizontal scalability of microservices with the easy auto-scaling of serverless. That means there's no need to provision and maintain dedicated hardware or virtual machine instances. And when you want to upgrade, tweak, or maintain your applications, serverless microservices mean your builds are less likely to break when you make changes.
Whether serverless microservices are the right fit is a tricky question to answer straight out; there are a ton of interrelated factors to consider when designing or redesigning applications and architecture. So, here's a cheat sheet to see if they're appropriate for your particular use case.
You have a legacy app with scalability issues
Carving out feature sets into discrete services and enabling serverless infrastructure will be the least disruptive course of action and help you get some solid value quickly.
You have a legacy app with a large dev team
This will enable your developers to have higher throughput without being blocked by one another.
You run many intensive, relatively short-running batch jobs (reporting, for example)
It'll let you offload your intensive, expensive work to dedicated hardware so as not to interfere with the system's performance for other users.
You are processing a very large amount of data that takes hours
Using an event-driven orchestrator pattern, you can dramatically simplify the maintenance and configuration of your long-running batch processes. Keep in mind that serverless functions are not the right choice for executing these processes themselves, but they are perfect for responding to events and launching the appropriate processes as a result.
You have infrequent spikes in workload
Depending on the size of your dataset, consider using an event-driven orchestrator pattern.
You are processing a medium-sized data set in near real time
It depends heavily on the size of the data set, as serverless functions should take a few minutes at the very most. If processing takes longer than that, consider an orchestrator pattern.
You have many discrete feature sets with a high level of communication overhead between them
If you have a large number of functions and a very low latency requirement, one of the first optimizations you'll want to make is removing any sort of network or disk-based communication between those functions. Doing this immediately rules out a system based on interacting serverless functions as a solution.
You're building a greenfield app
It'll be fastest to validate your app/MVP if you build it as a monolith, keeping in mind future states.
When you want to crunch huge amounts of data within long-running processes, serverless functions are generally not what you want to go for.
That said, they can act as useful orchestrators in those situations, in which case you'd be taking an event-driven approach to your data processing. That is to say: an event triggers the initial processing of a large batch of data by invoking a serverless function, which kicks off a batch process that actually handles the work. As that process completes, it can invoke one or more additional serverless functions to orchestrate downstream processes.
This type of event-driven architecture, which is a natural fit for serverless functions, is much more robust and much less error-prone than systems that rely on a series of independently scheduled jobs to process data through a workflow. In the latter case, you're having to juggle time buffers between output-dependent jobs — which is headache-inducing in the best of times.
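To make the pattern concrete, here's a minimal sketch in Python. The names (`start_batch`, `on_batch_complete`, the event fields, and the fake job launcher) are all hypothetical, not part of any particular cloud provider's API; in practice the launcher would call a real batch service such as AWS Batch or a Kubernetes Job. The point is that each serverless function only reacts to an event and kicks off the heavy work elsewhere, returning immediately:

```python
def start_batch(event, launch_job):
    # Stand-in for a serverless function triggered by an event such as
    # "a new data file arrived". It does no heavy lifting itself; it
    # only launches the batch process that will do the work.
    job_id = launch_job(kind="process-raw-data", source=event["source"])
    return {"status": "started", "job_id": job_id}

def on_batch_complete(event, launch_job):
    # Stand-in for a serverless function triggered when the batch
    # process reports completion. It launches the appropriate
    # downstream step based on the outcome.
    if event["outcome"] == "success":
        job_id = launch_job(kind="build-report", source=event["job_id"])
        return {"status": "started", "job_id": job_id}
    return {"status": "failed", "job_id": event["job_id"]}

# A fake launcher standing in for a real batch service; it just
# records what would have been run and hands back a job id.
launched = []
def fake_launch(**job):
    launched.append(job)
    return f"job-{len(launched)}"

first = start_batch({"source": "raw-data-upload"}, fake_launch)
second = on_batch_complete(
    {"outcome": "success", "job_id": first["job_id"]}, fake_launch
)
```

Notice that neither function waits on the batch work; the completion event, not a scheduler's time buffer, is what drives the next step of the workflow.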
As with microservices in general, you've always got to account for the higher latency of communication between services, compared with components running within a single application instance.
This is a classic problem of right-sized granularity in the design of your architecture. If anything, when deploying on serverless, you've got to pay extra attention to this, particularly with respect to the cold start latency of serverless systems.
Next, and just being real here: migration from one architecture to another is always a hassle. But migrating from a monolithic architecture to a microservice architecture can be more forgiving, because it allows for a highly iterative approach. That is to say, you can carve portions of the legacy architecture out into microservices piecemeal, so a big-bang deployment isn't necessary.
This allows you to start harnessing the benefits from the architectural change early — and it allows your engineering teams to significantly reduce the overall risk of migration. After all, it's much better to fail a little bit early on than it is to fail spectacularly at the end of many months of work.
Put simply: serverless microservices are growth-friendly. The computational flexibility and the baked-in scalability are understandably appealing to companies looking to continuously evolve and build out their systems.
But there's more to it: there's a growing mass of third-party services that can be incorporated into a system without having to build a new commodity service from scratch.
As serverless microservices become more user-friendly, less technically adept users will be able to leverage a mass of computational resources to solve problems they otherwise couldn't. Think, for example, of computationally intensive analytic functions built into spreadsheets that actually perform their computation across multiple concurrent invocations of a serverless microservice, all while being presented to the end user as just another function they can use directly within the sheet they're working with.
Ultimately, these are the major reasons we at Lineate often choose to implement serverless microservices in our work. And hey, as useful as they can be, sometimes monolithic architecture is the right call. It's all about the unique specifications of the job — something we characteristically obsess over before getting deep into any project.
Want to work with Lineate and find the ideal solution for your business?
Contact us for a quick introductory chat.