A fundamental part of our company culture is to provide engineers the Freedom to Develop. Our view is that the best engineers are the ones who want to acquire a variety of skills and never stop learning. This means finding challenging problems and building an environment of mentorship where programmers never stop developing their skills and growing their expertise. But Freedom to Develop means more than just interesting problems and educational opportunities. It means encouraging engineers to be creative and take risks. When we design software, we design for failure; the more of a safety net we build into our applications, the safer an engineer feels in innovating. The Lineate LunaVision Architecture is built around the precept that systems and hardware sometimes fail, and so do humans. Each component of an application should behave predictably in the face of failure. Serverless architecture is a natural fit for this.
Serverless provides independent units of functionality that are provisioned as discrete entities. This segmented architecture is the preferred deployment model for Lineate applications, whether hosted natively, deployed on the virtualized cloud, or using serverless. The key advantages of serverless are that it forces such a design by default and offloads the infrastructure management of each component to the cloud provider. Treating the infrastructure plumbing as a utility allows us to focus on building valuable features quickly and robustly.
Developing on top of serverless can be a challenge for developers who are used to developing everything as a monolith on their own machines or on a dev server.
Our practice principal, Anton Koval, who has been in key engineering roles at Lineate for 11 years, created a highly detailed five-chapter guide that provides both tips on the implications of decisions made throughout the implementation process and a step-by-step walkthrough from zero to launch.
Anton identifies several important topics in serverless and walks us through implementing them using real-world assumptions. This is not a bare-bones “hello world” program that gets you started, but a guide built on lessons from countless real-world, end-to-end deployments of business applications. In this training guide, we cover:
Lambda functions (or their Google Cloud and Azure equivalents) – design and configuration decisions
Essentially microservices carried to their logical conclusion, lambda functions correspond to specific application functions that are provisioned individually. Such an approach is conducive to building massive scalability and forces an array of best practices that are standard in the Lineate Luna Architecture. But like any technology, it brings complexity in areas such as deployment, initialization, and cold starts, and that complexity can affect the choice of implementation language and other design decisions.
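One widely used way to soften the cold-start cost mentioned above is to do expensive setup once at module load, so warm invocations reuse it. The sketch below illustrates that pattern under our own assumptions; the `load_config` and `handler` names are ours, not from the guide.

```python
import json
import time

def load_config():
    """Stand-in for expensive initialization (e.g. reading secrets,
    creating SDK clients, opening connection pools)."""
    time.sleep(0.01)  # simulate slow setup work
    return {"greeting": "Hello"}

# Runs once per container, at import time (the "cold start");
# every warm invocation of the same container reuses this object.
CONFIG = load_config()

def handler(event, context):
    """Per-request entry point; reuses the cached CONFIG."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"{CONFIG['greeting']}, {name}"}),
    }
```

Because the initialization cost is paid once per container rather than once per request, heavier setup pushes harder toward runtimes with fast startup, which is one way cold starts feed back into language choice.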
Provisioning the database for real-world use
It is possible to install databases directly on top of a virtualized cloud instance (e.g. Amazon EC2), but the underlying volatility of virtualized storage often makes this a poor choice. So we generally recommend using a cloud-native “serverless” data storage implementation (such as Amazon RDS). In theory, the cloud provider takes care of managing the hardware and software behind the scenes. But databases are complex, and correctly configuring, securing, and managing a production serverless database requires knowledge and care.
Effective management using an API gateway
Exposing the functionality and data to a web application means exposing it to the world. An API gateway enables us to carefully control the authentication and authorization around the services, as well as to control and monitor their usage. We discuss how to design one securely while managing development sandboxes to allow a team of developers flexibility to iterate quickly and deploy new functionality without interfering with each other.
Automated deployment and instrumentation
Serverless is especially well suited to continuous integration and continuous deployment (CI/CD). If a new change passes automated testing and any other checks that are set up, it is easy to deploy it system-wide in production (and just as easy to roll it back). We go through some best practices for configuring hardware resources, logs, tracing, parallelization, and deployment automation in a serverless world.
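On the logging and tracing side, a technique that works well across many small functions is emitting structured JSON log lines tagged with a per-request correlation id, so a log aggregator can stitch one request's path back together. The sketch below is our own illustration of that idea; the field names and the `handle_request` flow are assumptions, not the guide's implementation.

```python
import json
import logging
import sys
import uuid

# One JSON object per log line is easy for aggregators
# (CloudWatch, Datadog, etc.) to parse and filter.
logger = logging.getLogger("demo-service")
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.INFO)

def log_event(correlation_id, stage, **fields):
    """Emit one structured log record as a single JSON line."""
    record = {"correlation_id": correlation_id, "stage": stage, **fields}
    logger.info(json.dumps(record))
    return record  # returned to make the helper easy to test

def handle_request(event):
    """Hypothetical handler that tags all of its logs with one id,
    reusing an upstream id when a caller already assigned one."""
    correlation_id = event.get("correlation_id") or str(uuid.uuid4())
    log_event(correlation_id, "received", path=event.get("path"))
    # ... business logic would run here ...
    log_event(correlation_id, "completed", status=200)
    return {"statusCode": 200, "correlation_id": correlation_id}
```

Passing the same id along when one function invokes another is what lets a trace cross service boundaries, which is where this pays off most in a heavily segmented serverless system.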
The document is highly technical but is built on the real-world experience (and missteps!) of dozens of senior engineers. We hope it can be of use to the engineering community, and welcome any feedback or refinements.
If you have any questions or thoughts, feel free to reach out to us!