
Node.js and zero servers

This post was originally published on the Torii blog

At Torii, we’ve been running Node.js + Serverless in production for the past 4 years. Here’s a summary of why we decided on this stack for our startup and what we’ve learned.

Deciding on a tech stack

Deciding on the technology stack when founding a new startup is a bit of an art. We listed our requirements for a good tech stack and looked for the best fit:

  • Full-stack friendly. A cross-platform language that allows our developers to be truly impactful.
  • Ecosystem. A forward-looking stack, one that is on the rise and will stay relevant for at least the next few years.
  • Productivity. Minimal operations, letting us focus on the product.

With that in mind, JavaScript and Node.js were chosen first. This allows developers to become JS experts and work on both the backend and the frontend. The Node.js ecosystem is huge and immense talent is available on the market. We also chose React for the frontend, but that’s for another post.

For deployments, we decided to leap over the Docker/Kubernetes space and bet on serverless computing. I say bet, because we hadn’t heard of many companies running everything on serverless functions. The promise is a highly available, infinitely scalable and secure environment without the hassle of time-consuming operations.

We chose AWS as it pioneered the FaaS (Function as a Service) space and provides all the required building blocks: functions (AWS Lambda), HTTP gateway (API Gateway) and background scheduling (CloudWatch Events).

Use case 1: Node.js + Serverless for background jobs

Our first use case is running background jobs. We need to run jobs to send scheduled emails, sync with external APIs and run analysis on our data.

For background jobs, Node.js and FaaS make a lot of sense. They let us define a function that runs on a schedule or on demand. Since we use AWS, we can configure any function to run on a cron-like schedule initiated by AWS.

Let’s take an example: we’d like to send emails to customers every day at 10pm with a digest of what’s new in their account. We define a simple function:

// the handler receives the Lambda event and context
const handler = async (event, context) => {
  const { customer } = event
  const updates = await getUpdates({ customer }) // fetch what changed today
  await mailer.sendDigest({ updates })           // email the digest
  return { success: true }
}

module.exports = { handler }

Define a serverless.yml file:

functions:
  sendDigest:
    handler: index.handler
    events:
      - schedule: cron(0 22 * * ? *) # every day at 10pm (UTC)

And that’s it. Our function will run every day at 10pm once deployed to AWS (a simple serverless deploy). Quick and easy.

Use case 2: Node.js + Serverless for API servers (hapi-front)

Our second use case is providing an API server serving the web app clients.

Building an API server on serverless sounds less intuitive. However, an API server can be seen as a simple function. It gets the HTTP request as input, processes it, and returns the HTTP response as a result.

What we do is use a regular Node.js server (we use the hapi.js framework, which is similar to Express but, we find, better suited to large projects) and package it into a Lambda function.

Instead of running this function on a schedule, we run it every time an HTTP request is sent to our server. We use an AWS API Gateway to listen for HTTP requests and forward them to our API server function which is wrapped by hapi-front, an open-source thin layer on top of hapi.js.

hapi-front (https://github.com/toriihq/hapi-front) is a thin layer that translates API Gateway events into hapi.js network requests, and hapi.js network responses back into API Gateway responses.
hapi.js provides a feature for injecting requests, which is mostly used for implementing unit and integration tests. We’ve hooked into that functionality and inject a network request whenever API Gateway provides an event. This allows us to run the server locally as usual, without any change, while using hapi-front when deploying to AWS.
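The core idea fits in a few lines. Here’s a simplified sketch of that translation (not the actual hapi-front source; the init helper is an assumption, and the event fields shown assume the API Gateway Lambda proxy format):

// a simplified sketch of the hapi-front idea, not the library's actual source
const { init } = require('./server') // assumed helper returning a configured hapi server

exports.handler = async (event) => {
  const server = await init()

  // translate the API Gateway proxy event into an injected hapi request
  const response = await server.inject({
    method: event.httpMethod,
    url: event.path, // query string handling omitted for brevity
    headers: event.headers,
    payload: event.body
  })

  // translate the injected hapi response back into an API Gateway response
  return {
    statusCode: response.statusCode,
    headers: response.headers,
    body: response.payload
  }
}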

Node.js: serverless vs. actual servers

Node.js is based on an event loop running on the main thread, with async operations running in the background. This means that while the Node.js server is sending data to one client in the background, another event can start running on the main thread.

With serverless, we get one “server” per client request. Yes, you get a full Node.js server to handle a single request. During async operations, like reading from the database, the event loop might be idle, yet the server won’t accept any more requests. It seems like we’re losing Node’s advantage of processing multiple requests on a single server.

So if we get 500 parallel requests, we’ll have 500 function invocations where each one handles a single request. The scaling is done via the cloud provider’s capabilities and is not based on the ability of Node.js to process many requests in parallel.

Another difference is that with a persistent “regular” server, we can respond to the request first and finish the work afterwards:

// on a persistent server: respond immediately, keep working in the background
res.send(200)
await doSomethingThatTakesTime(options)

With serverless, once we return the response, the function stops running. To achieve the same effect, we’ll need to call a background Lambda to complete the work.

// on serverless: first hand the slow work to a background Lambda (simplified call)
await lambda.invoke('doSomethingThatTakesTime', options)
res.send(200)

Myths about serverless

Serverless is only for playing around or “gluing” code paths. While serverless computing is great for bridging the work of other servers, it can definitely be used as the main infrastructure itself. Our product serves hundreds of customers with no persistent server. The serverless offerings from AWS and other cloud providers are front and center, and they keep improving all the time.

Serverless is slow because of cold starts. A cold start happens when a function is called and no serverless container is up yet to handle the request. Setting up the container takes a bit of time, but subsequent requests are as fast as they would be on a regular server. In practice, cold starts make up a minority of requests and do not cause a significant slowdown of the product.

Serverless costs more. On one hand, managed infrastructure like serverless costs more; on the other, you only pay for what you use. If your service requires high availability and the ability to scale, the costs may be comparable. Once you factor in the reduced devops time, serverless may even be cheaper than running and maintaining servers.

Serverless can’t run locally. Since a serverless function is just a function, it is simple to run locally: we just run the function. That’s basically all it takes; you don’t even need a framework, as a simple node index.js is equivalent to running the function on AWS.
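For example, assuming index.js exports the digest handler from earlier, a local run can be as small as this (the file name and test event are made up for illustration):

// run-local.js — a minimal sketch for invoking the handler locally
const { handler } = require('./index')

handler({ customer: { id: 'demo-customer' } }, {})
  .then((result) => console.log('done:', result))
  .catch((error) => console.error('failed:', error))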

Limits: non-obvious benefits

Beyond the obvious benefits of serverless, such as reduced operations, pay-as-you-go pricing, scaling and high availability, there are additional benefits that result from its limitations.

Lambda functions have limits: on the amount of data they can consume and return, on maximum run time, on maximum memory allocation, and more. While these boundaries may seem limiting, they actually drive us to write better and more robust code. Here are some examples:

  • Timeout limit → Time optimization. Functions are limited to 15 minutes (at the moment) and are stopped when they reach the timeout. When we see functions approaching the timeout, we inspect and optimize the code. We’ve even seen a developer cut a function’s run time from 10 minutes to 5 seconds(!) after examining a non-optimized npm dependency.
  • Timeout limit → Continuability. Some workloads need more time than a single function run allows. In those cases, we make the code “continuable”, so it can pick up from where it stopped by running the function again (a rough sketch follows this list). This virtually removes any limit on run time, with the added benefit of letting us continue after errors without losing work already completed.
  • Memory limit → Resource optimization. When we run into a memory issue, we optimize the code instead of instantly opting for larger machines. Instead of loading all the data into memory for a computation, we load only the needed working set and release what’s already been computed.
  • Data limit → Better utilization of resources. There’s a limit on how much data Lambda functions can accept and return (10MB at the moment). We hit a brick wall when users tried to upload or download large files. The solution was to have users work directly with S3, the service that stores the files; AWS keeps this secure by letting us sign requests to S3 (also sketched below). Users upload and download files faster, and the limit is gone.
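To make the continuable idea concrete, here is a rough sketch of the pattern (loadItems and processItem are hypothetical helpers, not our actual code). The function watches the remaining time via the Lambda context and re-invokes itself with a cursor before the timeout hits:

const AWS = require('aws-sdk')
const lambda = new AWS.Lambda()

exports.handler = async (event, context) => {
  let cursor = event.cursor || 0
  const items = await loadItems({ from: cursor }) // hypothetical helper

  for (const item of items) {
    // leave a safety margin to hand off cleanly before the hard timeout
    if (context.getRemainingTimeInMillis() < 30 * 1000) {
      await lambda.invoke({
        FunctionName: context.functionName, // re-invoke this same function
        InvocationType: 'Event',            // asynchronous, fire-and-forget
        Payload: JSON.stringify({ cursor })
      }).promise()
      return { continued: true, cursor }
    }

    await processItem(item) // hypothetical helper
    cursor += 1
  }

  return { done: true }
}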
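And for the file limits, a pre-signed S3 URL is enough to take Lambda out of the data path. A sketch of the upload side (the bucket and key naming are made up):

const AWS = require('aws-sdk')
const s3 = new AWS.S3()

exports.handler = async (event) => {
  // a short-lived URL the client can PUT the file to directly,
  // so the upload never passes through the Lambda function
  const url = s3.getSignedUrl('putObject', {
    Bucket: 'my-uploads-bucket',      // assumed bucket name
    Key: `uploads/${event.fileName}`, // assumed key scheme
    Expires: 60                       // URL validity in seconds
  })

  return { statusCode: 200, body: JSON.stringify({ url }) }
}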

Optimizing for serverless

It is possible to optimize the system to produce a small package that loads fast, even on cold starts. As we all know, npm dependencies are often quite large, and packaging them all increases the size of the package AWS needs to load when invoking a function.

Our first optimization was to make our functions’ scope smaller. Each function should handle one thing (and do it well), requiring only the necessary dependencies. This is done by packaging every function individually, without including dependencies used by other functions.

We use webpack to achieve this. Beyond bundling our code and letting us use the latest JavaScript features (via babel), we get the benefit of webpack’s analysis of which dependencies each function actually imports. This lets us package each function individually; while it adds time to the build step, we get faster-loading functions in return.
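A simplified version of such a config might look like this (entry names and paths are illustrative, not our actual layout; in practice a plugin like serverless-webpack can drive this per function):

// webpack.config.js — a simplified sketch; entries and paths are illustrative
const path = require('path')

module.exports = {
  target: 'node',     // bundle for the Node.js Lambda runtime
  mode: 'production',
  entry: {
    // one entry per function, so each bundle only includes what it imports
    sendDigest: './src/functions/sendDigest.js',
    syncExternalApi: './src/functions/syncExternalApi.js'
  },
  output: {
    path: path.resolve(__dirname, '.webpack'),
    filename: '[name]/index.js',
    libraryTarget: 'commonjs2' // so Lambda can require() the exported handler
  },
  externals: ['aws-sdk'] // already present in the Lambda runtime, no need to bundle
}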

Another optimization, for the API server, is having just one function handle all endpoints. While each endpoint could stand alone as its own function, a unified function reduces the number of cold starts, and users enjoy a faster experience.
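With the Serverless Framework, routing every path and method to that one function is a short declaration (the function and handler names here are illustrative):

functions:
  api:
    handler: server.handler
    events:
      - http:
          path: /{proxy+} # catch-all path
          method: any     # catch-all HTTP method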

Future of Node.js API servers on serverless

What would an API server framework look like if it were built entirely for serverless? A forward-looking framework could have:

  • Lazy loading of code. Similar to techniques used in the browser, it could load code only when it’s needed, resulting in reduced cold-start times and quicker responses.
  • Lightweight. No networking code at all; just data passing through. It would handle routing, authentication and validation.
  • Background jobs. Out-of-the-box support for running background Lambdas for long-running tasks.

Conclusion

Without a doubt, we suggest that every new project start with serverless computing. You can opt out if you reach its limits, and keeping the code decoupled from the runtime environment makes that possible with close to zero effort.

Starting out is simple and the ecosystem of tools for serverless helps with all common needs.


Node.js and zero servers was originally published in The Startup on Medium.

