Running Your Node.js App With Systemd - Part 2

Okay, you've read the previous blog post, dutifully followed all the instructions, and you can start / stop / restart the hello_env.js application using systemctl. Congratulations, you are on your way to systemd mastery. That said, there are a few things we'd like to change about the setup to make it more production-ready, which means we're going to have to dive a bit deeper into SysAdmin land.

In particular, the production machine you'll be running your application on likely has more than a single CPU core. Node.js is famously single-threaded, so to fully utilize your server's hardware, a good first pass is to run as many Node.js processes as you have cores. For the purposes of this tutorial I'll assume your server has a total of four. We can then accomplish our goal by running four copies of hello_env.js on the server, with each one listening on a different TCP port so they can all coexist peacefully.
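
If you're not sure how many cores your server actually has, the nproc utility from GNU coreutils will tell you:

$ nproc
4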

Of course, you don't want your clients to have to know anything about how many processes you are running, or about multiple ports. They should just see a single HTTP endpoint that they need to connect with. Therefore, we need to accept all the incoming connections in a single place, and then load balance the requests across our pool of processes from there. Fortunately, the freely available (and completely awesome) Nginx does an outstanding job as a load balancer, so we'll configure it for this purpose a bit later.

Configuring systemd to Run Multiple Instances

As it turns out, the systemd authors assumed you might want to run more than one copy of something on a given server. For a given service foo, you'll generally want to create a foo.service file to tell systemd how to manage it. This is exactly what we did in the last blog post. However, if you instead create a file called foo@.service, you are telling systemd that you may want to run more than a single instance of foo. This sounds pretty much just like what we want, so let's rename our service file from before.

$ sudo mv /lib/systemd/system/hello_env.service /lib/systemd/system/hello_env@.service
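
Whenever you rename or edit unit files by hand, you also need to tell systemd to re-read them:

$ sudo systemctl daemon-reload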

Next comes the "interesting" or "neat" part of this modified systemd configuration. With a service file like this, one that can be used to start multiple copies of the same thing, you additionally get to pass the service a variable based on how you invoke it with systemctl. Modify the contents of

/lib/systemd/system/hello_env@.service

to contain the following:

[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target

[Service]
Environment=NODE_PORT=%i
Type=simple
User=chl
ExecStart=/usr/bin/node /home/chl/hello_env.js
Restart=on-failure

[Install]
WantedBy=multi-user.target

The only difference from before is that now, we set:

Environment=NODE_PORT=%i

This lets us set the port our application listens on based on how we start it up: %i expands to the instance name, i.e., whatever you put after the "@" symbol when invoking systemctl. To start four copies of hello_env.js, listening on ports 3001 through 3004, we can do the following:

$ sudo systemctl start hello_env@3001
$ sudo systemctl start hello_env@3002
$ sudo systemctl start hello_env@3003
$ sudo systemctl start hello_env@3004

Or, if you prefer a one-liner, the following should get the job done for you:

$ for port in $(seq 3001 3004); do sudo systemctl start hello_env@$port; done

All of the systemctl commands we saw before (start / stop / restart / enable / disable) still work just as they did previously; you just have to include the port number after the "@" symbol when invoking them.
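
You can check on any individual instance with systemctl status, and since systemctl accepts shell-style glob patterns for loaded units, you can also inspect the whole group at once:

$ sudo systemctl status hello_env@3001
$ sudo systemctl status 'hello_env@*'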

This is not a point to be glossed over. You are now running multiple instances of the exact same service using systemctl. Each one is a unique entity that can be controlled and monitored independently of the others, even though they all share a single, common configuration file. Therefore, if you want all four processes to start when your server boots, you need to run systemctl enable on each of them:

$ sudo systemctl enable hello_env@3001
$ sudo systemctl enable hello_env@3002
$ sudo systemctl enable hello_env@3003
$ sudo systemctl enable hello_env@3004
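
The same seq trick from before works here too, if you'd rather not type four commands:

$ for port in $(seq 3001 3004); do sudo systemctl enable hello_env@$port; done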

Beyond the glob patterns shown above, there is no included tooling that automatically controls all of the related processes as a group, but it's trivial to write a small script to do this if you need it. For example, here's a bash script we could use to stop everything:

#!/bin/bash -e

# Stop every running hello_env instance, one port at a time.
PORTS="3001 3002 3003 3004"

for port in ${PORTS}; do
  systemctl stop hello_env@${port}
done

exit 0

You could save this to a file called stop_hello_env, then make it executable and invoke it with:

$ chmod 755 stop_hello_env
$ sudo ./stop_hello_env
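
If you find yourself wanting start and restart variants too, a small generalization of the same script (a hypothetical hello_env_ctl, not part of the original setup) can take the systemctl verb as its first argument:

#!/bin/bash -e

# Apply one systemctl action to every hello_env instance.
# Usage: sudo ./hello_env_ctl <start|stop|restart>

ACTION="${1:?usage: $0 <start|stop|restart>}"
PORTS="3001 3002 3003 3004"

for port in ${PORTS}; do
  systemctl "${ACTION}" "hello_env@${port}"
done

exit 0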

PLEASE NOTE that there is no requirement that the value after the "@" symbol be an integer or even numeric. We're just using it as a trick to designate the port number we want to listen on, since that's how our app works. We could just as easily have used a string to select different config files, if that was how our app worked. For example, if hello_env.js accepted a --config command line option to specify a config file, we could have created a hello_env@.service file like this:

[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target

[Service]
Type=simple
User=chl
ExecStart=/usr/bin/node /home/chl/hello_env.js --config /home/chl/%i
Restart=on-failure

[Install]
WantedBy=multi-user.target

and then started our instances doing something like:

$ sudo systemctl start hello_env@config1
$ sudo systemctl start hello_env@config2
$ sudo systemctl start hello_env@config3
$ sudo systemctl start hello_env@config4

Assuming that we did in fact have files under /home/chl named config1 through config4, we would achieve the same effect.
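
One caveat worth knowing: instance names can't contain arbitrary characters (a literal "/" is the usual gotcha). If you need something path-like in an instance name, the systemd-escape utility shows what a given string becomes; inside the unit file, %i expands to the escaped form and %I to the unescaped one:

$ systemd-escape 'configs/prod.conf'
configs-prod.conf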


Go ahead and start your four processes up, and try visiting the following URLs to make sure things are working:

http://11.22.33.44:3001
http://11.22.33.44:3002
http://11.22.33.44:3003
http://11.22.33.44:3004

again substituting your server's IP address for 11.22.33.44. You should see very similar output from each one, but the value of NODE_PORT should correctly reflect the port you are connecting to. Assuming things look good, it's on to the final step!
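
If you'd rather check from a shell, curl can hit all four ports in one go (again, substitute your server's IP address):

$ for port in $(seq 3001 3004); do curl -s http://11.22.33.44:$port; done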

Configuring Nginx as a Load Balancer

First, let's install Nginx and remove the default configuration it ships with. On Debian-style systems (Debian, Ubuntu, and Mint are popular examples), you can do this with the following commands:

$ sudo apt-get update
$ sudo apt-get -y install nginx-full
$ sudo rm -fv /etc/nginx/sites-enabled/default

Next we'll create a load balancing configuration file. We have to do this as the root user, so assuming you want to use nano as your text editor, you can create the needed file with:

$ sudo nano /etc/nginx/sites-enabled/hello_env.conf

and put the following into it:

upstream hello_env {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://hello_env;
        proxy_set_header Host $host;
    }
}

Luckily for us, that's really all there is to it. This makes Nginx use its default load-balancing scheme, round-robin. Other schemes are available if you need something different.
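
For example, adding the least_conn directive at the top of the upstream block switches Nginx to least-connections balancing, and ip_hash is another built-in option if you need clients pinned to a particular backend:

upstream hello_env {
    least_conn;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}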

Go ahead and restart Nginx with:

$ sudo systemctl restart nginx

Yes, systemd handles starting / stopping / restarting Nginx as well, using the same tools and semantics.
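
If Nginx ever refuses to start or restart after a configuration change, asking it to test the config will usually point straight at the offending line:

$ sudo nginx -t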

You should now be able to run the following command repeatedly:

$ curl -s http://11.22.33.44

and see the same sort of output you saw in your browser, with the NODE_PORT value cycling through ports 3001 - 3004 in order. If that's what you see, congrats, you're all done! We now have four copies of our application running, load balanced behind Nginx, and Nginx itself listening on the default port 80 so our clients don't have to know or care about the details of the backend setup.
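
If you want to watch the rotation happen, a quick loop helps; assuming the response body includes a NODE_PORT line as it did when you hit the ports directly, something like this should print the four ports in order, twice:

$ for i in $(seq 1 8); do curl -s http://11.22.33.44 | grep NODE_PORT; done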

In Closing

There has probably never been a better or easier time to learn basic Linux system administration. Things such as Amazon's AWS EC2 service mean that you can fire up just about any kind of Linux you might want to, play around with it, and then just delete it when you are done. You can do this for very minimal costs, and you don't run the risk of breaking anything in production when you do.

Learning everything there is to know about systemd is more than any one blog post can reasonably cover, but there is ample documentation online if you want to dig deeper. I have personally found the "systemd for Administrators" blog series a very valuable resource.

I hope you've had fun getting this app up and running!
