
A Look at Experimental Features in Node.js

For Node.js to grow and evolve, its contributors need to keep researching, experimenting and adding new features. Most of the significant features that are now part of the Node.js core started out as experimental.

Stability: Experimental

For an experimental feature to become stable and be officially supported, it first needs to be tested extensively over a period of time, to make sure it works well and adds value.

Stability: Stable

Many experimental features do reach the stable phase: HTTP/2, for example, first landed as an experimental feature in Node v8.4.0 and became stable in Node v10. Other experimental features, however, end up deprecated.

Stability: Deprecated

Some of the most relevant experimental features at the moment are:

Worker Threads

This module enables the use of threads that execute JS code in parallel.

To access it:

const worker = require('worker_threads');

Why is this useful? To get better performance on CPU-intensive JavaScript operations.

Node.js is single threaded by nature because of its asynchronous event model. When a Node.js process is launched, it runs a single process with a single thread on a single core. Code is basically not executed in parallel; only I/O operations (not CPU operations) run in parallel, because they are executed asynchronously.

Following this idea, Worker Threads will not help much with I/O-intensive work, because asynchronous I/O operations are more efficient than Workers can be. With this experimental feature, the Node.js contributors are looking to improve performance for CPU-intensive operations.

Regarding memory: unlike child_process or cluster, worker_threads can share memory, either by transferring ArrayBuffer instances or by sharing SharedArrayBuffer instances.
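
For instance, here is a minimal sketch (not taken from the example below) of sharing a SharedArrayBuffer between the main thread and a Worker; the buffer is shared rather than copied, so a write made in the Worker is visible to the parent:

    const { Worker, isMainThread, workerData } = require('worker_threads');

    if (isMainThread) {
      // 4 bytes of memory that both threads can see without copying.
      const shared = new SharedArrayBuffer(4);
      const view = new Int32Array(shared);
      const worker = new Worker(__filename, { workerData: shared });
      worker.on('exit', () => {
        console.log(view[0]); // 42, written by the Worker below
      });
    } else {
      const view = new Int32Array(workerData);
      Atomics.store(view, 0, 42); // write into the shared memory
    }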

The following example creates a Worker thread for each parse() call.

    const {
      Worker, isMainThread, parentPort, workerData
    } = require('worker_threads');

    if (isMainThread) {
      module.exports = function parseJSAsync(script) {
        return new Promise((resolve, reject) => {
          const worker = new Worker(__filename, {
            workerData: script
          });
          worker.on('message', resolve);
          worker.on('error', reject);
          worker.on('exit', (code) => {
            if (code !== 0)
              reject(new Error(`Worker stopped with exit code ${code}`));
          });
        });
      };
    } else {
      const { parse } = require('some-js-parsing-library');
      const script = workerData;
      parentPort.postMessage(parse(script));
    }

The example requires the following from worker_threads:

  • Worker: the class that represents an independent JavaScript execution thread.
  • isMainThread: a boolean that is true if the code is not running inside of a Worker thread.
  • parentPort: the MessagePort allowing communication with the parent thread, if this thread was spawned as a Worker.
  • workerData: An arbitrary JavaScript value that contains a clone of the data passed to this thread’s Worker constructor.

In actual practice for these kinds of tasks, use a pool of Workers instead. Otherwise, the overhead of creating Workers would likely exceed their benefit.
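
As a rough usage sketch, assuming the module above is saved as parse-async.js (a hypothetical file name) and that some-js-parsing-library is installed, the exported function could be called like this:

    // Hypothetical usage of the parseJSAsync module shown above.
    const parseJSAsync = require('./parse-async.js');

    parseJSAsync('const answer = 42;')
      .then((ast) => console.log(ast))
      .catch((err) => console.error(err));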

Performance Hooks

The Performance Timing API provides an implementation of the W3C Performance Timeline specification (the same Performance API as implemented in modern Web browsers).

To access it:

const { performance } = require('perf_hooks');

The purpose of this experimental feature is to support a collection of high-resolution performance metrics, by providing methods to store and retrieve that metric data.

Why is this useful? Because what can be measured can be improved. Accurately measuring the performance characteristics of web applications is an important part of making them faster. The specification defines the necessary Performance Timeline primitives that enable web developers to access, instrument, and retrieve various performance metrics from the full lifecycle of a web application.

With this API it is possible to measure, among other things, the duration of async operations and how long it takes to load dependencies.

The following example measures the time performance of an operation.

    const { PerformanceObserver, performance } = require('perf_hooks');

    const obs = new PerformanceObserver((items) => {
      console.log(items.getEntries()[0].duration);
      performance.clearMarks();
    });
    obs.observe({ entryTypes: ['measure'] });

    performance.mark('A');
    doSomeLongRunningProcess(() => {
      performance.mark('B');
      performance.measure('A to B', 'A', 'B');
    }); 

The example above imports performance and PerformanceObserver, and it measures the number of milliseconds elapsed between the start mark (A in this case) and the end mark (B).

The performance object creates the performance timeline, and PerformanceObserver objects provide notifications when new PerformanceEntry instances have been added to it. In other words, every time there is a new entry in the timeline, the observer notifies the user. Keep in mind, however, that observer instances introduce their own performance overhead, so they should be disconnected as soon as they are no longer needed rather than left subscribed to notifications indefinitely.
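
As an additional sketch (not from the original example), the same pattern can time how long it takes to load a dependency, disconnecting the observer once the measurement has been read:

    const { PerformanceObserver, performance } = require('perf_hooks');

    const obs = new PerformanceObserver((items, observer) => {
      for (const entry of items.getEntries()) {
        console.log(`${entry.name}: ${entry.duration}ms`);
      }
      performance.clearMarks();
      observer.disconnect(); // stop observing once the data has been read
    });
    obs.observe({ entryTypes: ['measure'] });

    performance.mark('require-start');
    require('http'); // the dependency whose load time is being measured
    performance.mark('require-end');
    performance.measure('require http', 'require-start', 'require-end');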

Diagnostic Report

This feature delivers a JSON-formatted diagnostic summary, written to a file, for development, test and production use, to capture and preserve information for problem determination.

It includes JavaScript and native stack traces, heap statistics, platform information, resource usage, etc.

To enable the diagnostic report use the flag: node --experimental-report.

With the report option enabled, diagnostic reports can be triggered on unhandled exceptions, fatal errors and user signals, in addition to being triggered programmatically through API calls.

The following example is a piece of a report generated on an uncaught exception.

    {
      "header": {
        "event": "exception",
        "trigger": "Exception",
        "filename": "report.20181221.005011.8974.001.json",
        "dumpEventTime": "2018-12-21T00:50:11Z",
        "dumpEventTimeStamp": "1545371411331",
        "processId": 8974,
        "commandLine": [
          "/home/nodeuser/project/node/out/Release/node",
          "--experimental-report",
          "--diagnostic-report-uncaught-exception",
          "/home/nodeuser/project/node/test/report/test-exception.js",
          "child"
        ],
        "nodejsVersion": "v12.0.0-pre",
        "release": {
          "name": "node"
        },
      },
      "javascriptStack": {
        "message": "Error: *** test-exception.js: throwing uncaught Error",
        "stack": [
          "at myException (/home/nodeuser/project/node/test/report/test-exception.js:9:11)",
          "at Object.<anonymous> (/home/nodeuser/project/node/test/report/test-exception.js:12:3)",
          "at Module._compile (internal/modules/cjs/loader.js:718:30)",
          "at Object.Module._extensions..js (internal/modules/cjs/loader.js:729:10)",
          "at Module.load (internal/modules/cjs/loader.js:617:32)",
          "at tryModuleLoad (internal/modules/cjs/loader.js:560:12)",
          "at Function.Module._load (internal/modules/cjs/loader.js:552:3)",
          "at Function.Module.runMain (internal/modules/cjs/loader.js:771:12)",
          "at executeUserCode (internal/bootstrap/node.js:332:15)"
         ]
      },
    "javascriptHeap": {
      "totalMemory": 6127616,
      "totalCommittedMemory": 4357352,
      "usedMemory": 3221136,
      "availableMemory": 1521370240,
      "memoryLimit": 1526909922,
      "heapSpaces": {
        "read_only_space": {
          "memorySize": 524288,
          "committedMemory": 39208,
          "capacity": 515584,
          "used": 30504,
          "available": 485080
        },
       }
     },
    "resourceUsage": {
      "userCpuSeconds": 0.069595,
      "kernelCpuSeconds": 0.019163,
      "cpuConsumptionPercent": 0.000000,
      "maxRss": 18079744,
    },
    "environmentVariables": {
      "REMOTEHOST": "REMOVED",
      "MANPATH": "/opt/rh/devtoolset-3/root/usr/share/man:",
      "XDG_SESSION_ID": "66126",
      "HOSTNAME": "test_machine",
      "HOST": "test_machine",
      "TERM": "xterm-256color",
     },
    }

A full example report can be found in the Node.js documentation.

Usage

A report can be triggered via an API call from a JavaScript application:

process.report.triggerReport();

It’s possible to specify the fileName of the report by passing it as an argument:

process.report.triggerReport('fileName.json');

And it can be also used to handle errors with the additional argument err. This allows the report to include the location of the original error as well as where it was handled.

    try {
      process.chdir('/non-existent-path');
    } catch (err) {
      process.report.triggerReport(err);
    }

To include both the error and the fileName, the err should be the second parameter.

    try {
      process.chdir('/non-existent-path');
    } catch (err) {
      process.report.triggerReport('fileName.json', err);
    }

To use the report flags, instead of an API call from a JavaScript app, you can execute:

$ node --experimental-report --diagnostic-report-uncaught-exception \
      --diagnostic-report-on-signal --diagnostic-report-on-fatalerror app.js

Where:

  • --experimental-report enables the diagnostic report feature. In the absence of this flag, use of all the other related options will result in an error.
  • --diagnostic-report-uncaught-exception enables a report to be generated on uncaught exceptions. Useful when inspecting the JavaScript stack in conjunction with the native stack and other runtime environment data.
  • --diagnostic-report-on-signal enables a report to be generated when the running Node.js process receives the specified (or predefined) signal.

In conclusion, this experimental feature gives the user a JSON file with a complete and extensive report about an application's diagnostics: errors, memory usage, stack traces and more.

Policies

This experimental feature allows creating policies on loading code.

Policies are a security feature intended to allow guarantees about what code Node.js is able to load. Why is this useful? Because it lets you constrain which code your application runs. Note that the use of policies assumes safe practices for the policy files themselves, such as using file permissions to ensure the Node.js application cannot overwrite them.
A best practice is to make the policy manifest read-only for the running Node.js application, so the application cannot change the file in any way.

Usage

For enabling policies when loading modules, you can use the --experimental-policy flag.
Once this has been set, all modules must conform to a policy manifest file passed to the flag:

$ node --experimental-policy=policy.json app.js

The policy manifest will be used to enforce constraints on code loaded by Node.js.
Policies have two main features: error behavior (an error is thrown when a policy check fails) and integrity checks (an error is thrown if any loaded resource does not match the integrity entry listed for it in the specified policy manifest).
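
For illustration, a minimal policy.json could look like the sketch below; the file path and integrity value are placeholders (the integrity field takes a Subresource Integrity string computed from the file's contents):

    {
      "resources": {
        "./app.js": {
          "integrity": "sha384-<base64-encoded hash of app.js>"
        }
      }
    }

If app.js is later modified so that it no longer matches the recorded hash, the integrity check fails and Node.js will refuse to load it.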

ECMAScript Modules

Node.js contains support for ES Modules based upon the Node.js Enhancement Proposal (EP) for ES Modules.

The purpose of the EP is to allow a common module syntax for browser and server and a standard set of context variables for both. For example, in the browser the syntax to import a file or module is import, while on the server it's require, and there are notable differences between them that need to be taken into account, such as NODE_PATH, require.extensions and require.cache (which are not used by import).
Not all features of the EP are complete; they will land as both VM support and the implementation are ready. Error messages are still being polished.

Usage

For enabling features for loading ESM modules, you can use the --experimental-modules flag. Once this has been set, files ending with .mjs will be able to be loaded as ES Modules.

$ node --experimental-modules my-app.mjs

The features are divided into supported and unsupported.

Supported: Only the CLI argument for the main entry point to the program can be an entry point into an ESM graph. Dynamic import can also be used to create entry points into ESM graphs at runtime.

  • import.meta: the import.meta metaproperty is an Object that contains the URL of the module (see the sketch below).
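
A tiny sketch, assuming a hypothetical ES Module file foo.mjs run with the flag above:

    // foo.mjs (hypothetical file name)
    // import.meta.url contains the URL of the current module.
    console.log(import.meta.url); // e.g. file:///path/to/foo.mjs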

Unsupported: require('./foo.mjs'), because ES Modules have differing resolution and timing; use dynamic import instead.
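
A hedged sketch of the dynamic-import alternative, assuming a local ES Module ./foo.mjs with a default export:

    // index.js (CommonJS): require('./foo.mjs') would fail,
    // but dynamic import() can load the ES Module at runtime.
    import('./foo.mjs')
      .then((mod) => console.log(mod.default))
      .catch((err) => console.error('failed to load ES Module:', err));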

Conclusion

In conclusion, there are exciting projects and features that the Node.js collaborators are working on. In this blog post we highlighted Worker Threads, Performance Hooks, Diagnostic Report, Policies and ECMAScript Modules. These experimental features could land in a stable Node.js version soon, and the organization would appreciate it if you collaborated on or tested some of these features.
