NodeSource brings C++ API to N|Solid!
NodeSource is excited to announce the C++ API - Beta! With this new API, you will be able to use all the features and power of N|Solid with your own C++ code.
In case you missed it, NodeSource also launched the JS API, which gives you programmatic access to all the metrics and functionality of the N|Solid Console from your own JavaScript code.
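To give a feel for it, here is a minimal sketch of the JS API in use. The nsolid module ships with the N|Solid runtime, but treat the exact method name and signature below as assumptions and check the JS API docs for the real surface:

'use strict';
// Illustrative sketch only: method names/signatures are assumptions.
const nsolid = require('nsolid');

// Ask the runtime for the current process metrics.
nsolid.metrics((err, metrics) => {
  if (err) throw err;
  // The same kind of data the C++ ThreadMetrics API exposes, from JS.
  console.log(metrics);
});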
The C++ API differs from our JS API in that it works at a lower level, is more performant, and doesn't block the event loop. A native C++ API lets you configure your code as you prefer, create and pause metrics and consume them whenever necessary, generate heap snapshots or CPU profiles, and use all the N|Solid metrics with no overhead; by removing the JavaScript layer of abstraction, it is ultimately faster and more performant.
In the following example, we present a simple native add-on that demonstrates the use of the C++ API. The add-on spawns a thread and from there creates a repeating timer. In the first timer callback it gathers the thread-specific metrics of the main JS thread, while in the second callback it takes a 5-second CPU profile. Finally, the timer is closed and the thread exits gracefully. Note the importance of running the C++ API from a non-JS thread to avoid a performance hit.
#include <nsolid.h>

#include <assert.h>
#include <cmath> // for std::isnan()

uv_thread_t thread_;
uv_timer_t timer_;
unsigned int count_;

using node::nsolid::CpuProfiler;
using node::nsolid::ThreadMetrics;
using node::nsolid::NSolidErr;

static void got_thread_metrics(ThreadMetrics* ts, uint64_t thread_id) {
  assert(thread_id == 0);
  ThreadMetrics::MetricsStor stor;
  assert(0 == ts->Get(&stor));
  delete ts;

  std::string metrics;
  metrics += "{";
  // NSOLID_ENV_METRICS is an X-macro: it expands V(Type, CName, JSName, MType)
  // once per environment metric, so the block below turns every field of
  // `stor` into a "name": value pair of the JSON string.
#define V(Type, CName, JSName, MType) \
  metrics += "\"" #JSName "\":"; \
  metrics += std::isnan(stor.CName) ? \
    "null" : std::to_string(stor.CName); \
  metrics += ",";
  NSOLID_ENV_METRICS(V)
#undef V
  metrics.pop_back();
  metrics += "}";

  fprintf(stderr, "got_thread_metrics: %s\n", metrics.c_str());
}

static void profiler_done(int status, std::string profile, uint64_t thread_id) {
  assert(status == 0);
  assert(thread_id == 0);
  assert(profile.size() > 0);
  fprintf(stderr, "profiler_done: %s\n", profile.c_str());
}

static void timer_cb(uv_timer_t* timer) {
  switch (++count_) {
    case 1:
    {
      // Retrieve the thread-specific metrics of the main thread (thread_id = 0)
      int thread_id = 0;
      auto* ts = new ThreadMetrics(thread_id);
      int r = ts->Update(got_thread_metrics, thread_id);
      if (r != NSolidErr::NSOLID_E_SUCCESS) {
        delete ts;
      }
    }
    break;
    case 2:
    {
      // Take a CPU profile of the main thread for 5 seconds
      int thread_id = 0;
      node::nsolid::CpuProfiler::TakeProfile(0, 5000, profiler_done, thread_id);
    }
    break;
    case 3:
      uv_close(reinterpret_cast<uv_handle_t*>(timer), nullptr);
      break;
  }
}

static void run(void*) {
  uv_loop_t loop;
  assert(0 == uv_loop_init(&loop));
  // Set up a repeating timer. In its first iteration we will retrieve the
  // thread-specific metrics and in the second iteration we will take a CPU
  // profile.
  assert(0 == uv_timer_init(&loop, &timer_));
  assert(0 == uv_timer_start(&timer_, timer_cb, 3000, 3000));
  do {
    assert(0 == uv_run(&loop, UV_RUN_DEFAULT));
  } while (uv_loop_alive(&loop));
}

NODE_MODULE_INIT(/* exports, module, context */) {
  // This module is to be used only from the main thread.
  if (node::nsolid::ThreadId(context) != 0) {
    return;
  }

  // This is important: to take full advantage of the C++ API, it should be
  // run from a separate thread, never from a JS thread, whether that's the
  // main thread or a worker thread. Running it from a JS thread is of course
  // possible, but it defeats the purpose and you'll notice a non-trivial
  // performance hit.
  int r = uv_thread_create(&thread_, run, nullptr);
  assert(r == 0);
}
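Once the add-on is compiled against the N|Solid headers (for example with node-gyp), loading it from the main thread is all that is needed: the init hook above spawns the metrics thread by itself. A minimal loader sketch follows, where the build path and add-on name are hypothetical:

'use strict';
// Hypothetical build output name: adjust to your own binding.gyp target.
require('./build/Release/nsolid_demo.node');

// Keep the event loop busy so the add-on has some activity to measure.
setInterval(() => {
  JSON.stringify({ now: Date.now() });
}, 100);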
We are providing a Prometheus agent as the reference implementation of an agent built on the N|Solid C++ API. It allows a Prometheus server to connect and pull metrics from N|Solid. This means you can use other APMs and still use N|Solid, gaining performance in the process since it cuts down the overhead created by regular agents. So if you use C++ add-ons and love the N|Solid metrics, check it out!
'use strict';
const { Worker, isMainThread, parentPort } = require('worker_threads');
const prometheus = require('nsolid-prometheus');

if (!isMainThread) {
  // Grab metrics from the worker threads
  prometheus.start();
  const buf = Buffer.alloc(20000);
  const crypto = require('crypto');
  parentPort.on('message', (msg) => {
    if (msg === 'exit') {
      process.exit(0);
    }
    // Perform some synchronous crypto operations
    crypto.randomFillSync(buf).toString('hex');
    const salt = Buffer.allocUnsafe(16);
    const output = crypto.scryptSync(buf,
                                     crypto.randomFillSync(salt),
                                     4096).toString('hex');
    // Random timeout [50ms, 400ms] simulating async ops.
    setTimeout(() => {
      parentPort.postMessage(output);
    }, Math.floor(Math.random() * (400 - 50 + 1)) + 50);
  });

  return;
}

const NUM_THREADS = 4;
const workerPool = [];
const queuedTasks = [];
const config = {
  interval: 1000,
  listener: "localhost:8080",
  gc: {
    histogram: {
      buckets: [ 1000, 1500, 2000, 2500, 3000 ]
    }
  },
  http_server: {
    histogram: {
      buckets: [ 50, 150, 200, 250, 300 ]
    }
  }
};

// Initialize prometheus agent
prometheus.init(config);
for (let i = 0; i < NUM_THREADS; i++) {
  workerPool.push(new Worker(__filename));
}

const workers = workerPool.slice(0);

const http = require("http");
const host = 'localhost';
const port = 3002;

const reqHandler = (worker, res) => {
  worker.postMessage('request');
  worker.once('message', (data) => {
    res.setHeader("Content-Type", "application/json");
    res.writeHead(200);
    res.end(JSON.stringify({ data }));
    if (queuedTasks.length > 0) {
      const task = queuedTasks.shift();
      task(worker);
    } else {
      workerPool.push(worker);
    }
  });
};

const requestListener = (req, res) => {
  if (workerPool.length > 0) {
    const worker = workerPool.shift();
    reqHandler(worker, res);
  } else {
    queuedTasks.push((worker) => reqHandler(worker, res));
  }
};

const server = http.createServer(requestListener);
server.listen(port, host, () => {
  console.log(`Server is running on http://${host}:${port}`);
  // Start grabbing metrics from the main thread
  prometheus.start();
  // Exit after 5 minutes
  setTimeout(() => {
    prometheus.close();
    server.close();
    workers.forEach(w => w.postMessage('exit'));
  }, 300000);
});
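While the server above is running, you can check what the agent exposes by pulling the configured listener directly, or by pointing a Prometheus scrape job at it. The /metrics path below is the usual Prometheus convention and an assumption about nsolid-prometheus:

'use strict';
const http = require('http');

// Assumption: the agent serves the conventional /metrics path on the
// listener configured above (localhost:8080).
http.get('http://localhost:8080/metrics', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => console.log(body));
});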
"We use Prometheus to gain insight into the performance and behavior of individual Node.js processes. As opposed to statsd, which struggles with the high-cardinality dimensions required for per-instance metrics, and therefore can only really be used for aggregated metrics, Prometheus shines in this respect and allows us to dig into individual Node.js processes with ease." Matt Olson - BigCommerce
You can also find the docs here for more information.
Download N|Solid 4.3 here
You can download the latest version of N|Solid via http://accounts.nodesource.com or visit https://downloads.nodesource.com/ directly. To keep up to date with new product releases, new features, and all the latest with Node.js and NodeSource, follow us on Twitter @nodesource.