N|Solid Launches AI-Powered Profiling: Unleashing the Power of CPU and Heap Insights
Optimizing Node.js applications often involves sifting through complex performance data, trying to uncover hidden inefficiencies or bottlenecks. With N|Solid’s AI-powered profiling tools, you gain more than just data—you get actionable recommendations to diagnose and resolve issues with precision and speed. By leveraging predefined prompts and AI-powered analysis, the copilot not only highlights bottlenecks, inefficiencies, or anomalies but also provides clear guidance on how to address them. This streamlined approach ensures you can focus on delivering high-performance applications without the guesswork.
In this article, we’ll walk you through how to unlock these capabilities and supercharge your diagnostic process with N|Solid, the most powerful Node.js observability and diagnostics tool on the planet.
AI-Powered Recommendations
Within the N|Solid Console, you can access our powerful AI Copilot from any view. Selecting the blue icon in the lower-right corner of the interface opens a panel with recommended actions tailored to your current view. N|Solid Copilot has been trained explicitly to support developers and DevOps teams in reducing the time it takes to solve performance and security issues within your Node.js applications. You can try N|Solid, including the Copilot feature, for free today: sign up now.
Here is how it works: when you open the Copilot, you will see some predefined prompts, for example:
- "Generate a diagnostic report for the process xxxxxxx."
- "What are the latest performance results for this process?"
By using these predefined queries, the Copilot streamlines the process of uncovering issues, reducing the time to resolve them.
AI-Driven Insights for Heap and CPU Profiles
Heap Sampling Analysis
In the Heap Sample and CPU Profile assets, you’ll now see an AI icon marked with a star pattern. Clicking this icon generates a comprehensive analysis of the specific heap sample, providing insights across several key areas:
In this example, we are receiving suggestions for three sections:
- Top Allocators: Identifies the areas consuming the most memory.
- Potential Leaks: Detects potential memory leaks and offers actionable suggestions.
- Summary of Findings: Summarizes the most important functions and what you should focus on improving.
Here’s an example of the Copilot output from a Heap Profile analysis:
Analysis of Heap Profile Data
Top Allocators
- requestListener (User Code)
  - Location: /home/cesardiaz/NodeSource/examples/memory-leak-example/index.js, Line 4
  - Total Allocated: 305,232,016 bytes
  - Allocations: This function is responsible for a significant portion of the memory allocation, accounting for approximately 94.6% of the total memory allocated in this profile. The allocation size is 305,232,016 bytes with a single allocation count, indicating a large memory usage in one go. This function should be examined closely for potential optimizations or memory management improvements.
- clearBuffer (Node Internal)
  - Location: node:internal/streams/writable, Line 743
  - Total Allocated: 2,621,840 bytes
  - Allocations: This function is part of Node's internal streams and is not user code. It is not advisable to make changes here.
- endReadable (Node Internal)
  - Location: node:internal/streams/readable, Line 1681
  - Total Allocated: 2,097,520 bytes
  - Allocations: This function is also part of Node's internal streams. No user intervention is recommended.
- _storeHeader (Node Internal)
  - Location: node:_http_outgoing, Line 409
  - Total Allocated: 2,097,216 bytes
  - Allocations: This is part of Node's HTTP module. No changes should be made here.
- writeOrBuffer (Node Internal)
  - Location: node:internal/streams/writable, Line 547
  - Total Allocated: 1,573,008 bytes
  - Allocations: Another internal function related to streams. No user changes are recommended.
Potential Leaks
requestListener is flagged as a potential leak due to its high memory allocation, representing more than 10% of the total memory during the analysis. This does not confirm a memory leak but suggests that this function should be reviewed for memory management practices.
Summary of Findings
- The requestListener function in your user code is the primary consumer of memory, accounting for a substantial portion of the total memory allocation. It is advisable to review this function for any potential memory inefficiencies or leaks. Consider checking for large data structures, unnecessary data retention, or inefficient algorithms that could be optimized.
- The other functions listed as top allocators and potential leaks are part of Node's internal modules and do not require user intervention.
- Overall, the primary focus should be on optimizing the requestListener function to ensure efficient memory usage. If no issues are found, the data looks good, and no further action is needed.
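To make the mechanics concrete, here is a hedged sketch (not N|Solid’s actual implementation) of how a ranking like “Top Allocators” can be derived from the raw tree that V8’s sampling heap profiler produces, where each node carries a callFrame and a selfSize in bytes:

```javascript
// Sketch only: walk a V8 sampling heap profile tree and rank functions by
// total self-allocated bytes, similar in spirit to "Top Allocators".
// The node shape ({ callFrame, selfSize, children }) matches what
// HeapProfiler.stopSampling returns in profile.head.
function topAllocators(head, limit = 5) {
  const totals = new Map();
  (function walk(node) {
    const { functionName, url, lineNumber } = node.callFrame;
    const key = `${functionName || '(anonymous)'} @ ${url}:${lineNumber}`;
    totals.set(key, (totals.get(key) || 0) + node.selfSize);
    for (const child of node.children || []) walk(child);
  })(head);
  return [...totals.entries()]
    .sort((a, b) => b[1] - a[1]) // largest allocators first
    .slice(0, limit)
    .map(([name, bytes]) => ({ name, bytes }));
}
```

The real service also attributes allocations to user code versus Node internals, but the core aggregation is essentially this kind of tree walk.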
How It Works: N|Solid preprocesses the data from each asset, extracting the most critical details. This data is then passed to our AI Service, N|Sentinel, which generates a detailed summary, pinpointing leaks and other crucial insights.
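The “more than 10% of total memory” rule mentioned in the example output can be sketched as follows. This is a simplified stand-in for whatever N|Sentinel actually does, shown only to illustrate the shape of the heuristic:

```javascript
// Simplified stand-in for the heuristic described above: flag any allocator
// whose total allocated bytes exceed 10% of all sampled allocations.
function potentialLeaks(allocators, threshold = 0.1) {
  const total = allocators.reduce((sum, a) => sum + a.bytes, 0);
  return allocators
    .filter((a) => a.bytes / total > threshold) // dominant allocators only
    .map((a) => a.name);
}
```

With the numbers from the example, requestListener (~94.6% of the total) clears the threshold while the Node-internal stream functions do not.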
Why does it matter?
The Copilot’s analysis of heap profile data identifies key areas of memory consumption and potential inefficiencies, providing actionable insights to optimize your Node.js applications. For example, it highlights top allocators like requestListener, which accounted for 94.6% of memory usage in the example, flagging it as a potential leak and suggesting it be reviewed for memory management improvements.
This AI-powered approach streamlines performance diagnostics, offering clear guidance on resolving issues like memory leaks or inefficient code paths. By simplifying complex profiling data into understandable and actionable insights, the Copilot helps developers save time, reduce debugging efforts, and maintain high-performance applications with ease.
CPU Profiles Analysis
You can use the AI service for CPU Profiles as well. Clicking the same star-marked AI icon produces a full analysis, including:
- High Self-Time Functions
- High-Frequency Functions
- Deep Call Stacks
- Significant Call Paths
Each analysis concludes with a concise summary, highlighting key areas of focus for optimization.
AI reports for CPU profiles can also be generated from the profile detail view, as shown in the following image:
Here’s an example of the Copilot output from a CPU profile analysis:
Analysis of CPU Profile
High Self-Time Functions
- writev (unknown:4294967295:4294967295:writev): This function has the highest self-time, accounting for 28.11 seconds (23.35% of total self-time). It is part of the I/O operations, specifically related to writing data to sockets. The high self-time indicates that a significant amount of time is spent in I/O operations, which is expected in network-heavy applications.
- requestListener (/home/cesardiaz/NodeSource/examples/memory-leak-example/index.js:4:24:requestListener): This user function has a self-time of 13.58 seconds (11.28% of total self-time) and a function time of 55.49 seconds. It is frequently called and is a critical part of the application's request handling. The high self-time suggests that this function is a potential area for optimization, especially if it involves complex logic or heavy computations.
- nextTick (node:internal/process/task_queues:102:17:nextTick): This function has a self-time of 13.20 seconds (10.97% of total self-time). It is part of the Node.js event loop mechanism, indicating frequent scheduling of tasks. The interaction with user code, particularly requestListener, suggests that the application might be queuing many tasks, which could be optimized by reducing unnecessary asynchronous operations.
High Frequency Functions
- emit (node:events:465:43:emit) and parserOnHeadersComplete (node:_http_common:70:32:parserOnHeadersComplete) are frequently called, indicating a high volume of event emissions and HTTP header parsing. This is typical in web server applications and suggests that the application is handling many requests.
- requestListener (/home/cesardiaz/NodeSource/examples/memory-leak-example/index.js:4:24:requestListener) is also frequently called, reinforcing its importance in the application's performance profile.
Deep Call Stacks
- There are no deep call stacks present in the profile, which shows that the application has straightforward execution paths. This is a positive sign, indicating that the application does not have overly complex nested calls that could lead to performance issues.
Significant Paths
- (root) → parserOnHeadersComplete → parserOnIncoming → emit → requestListener: This path highlights the flow from HTTP header parsing to the user-defined request listener. The frequent invocation of requestListener suggests that optimizing this function could have a significant impact on overall performance.
- (root) → parserOnHeadersComplete → parserOnIncoming → emit → requestListener → end → Writable.uncork → clearBuffer → doWrite → Socket._writev → Socket._writeGeneric → writevGeneric → writev: This path shows the complete flow from request handling to writing data to the socket. The presence of requestListener in this path indicates its role in the data flow and potential areas for optimization, especially in how data is processed and written.
Interaction with V8 Internal Functions
- The profile shows significant time spent in V8 internal functions like (idle) and (program), which is typical for applications with high I/O wait times. The interaction with these internals is primarily through the event loop and I/O operations, which are expected in a network server context.
Summary
The CPU profile indicates that the requestListener function is a critical part of the application's performance, with both high self-time and frequency. Optimizing this function could lead to significant performance improvements. Consider reviewing the logic within requestListener to identify any heavy computations or unnecessary asynchronous operations that could be streamlined. The absence of deep call stacks is a positive sign, suggesting that the application has a straightforward execution path. Overall, focusing on optimizing user code, particularly the requestListener function, could enhance the application's efficiency.
Why does it matter?
The Copilot’s CPU profile analysis identifies key performance bottlenecks and areas for optimization. It highlights high self-time functions, such as requestListener, which accounts for significant processing time, suggesting it as a critical focus for optimization. High-frequency functions like emit and parserOnHeadersComplete reveal the application's workload, while significant call paths emphasize the importance of optimizing the data flow through requestListener.
This analysis matters because it provides actionable insights to streamline application performance. By focusing on optimizing user-defined functions like requestListener, reducing unnecessary asynchronous operations, and enhancing I/O efficiency, developers can significantly improve the responsiveness and efficiency of their Node.js applications.
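As one concrete illustration of “reducing unnecessary asynchronous operations,” consider the difference between scheduling one task per item and one task per batch. This is a hypothetical sketch; the injected schedule parameter exists only to make the queued-task counts observable, and in real code both functions would simply use process.nextTick:

```javascript
// Hypothetical sketch: per-item scheduling floods the tick queue, while
// batched scheduling does the same work with a single queued task.
function perItemTicks(items, handle, schedule = process.nextTick) {
  for (const item of items) {
    schedule(() => handle(item)); // one queued task per item
  }
  return items.length; // number of tasks queued
}

function batchedTick(items, handle, schedule = process.nextTick) {
  schedule(() => {
    for (const item of items) handle(item); // single queued task
  });
  return 1; // number of tasks queued
}
```

Patterns like the second one are the kind of change the Copilot’s nextTick observation points toward: fewer event-loop entries for the same amount of work.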
Function-Specific Queries
The latest update also introduces the ability to ask specific questions about individual functions within CPU profiles. For example, by selecting a function, you can query:
- "What is the self-time and call stack for this function [xxxx]?"
The Copilot responds with detailed insights, such as:
- Self-Time: How long the function takes to execute.
- Call Stack: A clear trace of where the function originated and its dependencies.
Example: If a function’s self-time is negligible, the Copilot will highlight the call stack and identify whether the function is contributing to bottlenecks elsewhere.
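For context on where a self-time number comes from: in a sampled V8 CPU profile, a function’s self-time is approximately its hitCount multiplied by the sampling interval. A hedged sketch of that calculation (not N|Solid’s code, and assuming the standard profile fields startTime, endTime, samples, and nodes):

```javascript
// Sketch: approximate per-function self-time (in microseconds) from a V8
// CPU profile. Each node's hitCount is the number of samples that landed
// in that function itself, excluding its callees.
function selfTimes(profile) {
  const intervalMicros =
    (profile.endTime - profile.startTime) / profile.samples.length;
  const result = {};
  for (const node of profile.nodes) {
    const name = node.callFrame.functionName || '(anonymous)';
    result[name] = (result[name] || 0) + node.hitCount * intervalMicros;
  }
  return result;
}
```

This is the raw quantity behind answers like “writev accounts for 28.11 seconds of self-time.”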
Simplifying CPU Profile Analysis
Traditionally, CPU profiles can appear overwhelming, presenting raw data with little correlation. The new AI-powered feature transforms this experience, highlighting:
- Key Bottlenecks: Functions that slow down execution.
- Actionable Insights: Suggestions for optimization based on high-frequency or deep call stack functions.
How the AI Report Works
The AI-powered analysis takes the raw CPU profile data and identifies correlations to pinpoint critical user functions. It evaluates these functions based on several criteria:
- Complex Execution Paths: Identifies functions with intricate interactions with other parts of the application.
- High Self-Times: Focuses on functions that take significant time to complete, potentially impacting performance.
- Deep Call Stacks: Flags functions with call stack depths greater than 40, which can indicate inefficient execution paths.
- High Frequency: Detects functions that are called frequently, which may contribute to application bottlenecks.
By synthesizing this data, the AI not only identifies problem areas but also analyzes how these functions interact with others, offering a holistic view of performance issues.
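The deep-call-stack criterion above (depth greater than 40) can be sketched as a simple tree walk over a CPU profile’s call tree. This is a simplified stand-in for N|Solid’s internal heuristic, assuming nodes with a children array:

```javascript
// Simplified stand-in for the deep-call-stack check described above:
// flag profiles whose call tree exceeds 40 frames of depth.
const DEEP_STACK_THRESHOLD = 40;

function maxDepth(node, depth = 1) {
  let deepest = depth;
  for (const child of node.children || []) {
    deepest = Math.max(deepest, maxDepth(child, depth + 1));
  }
  return deepest;
}

function hasDeepCallStack(rootNode) {
  return maxDepth(rootNode) > DEEP_STACK_THRESHOLD;
}
```

In the example analysis earlier, this check came back negative, which is why the report called the execution paths straightforward.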
The Copilot also proposes follow-up questions, such as:
- “Are there any deep call stacks in my profile? If so, show the function with the deepest call stack.”
- “Are there significant call paths related to high self-times? If so, show me the path for the function with the highest self-time.”
These guided questions empower users—regardless of expertise—to analyze CPU profiles with precision.
Why This Matters
This update is a game-changer for developers and performance engineers. The combination of global filters, AI-driven diagnostics, and targeted recommendations simplifies complex performance analysis. Even users unfamiliar with heap samples or CPU profiles can gain actionable insights to optimize their applications.
With these enhancements, N|Solid continues to set the standard for Node.js performance monitoring, empowering teams to work smarter, not harder.
Looking for advanced or enterprise support for Node.js? NodeSource is the premier provider, trusted by organizations like Delta, Visa, and Kaiser, to ensure robust support for their Node.js platforms. Partner with us to keep your applications secure, performant, and reliable.