Your GPU dashboard is lying to you

The standard GPU utilization metric doesn't measure what it claims to. Here's what
accurate measurement actually looks like, and an open-source tool to bring it to every AI deployment.

April 19th, 2026

15 min read

TL;DR

The standard GPU utilization metric, the one reported by nvidia-smi, nvtop, rocm-smi, Weights & Biases, Amazon CloudWatch, Google Cloud Monitoring, and Azure Monitor, does not measure how hard your GPU is actually working. It only tells you whether the GPU is doing anything at all. Real compute throughput can be as low as 1% while dashboards read 100%. That single misleading number drives enormous amounts of wasted spend, wasted energy, and unnecessary hardware purchases across the AI industry.


Systalyze is open-sourcing Utilyze, a free, production-ready monitoring and debugging tool that accurately shows how efficiently your GPUs are actually doing useful work, and how close you are to the realistic maximum for your specific workload. Utilyze runs alongside any AI workload in real time with negligible overhead. In production deployments, Utilyze revealed orders-of-magnitude performance headroom in settings that standard tools declared fully saturated.


This has real consequences. New hardware takes months to acquire, energy costs are climbing alongside AI's surging electricity demand, and every dollar spent on unnecessary GPUs is a dollar not spent on the models themselves. Every percentage point of real throughput recovered from existing hardware is money not spent, a server rack not built, and a kilowatt-hour not consumed. Accurate measurement is the foundation, and Systalyze is the optimization platform built on top of it, enabling you to close the gap between where your deployment is and where it could be.

The AI Industry Has a Measurement Problem

The world is on the brink of an AI compute crisis. The electricity demand of AI clusters is rising quickly, lead times for acquiring GPUs stretch into months, and NVIDIA H100 one-year rental contract pricing rose almost 40% from October 2025 to March 2026. Getting more hardware is slow, expensive, and for many organizations, simply not an option.

Such scarcity in AI compute puts “optimization” at the center of what every enterprise is trying to achieve. That is the core of what Systalyze does: diagnosing and optimizing AI systems to improve the end-to-end performance of AI workloads. But working across many production AI deployments, we kept encountering the same surprising reality: most teams had no idea how inefficiently their GPUs were actually running. They assumed high utilization because their dashboards said so. Before anyone can close a performance gap, they have to be able to see it. Accurate GPU utilization measurement is not just useful, it is the prerequisite for any meaningful optimization. And it turns out the measurement tool that most organizations depend on for that is wrong.

In particular, the GPU utilization metric, reported by nvidia-smi, nvtop, amd-smi, Weights & Biases (gpu.{i}.gpu), Amazon CloudWatch, Google Cloud Monitoring, and Azure Monitor, does not measure how hard your GPU is working. It measures whether your GPU is doing anything at all. If at least one kernel is executing throughout the sampling window, the metric can read 100%, regardless of whether the GPU is using a fraction of a percent of its actual compute capacity or saturating it.

This lack of insight into utilization drives bad decisions, like purchasing more GPUs under the belief that existing ones are at capacity, and makes it harder to identify where workloads can be optimized.

We’re introducing Utilyze (utlz), a free, open-source tool that fills this gap. Unlike existing monitoring tools, Utilyze measures how efficiently your GPU is actually doing useful work, not just whether it’s running, and shows you this live, without slowing down your workload. It also tells you the realistic ceiling for your specific hardware and model combination, so you know whether you’re close to maximum performance or leaving capacity on the table.

What nvidia-smi Is Actually Telling You

To understand why the standard metric falls short, it helps to look at how the number is actually computed. The mechanics are simple, which is part of the problem. The GPU samples a binary signal, "is at least one kernel scheduled on the GPU right now?", and averages it over a sampling window (typically 1 second). The reported percentage is the fraction of that window where the answer was "yes." If a single kernel ran for 400ms of a 1-second window, nvidia-smi reports 40% total GPU utilization. If a kernel ran the entire window, even a single tiny kernel on one of the Streaming Multiprocessors (SMs), it reports 100%. For perspective, an H100 GPU has 132 SMs, each containing 128 CUDA cores and 4 Tensor Cores: 17,424 execution units in total. Essentially, nvidia-smi treats one busy CUDA core and thousands of busy CUDA cores identically.
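The sampling logic described above can be captured in a few lines. This is an illustrative model of the metric's semantics, not NVIDIA's actual driver implementation; the function name and window size are ours.

```python
def reported_utilization(busy_ms: float, window_ms: float = 1000.0) -> float:
    """Fraction of the sampling window during which at least one
    kernel was resident on the GPU, as a percentage. This is all the
    standard "GPU utilization" metric measures."""
    return min(busy_ms, window_ms) / window_ms * 100.0

# One kernel active for 400 ms of a 1-second window -> 40%.
print(reported_utilization(400))   # 40.0

# A kernel resident for the whole window -> 100%, even if it occupies
# a single SM out of 132 and touches one CUDA core. The metric carries
# no information about how much arithmetic was actually done.
print(reported_utilization(1000))  # 100.0
```

Note that the input is purely temporal: nothing about FLOPs, active SMs, or memory traffic enters the calculation, which is exactly the blind spot discussed below.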

The metric was designed for an earlier era when GPUs were running graphics pipelines and knowing whether the GPU was idle or active was useful. A graphics workload either has frames to render or it doesn’t. That binary distinction made sense then. It has not been updated for AI workloads, where a model can run continuously on the GPU while using a small fraction of its actual compute capacity.

This limitation propagates through the entire monitoring stack. Weights & Biases automatically logs GPU utilization as gpu.{i}.gpu in its system metrics, sourced directly from nvidia-smi. The same is true of the major cloud monitoring dashboards: Amazon CloudWatch’s GPU monitoring, for example, uses nvidia-smi as its data source (the metric is literally named nvidia_smi_utilization_gpu), and the equivalent GPU utilization surfaces in GCP Monitoring and Azure Monitor are built on the same underlying driver counter. The situation on AMD GPUs is the same: rocm-smi reports the identical “any kernel scheduled” metric as nvidia-smi.


“Cloud providers and hardware vendors surface this same misleading metric on their dashboards. When that number reads 100%, the natural conclusion is that you need more hardware. The incentives to correct this misimpression are, to put it diplomatically, complicated.”

— Manya Ghobadi, MIT Professor & CEO, Systalyze

The Evidence: Dashboards Say 100%. Reality Can be as Low as 1%.

The default tool used by many engineers today is nvtop, an open-source GPU monitor that supports both NVIDIA and AMD hardware. Under the hood, nvtop calls nvidia-smi for NVIDIA GPUs and rocm-smi for AMD GPUs, and it inherits the same broken metric. To demonstrate the severity of the situation, we run a series of square matrix multiplications in a loop on an NVIDIA H100 GPU at three different sizes: N = 256, N = 1024, and N = 4096, where each operation multiplies two N×N matrices, and profile with both nvtop and Utilyze.

nvtop (top row) reads 100% on all three workloads regardless of the size of the matrix multiplications. Utilyze (bottom row) tracks actual compute throughput, showing dramatic utilization variation for different matrix sizes.

As shown in the above figure, nvtop is invariant to workload intensity: all three matrix multiplication sizes show 100% in nvtop (top row: cyan line pinned at the ceiling). Utilyze (bottom row) shows compute throughput scaling with matrix size, from 2.5% at N=256 to 41% at N=1024 and 88% at N=4096.

To validate the correctness of Utilyze, let’s calculate the true compute utilization directly: a matrix multiplication of two N×N matrices at TF32 precision performs 2·N³ floating-point operations. An NVIDIA H100's TF32 Tensor Core peak is 378 TFLOPS. Hence, at N=256, 2·256³ ≈ 0.034 GFLOPs × 155,349 iterations/sec = 5.2 TFLOPS, or 1.4% of peak; at N=1024, 42% of peak; and at N=4096, 88% of peak. These ground-truth numbers are within 2 percentage points of the values Utilyze reported.
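The back-of-envelope check above is easy to reproduce. A minimal sketch, using the 378 TFLOPS peak and the N=256 iteration rate quoted in the text (the rates for other sizes would come from your own measurements):

```python
# Compute SOL % for a looped N x N TF32 matmul, from first principles.
H100_TF32_PEAK_TFLOPS = 378.0  # dense TF32 Tensor Core peak, per the text

def matmul_sol_pct(n: int, iters_per_sec: float) -> float:
    flops_per_iter = 2 * n**3                          # one N x N matmul
    achieved_tflops = flops_per_iter * iters_per_sec / 1e12
    return achieved_tflops / H100_TF32_PEAK_TFLOPS * 100.0

# N=256 at the measured 155,349 iterations/sec -> about 1.4% of peak.
print(round(matmul_sol_pct(256, 155_349), 1))  # 1.4
```

The same function, fed measured iteration rates for N=1024 and N=4096, reproduces the 42% and 88% figures.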

While this direct calculation is tractable for a simple compute operation like direct matrix multiplication, it becomes intractable for real-world AI workloads. Modern training, fine-tuning, and inference pipelines consist of heterogeneous operators (attention, normalization, communication, sparsity, control flow), dynamic shapes, and complex scheduling effects across the GPU. In such settings, deriving true utilization analytically from first principles is not practical. What is needed instead is a method that measures utilization directly at the hardware level.

Utilyze provides exactly this capability: direct measurement of true compute utilization via GPU hardware performance counters. Utilyze arrives at nearly identical values (within 2%) from the other direction. Instead of deriving utilization from FLOP counts, it samples hardware counters on the GPU directly. The two methods agree because they measure the same physical thing from different angles: arithmetic work done against arithmetic capacity available. This cross-validation confirms Utilyze’s hardware-counter approach is accurate. No other tool today delivers this level of accuracy in real time without incurring meaningful overhead.

DCGM-based Counters Aren’t Much Better 

Prior articles have pointed out this gap and suggested alternative metrics through NVIDIA's Data Center GPU Manager (DCGM), a toolkit that exposes richer GPU counters than nvidia-smi (see here and here). 

The most common proxy for GPU utilization is DCGM’s “SM Active.” It measures the ratio of SMs with at least one warp scheduled to the total number of SMs. This metric is an improvement over nvidia-smi, because it at least considers some compute activity inside the GPU rather than treating the whole chip as a single on/off switch. But SM Active, and other DCGM metrics, have the same shape of problem one level down: a warp being resident on an SM does not mean that SM is doing arithmetic. The warp could be moving data, waiting for data to arrive from memory, or running bookkeeping instructions the entire time, and SM Active would still read 100%. Utilyze is specifically built to answer the true GPU utilization question: what fraction of peak arithmetic throughput is the GPU actually delivering? No off-the-shelf tool, including DCGM, provides this continuously in production.

To see this in practice, we ran a memory-bound workload on an H200, similar in shape to a decode-heavy LLM inference step, with nvtop, DCGM, and Utilyze. Under this workload the actual arithmetic throughput is around 8% of the ceiling:

nvtop: 100% | SM Active (DCGM): 98% | Utilyze: 8%

Only Utilyze gets it right. nvtop is wrong for the reason we already covered. SM Active is wrong because the SMs really do have warps resident the whole time; those warps are waiting on memory rather than doing math, and SM Active cannot distinguish between a warp that is computing and a warp that is sitting idle waiting for data. If you rely on SM Active alone to monitor GPU utilization, you might assume the GPU is fully saturated while its compute units are actually sitting idle.

DCGM reports other metrics, such as SM Issue (how often instructions are being issued), SM Occupancy (how full the SMs are with warps), and Tensor Core throughput. None of these metrics, independently or combined, shows the full picture that Utilyze provides.
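The distinction between "warps resident" and "math being done" can be made concrete with a toy cycle-level model. The timeline states below are our simplification (real hardware counters are far richer), but the arithmetic of the two metrics matches the H200 example above.

```python
# Toy model: each entry is one cycle of one SM's dominant activity.
# "math"      -> arithmetic pipes issuing instructions
# "mem_stall" -> warps resident but stalled waiting on HBM
# "idle"      -> no warp resident at all

def sm_active_pct(timeline) -> float:
    """DCGM-style SM Active: % of cycles with any warp resident."""
    return 100.0 * sum(s != "idle" for s in timeline) / len(timeline)

def compute_sol_pct(timeline) -> float:
    """% of cycles where arithmetic is actually issued."""
    return 100.0 * timeline.count("math") / len(timeline)

# Decode-like workload: warps resident every cycle, but ~92% of cycles
# are spent waiting on memory rather than doing math.
timeline = ["math"] * 8 + ["mem_stall"] * 92
print(sm_active_pct(timeline))    # 100.0 -> "fully saturated"
print(compute_sol_pct(timeline))  # 8.0   -> the picture Utilyze reports
```

Both metrics are computed from the same timeline; only the second asks whether arithmetic actually happened.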

Introducing Utilyze, Open-Sourced by Systalyze

We built Utilyze as an open-source GPU monitoring tool that reports true GPU compute and GPU memory bandwidth utilization as a percentage of the hardware’s theoretical limit. Beyond raw utilization, Utilyze also estimates the portion of the theoretical limit that is practically attainable under the current hardware, software stack, and AI workload. Utilyze operates in real time with near-zero overhead, making it suitable for production environments where continuous observability is required without perturbing performance. At Systalyze, we use it to monitor, benchmark, and validate our performance optimization techniques, and we think everyone should use it.

To install:

$ curl -fsSL https://systalyze.com/utilyze/install.sh | bash

Before describing how Utilyze works, let’s unpack why accurate GPU utilization is a technically difficult measurement problem. GPUs have two fundamentally different types of compute resources: CUDA cores for general floating-point math, and Tensor Cores that perform matrix multiplications. They also have multiple levels of memory: HBM (high bandwidth memory) sitting off-chip, L2 cache, shared memory inside each SM, and registers local to each thread. Each of these resources can be a bottleneck independently. A workload can be using its Tensor Cores at full capacity while memory bandwidth sits nearly idle, or vice versa. A single percentage cannot represent this two-dimensional reality. 

As a result, every AI operation on a GPU is constrained by two physical limits: how fast the math units can execute arithmetic (compute throughput), and how fast data can move between memory and the math units (memory bandwidth). Every kernel hits one of these limits first, and that determines its maximum possible performance. 

This brings us to the framework that actually captures GPU utilization accurately: the Speed-of-Light (SOL) model.  This model is a performance framework that measures how close a kernel gets to the GPU's theoretical hardware ceiling, reporting two key numbers: Compute SOL % (achieved FLOPs ÷ peak FLOPs) and Memory SOL % (achieved bandwidth ÷ peak bandwidth). It derives from the roofline model, where every kernel is bounded by either compute or memory, and the higher of the two SOL percentages identifies the binding constraint. 

Utilyze provides exactly that, with two headline numbers: Compute SOL % and Memory SOL %. Both are shown live. The numerator comes from direct measurement of each compute engine (e.g., Tensor Cores, FP32/FP64/INT32 pipelines) and each memory subsystem (e.g., HBM bandwidth, L2, L1) where NVIDIA exposes each as a percentage of that hardware unit's theoretical maximum. The denominator is the SOL itself, the hardware peak. Together, these give you an accurate, live picture of GPU utilization that no other tool provides. If the compute number is dominant, your workload is compute-bound. If the memory number is dominant, you're memory-bound, and optimizations should target data movement first.

But it doesn’t end here. Here's something important that raw SOL % doesn't tell you on its own: 100% is not a realistic target.

The theoretical hardware peak (on an H100, 2,000 TFLOPS of compute and 3.4 TB/s of memory bandwidth) is a physical limit that no real AI workload can reach. Kernel launches have overhead. Data moves between levels of the memory hierarchy. Thread synchronization takes cycles. In multi-GPU setups, communication between GPUs consumes time that could otherwise be spent on computation. For Mixture-of-Experts models, routing tokens to different experts creates irregular memory access patterns that reduce effective throughput. None of these are signs of poor optimization; they're structural properties of real deployments.

Every deployment has a natural ceiling below 100% that reflects the specific combination of model architecture, hardware, parallelism strategy, and batch size. We call this ceiling the Attainable Compute SOL %, hereafter referred to as Attainable SOL %. The gap between your current SOL % and the Attainable SOL % is your optimization budget. The gap between the Attainable SOL % and 100% is the physics of your deployment; you can't close it by tuning.

For instance, if you're running a 120B-parameter inference setup at 30% Compute SOL % and the Attainable SOL % for that model on that hardware is 35%, you're close to the limit. If the Attainable SOL % is 65% and you're at 30%, you have 35 percentage points of recoverable performance, and the right move is optimization, not procurement.
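The decision rule in the example above reduces to one subtraction. A trivial sketch, with the function name ours:

```python
def optimization_budget_pp(compute_sol_pct: float, attainable_sol_pct: float) -> float:
    """Percentage points of recoverable performance: the gap between
    measured Compute SOL % and the workload's attainable ceiling."""
    return max(0.0, attainable_sol_pct - compute_sol_pct)

# 30% measured vs. 35% attainable: near the limit, buying more GPUs
# may be the only lever left.
print(optimization_budget_pp(30, 35))  # 5

# 30% measured vs. 65% attainable: 35 points of headroom, so the
# right move is optimization, not procurement.
print(optimization_budget_pp(30, 65))  # 35
```

The gap between the attainable ceiling and 100%, by contrast, is structural and not worth chasing.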

Why Is Utilyze Different?

Performance engineers often rely on two main tools to debug performance problems of AI workloads. First is Nsight Compute (ncu), a kernel-level profiler that reports detailed compute and memory throughput metrics, such as what fraction of the Tensor Core's theoretical throughput was actually achieved, what fraction of the memory bus was saturated, and where the bottleneck lies. The second tool is Nsight Systems (nsys), a timeline tool that records when kernels ran and how they interacted.

Both tools are built for offline analysis rather than a real-time dashboard. ncu gets its detail by "replaying" each kernel, running it many times with different counters selected, then stitching the results together. The result is valuable, but its overhead causes the workloads to run 10× to 100× slower than normal, which rules it out for live traffic. nsys avoids the slowdown but doesn't report throughput metrics at all, it answers "what happened" rather than "how efficiently."

The practical consequence: seasoned engineers who regularly reach for ncu (or its AMD equivalent, Omniperf) are using them for offline, per-kernel debugging and not to watch live traffic.

To address this challenge, Utilyze cycles through GPU performance counters across time windows using NVIDIA's Nsight Perf SDK. Rather than replaying kernels, Utilyze takes a rolling sample across multiple windows and aggregates the result. The overhead is therefore negligible and the measurement is continuous. You can run Utilyze alongside any production AI workload and get meaningful data in real time.
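In the abstract, the strategy looks like the sketch below: rotate through counter groups one window at a time and keep a rolling estimate per group, rather than replaying each kernel with every counter enabled. The group names and the read_counters callback are hypothetical stand-ins; this is not the Nsight Perf SDK API.

```python
from collections import deque

# Hypothetical counter groups; real hardware exposes many more.
COUNTER_GROUPS = ["tensor_pipe", "fp32_pipe", "hbm_bw"]

class RollingSampler:
    """Cycle through counter groups across time windows, keeping a
    rolling history per group so every metric stays fresh without
    ever pausing or replaying the workload."""

    def __init__(self, window_count: int = 8):
        self.history = {g: deque(maxlen=window_count) for g in COUNTER_GROUPS}
        self._i = 0

    def sample(self, read_counters):
        """Each window, read only one group's counters, then rotate."""
        group = COUNTER_GROUPS[self._i % len(COUNTER_GROUPS)]
        self.history[group].append(read_counters(group))
        self._i += 1

    def estimate(self, group):
        """Rolling average over the group's recent windows."""
        vals = self.history[group]
        return sum(vals) / len(vals) if vals else None
```

Because only one group is collected per window, per-window overhead stays flat no matter how many metrics are tracked; the trade-off is that each metric is a slightly time-lagged estimate rather than an exact per-kernel value.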

Benchmarking Utilyze 

The following are a few examples demonstrating how to leverage Utilyze to identify performance bottlenecks in real AI workloads.

Case 1: Prefill-heavy LLM inference

Let’s start with an inference workload: a Llama-3.1-8B model running with vLLM 0.19 on 2× H200 GPUs. We first use a prefill-heavy workload with Input Sequence Length (ISL) of 8192, Output Sequence Length (OSL) of 64, and concurrency of 20. The following figure shows the output of Utilyze as this workload runs.

Utilyze shows that these GPUs are running at around 45% of their theoretical maximum, according to the Compute SOL % metric for this workload. Note that the Memory SOL % metric is lower than the Compute SOL %, indicating that this workload is not memory-bandwidth bound; rather, it is compute-bound. This is useful when comparing to decode-heavy inference workloads, which are often memory-bound. Utilyze estimates the upper-bound compute utilization, or Attainable SOL %, at 89%. This number is model-, GPU-, and workload-specific: inherent properties of certain models and workloads cause their Attainable SOL % to vary. The difference between the Attainable SOL % and the Compute SOL % indicates that the GPU is currently underutilized.

Let’s now compare this to nvtop:

nvtop's utilization sits at 100% the entire time. Reading this metric as a measure of GPU utilization would mislead you into concluding that the GPU is fully utilized and no optimization is possible. Utilyze tells us this isn’t the case.

Now let’s apply Systalyze’s optimizations to this model and run the same benchmark:

The figure above shows that the new Compute SOL % line reaches the Attainable SOL %, meaning we have pushed the GPU nearly as far as possible for this model. The throughput numbers match this increase in utilization: total token throughput before Systalyze’s optimization is 52,298 tokens/s; with the optimizations, it reaches 73,903 tokens/s, a roughly 40% increase.

Case 2: Decode LLM inference

Interpreting Utilyze’s GPU utilization numbers in decode-heavy inference requires a greater understanding of the underlying mechanics. We’ll walk through a number of different scenarios and explain how Utilyze helps understand what’s actually happening inside the GPU.

Let’s start with the same model, unoptimized, with a decode-heavy workload (ISL = 1024, OSL = 4096, concurrency = 1): 

The above figure shows that the Memory SOL % is significantly higher than the Compute SOL %, which indicates that this workload is memory-bandwidth bound. Decode-heavy LLM workloads are often memory-bandwidth bound, not compute-bound (see here). This is because for each batch of tokens decoded, the entire model weights and the KV cache of each user’s queries need to be moved from HBM to the compute units of the GPU.
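A rough arithmetic-intensity estimate shows why batch-1 decode ends up memory-bound. The sizes below are ballpark figures for an 8B-parameter model in FP16, not measured values, and we ignore the KV cache for simplicity (including it only lowers the intensity further):

```python
# Per decode step at batch size 1: stream all weights from HBM,
# performing roughly 2 FLOPs (one multiply, one add) per weight.
params = 8e9
bytes_per_param = 2                      # FP16 weights
weight_bytes = params * bytes_per_param  # ~16 GB moved per step
flops = 2 * params                       # ~16 GFLOPs of math per step

intensity = flops / weight_bytes         # FLOPs per byte moved
print(intensity)                         # 1.0
```

An H200's roofline ridge is on the order of 200 FLOPs per byte (roughly 1e15 FP16 FLOPs/s against ~4.8e12 bytes/s of HBM bandwidth), so at ~1 FLOP/byte the memory system saturates long before the math units do, which is what the high Memory SOL % and low Compute SOL % in the figure reflect.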

Let’s run the same workload, but with a higher concurrency (ISL = 1024, OSL = 4096, concurrency = 32): 

At higher concurrency, both the Memory SOL % and Compute SOL % report higher values. The Compute SOL % is higher due to the larger batch size: for each batch of tokens, we only have to read the model weights from memory once, which results in more compute work per batch. The Memory SOL % reports higher values because the GPUs are reading more information from the KV cache in total. The Memory SOL % increases over the course of the benchmark since later tokens have a larger KV cache to read from memory when performing a decode step.

Case 3: LLM Fine-Tuning

Let us now fine-tune our Llama-3.1-8B model with LoRA on two A100 80GB GPUs, using default framework settings. LoRA (Low-Rank Adaptation) is a widely used parameter-efficient fine-tuning technique: rather than updating all model weights, it inserts small trainable adapter matrices at each transformer layer while keeping the base model frozen. The training loop alternates between a forward pass through the frozen model, a backward pass to compute gradients for the adapter layers, and an optimizer step to update only the adapter parameters. Utilyze reports a Compute SOL % of 13–16% throughout, substantially below the hardware’s theoretical maximum. nvidia-smi, as in every case we have examined, reads 100% throughout.

The low Compute SOL % is characteristic of LoRA fine-tuning under default settings, and understanding why requires looking at the arithmetic intensity of the operations involved. The dominant cost during the forward and backward passes is streaming the frozen base model weights through HBM on every training step. Those reads are large and sequential, which is efficient for memory bandwidth, but they produce relatively little arithmetic work per byte moved, placing this workload firmly in the memory-bound regime. Meanwhile, the LoRA adapter layers themselves are small: with a typical rank of 8 to 64, the matrix multiplications they introduce have problem sizes far too small to saturate the Tensor Cores. The result is that the GPU is dispatching kernels continuously throughout training, but the Tensor Cores are underutilized for much of that time, waiting on data rather than performing arithmetic. This is the same fundamental pattern seen in the memory-bound decode-heavy inference case: the GPU appears saturated from the outside, while the compute units sit largely idle inside.
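How small the adapter math is relative to the frozen base can be seen with a quick estimate. The dimensions below are illustrative (a hidden size in the Llama-3.1-8B range and a mid-range LoRA rank), not exact model figures:

```python
# Per token, per adapted projection: the frozen base does one
# hidden x hidden matmul; the LoRA adapter adds two skinny matmuls
# through the rank-r bottleneck (x @ A, then @ B).
hidden = 4096
rank = 16

base_flops = 2 * hidden * hidden   # frozen W @ x
lora_flops = 2 * hidden * rank     # x @ A  (hidden -> rank)
lora_flops += 2 * rank * hidden    # (xA) @ B  (rank -> hidden)

ratio = lora_flops / base_flops
print(ratio)                       # 0.0078125
```

The trainable math adds well under 1% of the base projection's FLOPs, so the GPU's time is dominated by streaming frozen weights, exactly the memory-bound pattern the Compute SOL % numbers show.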

The figure below shows the Utilyze output for this workload before and after applying Systalyze’s optimizations. In the baseline run, Compute SOL % sits steadily between 13% and 16%. Applying Systalyze’s optimizations brings the Compute SOL % to 50–78%. This represents a 3–5× improvement in actual GPU compute throughput, reflected directly in training step time. The underlying compute capacity was always there. What was missing was the measurement to make it visible, and the tooling to act on it.

LLM fine-tuning: from 13–16% to 50–78% true utilization. Baseline (left) shows ~13–16% Compute SOL %. Optimized (right) approaches the Attainable SOL % for this configuration. The underlying compute capacity was always there; the deployment strategy was leaving it idle.

Optimization as a Discipline, Not a Reaction

At Systalyze, we've observed hundreds of production deployments, and have noticed that optimization is never a one-time fix. Models get updated. Traffic patterns change. A batch size tuned for last month's request volume may be wrong today. A deployment optimized on A100s behaves differently on H100s. New parallelism strategies emerge; new quantization formats ship; kernel libraries evolve. The gap between your current SOL % and the Attainable SOL % doesn't stay constant; it has to be tracked continuously.

Continuous, disciplined measurement of true GPU utilization is what separates a deployment that compounds gains over time from one that quietly regresses. Until now, that kind of continuous measurement hasn't existed as an off-the-shelf capability: no existing open-source tool provides a live, production-safe view of Compute SOL % and Memory SOL % for running workloads. Utilyze brings that capability to every team. Systalyze is the layer that uses it to actually close the gap: diagnosing where a deployment sits relative to its Attainable SOL %, choosing the right optimizations, and validating each one against the measured impact.


"The gap isn't awareness — engineers who write CUDA kernels know what accurate utilization looks like. The gap is tooling. There has never been a way to see true GPU efficiency continuously, in production, without slowing down the workload."


— Manya Ghobadi, MIT Professor & CEO, Systalyze

From Measurement to Performance: Systalyze

Utilyze shows you where you are. Systalyze closes the gap. Our platform uses the same SOL measurement infrastructure to automatically identify which optimization technique (e.g., CUDA graph compilation, rewriting efficient kernels, parallelism strategy selection, hyper-parameter tuning, kernel fusion, zero-copy, kernel-bypass, efficient job orchestration, and more) will move your deployment toward its Attainable SOL %. Each optimization is validated by its measured SOL impact.

Across deployments ranging from sub-billion-parameter inference models to trillion-parameter frontier models, in the cloud or on-premises, default configurations consistently leave 2–10× performance on the table. The right combination of optimizations, guided by accurate measurement, recovers most of it.

What We're Asking From the Community

Utilyze is a free, open-source project under the Apache 2.0 license.

Run Utilyze on your workloads. Share your numbers. Tell us what you find, especially surprising gaps between what other dashboards report and what Utilyze measures. The more data points the community contributes, the better we can calibrate the Attainable SOL % across different model architectures, hardware generations, and deployment configurations.

To share findings, open a GitHub Discussion in the Utilyze repository with your model, hardware, baseline SOL %, and any optimizations you've tried. We'll be actively monitoring and responding. For deeper collaboration or enterprise deployments, reach out at utilyze@systalyze.com.

The initial release targets NVIDIA hardware. AMD support is on our roadmap; if you're running MI300X or MI325X and want to collaborate, reach out through the channels below.

Get Utilyze on GitHub Learn about Systalyze

Optimization as a Discipline, Not a Reaction

At Systalyze, we've observed hundreds of production deployments and noticed that optimization is never a one-time fix. Models get updated. Traffic patterns change. A batch size tuned for last month's request volume may be wrong today. A deployment optimized on A100s behaves differently on H100s. New parallelism strategies emerge; new quantization formats ship; kernel libraries evolve. The gap between your current SOL % and the Attainable SOL % doesn't stay constant; it has to be tracked continuously.
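To make the "tracked continuously" point concrete, here is a minimal sketch of what continuous SOL tracking with regression detection could look like. This is an illustration only, not Utilyze's implementation; the function names, window size, and threshold are hypothetical:

```python
from collections import deque

def make_regression_monitor(window: int = 10, drop_threshold: float = 0.8):
    """Return a callback that ingests periodic SOL % samples and flags
    when the latest sample falls below drop_threshold x the rolling mean.
    (Hypothetical sketch; not Utilyze's actual detection logic.)"""
    history: deque[float] = deque(maxlen=window)

    def observe(sol_pct: float) -> bool:
        # Compare the new sample against the rolling baseline before updating it.
        regressed = bool(history) and sol_pct < drop_threshold * (sum(history) / len(history))
        history.append(sol_pct)
        return regressed

    return observe

observe = make_regression_monitor()
for s in [42.0, 41.5, 43.0, 42.2]:
    assert not observe(s)   # stable samples: nothing flagged
assert observe(25.0)        # sharp drop below 80% of the rolling mean: flagged
```

The point of the sketch is that a single snapshot can't catch this: the deployment was "fine" at every earlier sample, and only a continuously maintained baseline reveals the regression.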

Continuous, disciplined measurement of true GPU utilization is what separates a deployment that compounds gains over time from one that quietly regresses. Until now, that kind of continuous measurement hasn't existed as an off-the-shelf capability: no existing open-source tool provides a live, production-safe view of Compute SOL % and Memory SOL % for running workloads. Utilyze brings that capability to every team. Systalyze is the layer that uses it to actually close the gap: diagnosing where a deployment sits relative to its Attainable SOL %, choosing the right optimizations, and validating each one against the measured impact.
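As an illustration of what such metrics mean (not Utilyze's implementation), Compute SOL % and Memory SOL % can be sketched roofline-style as achieved throughput relative to hardware peaks. The peak numbers, struct, and function names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class GpuPeaks:
    """Hypothetical hardware peaks for one GPU."""
    peak_flops: float  # FLOP/s at the precision in use
    peak_bw: float     # bytes/s of device memory bandwidth

def sol_percentages(flops_done: float, bytes_moved: float,
                    elapsed_s: float, peaks: GpuPeaks) -> tuple[float, float]:
    """Compute SOL % = achieved FLOP/s as a share of peak FLOP/s;
    Memory SOL % = achieved bytes/s as a share of peak bandwidth."""
    compute_sol = 100.0 * (flops_done / elapsed_s) / peaks.peak_flops
    memory_sol = 100.0 * (bytes_moved / elapsed_s) / peaks.peak_bw
    return compute_sol, memory_sol

# A workload that did 1e14 FLOPs and moved 2e11 bytes in 0.5 s on a GPU
# with hypothetical peaks of 1e15 FLOP/s and 3e12 B/s:
peaks = GpuPeaks(peak_flops=1e15, peak_bw=3e12)
c, m = sol_percentages(1e14, 2e11, 0.5, peaks)
# compute SOL = (1e14 / 0.5) / 1e15 = 20%; memory SOL = (2e11 / 0.5) / 3e12 ≈ 13.3%
```

Note how this differs from the standard utilization metric: a GPU running this workload would report near-100% "utilization" because kernels are always resident, while its actual compute throughput sits at 20% of peak.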


"The gap isn't awareness — engineers who write CUDA kernels know what accurate utilization looks like. The gap is tooling. There has never been a way to see true GPU efficiency continuously, in production, without slowing down the workload."


— Manya Ghobadi, MIT Professor & CEO, Systalyze

From Measurement to Performance: Systalyze

Utilyze shows you where you are. Systalyze closes the gap. Our platform uses the same SOL measurement infrastructure to automatically identify which optimization techniques (e.g., CUDA graph compilation, custom kernel rewrites, parallelism strategy selection, hyperparameter tuning, kernel fusion, zero-copy data paths, kernel bypass, efficient job orchestration, and more) will move your deployment toward its Attainable SOL %. Each optimization is validated by its measured SOL impact.
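To make one of these techniques concrete: kernel fusion improves Memory SOL % by eliminating intermediate round-trips to device memory. A hypothetical back-of-envelope traffic model (illustration only, with made-up sizes) shows why:

```python
def elementwise_traffic_bytes(n_elems: int, dtype_bytes: int, n_kernels: int) -> int:
    """Unfused: each elementwise kernel reads one tensor from device
    memory and writes one tensor back."""
    return n_kernels * 2 * n_elems * dtype_bytes

def fused_traffic_bytes(n_elems: int, dtype_bytes: int) -> int:
    """Fused: one read of the input and one write of the final output;
    intermediates stay in registers."""
    return 2 * n_elems * dtype_bytes

n, b = 1_000_000, 2  # 1M fp16 elements (hypothetical workload)
unfused = elementwise_traffic_bytes(n, b, n_kernels=3)  # e.g., mul -> add -> gelu
fused = fused_traffic_bytes(n, b)
# Fusing three elementwise kernels cuts memory traffic 3x: 12 MB -> 4 MB
```

For memory-bound elementwise chains, this traffic reduction translates almost directly into wall-clock speedup, which is why a measured Memory SOL % is the right signal for deciding when fusion is worth applying.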

Across deployments ranging from sub-billion-parameter inference models to trillion-parameter frontier models, in the cloud or on-premises, default configurations consistently leave 2–10× performance on the table. The right combination of optimizations, guided by accurate measurement, recovers most of it.

What We're Asking From the Community

Utilyze is a free, open-source project released under the Apache 2.0 license.

Run Utilyze on your workloads. Share your numbers. Tell us what you find, especially surprising gaps between what other dashboards report and what Utilyze measures. The more data points the community contributes, the better we can calibrate the Attainable SOL % across different model architectures, hardware generations, and deployment configurations.

To share findings, open a GitHub Discussion in the Utilyze repository with your model, hardware, baseline SOL %, and any optimizations you've tried. We'll be actively monitoring and responding. For deeper collaboration or enterprise deployments, reach out at utilyze@systalyze.com.

The initial release targets NVIDIA hardware. AMD support is on our roadmap; if you're running MI300X or MI325X GPUs and want to collaborate, reach out through the channels below.

Get Utilyze on GitHub · Learn about Systalyze

About Systalyze

Systalyze is an MIT spinout building AI deployment and optimization software that enables enterprises to run training, fine-tuning, inference, and agentic AI workflows with significantly improved efficiency and predictability. The platform delivers substantial gains in performance and cost efficiency while maintaining full data privacy across on-premises, hybrid, and multi-cloud environments. Systalyze is designed to make production AI systems scalable and economically efficient. Utilyze, the open-source GPU monitoring tool described in this article, serves as the measurement foundation of the platform and is freely available.

© 2026 Systalyze. All rights reserved.