Data Capture and Metrics Collection
Accurate data capture is central to OBUX’s value. During each benchmark run, the following data is collected:
Transaction Timings
For every action defined in the OBUX workload, a transaction time is recorded. A transaction consists of a start event and an end event. For example, when the Document application is instructed to open a file, the start event is the initiation of the open command, and the end event is the moment the file is fully loaded and the application becomes idle. The difference between these events, e.g., 2.3 seconds, is logged as the transaction time.
DNKLocalClient records each transaction with identifiers such as the application name, sub-transaction (e.g., B: Open file dialog ready), iteration number, and measured duration; a minimal sketch of this recording pattern follows the category list below.
- Category A timings are typically very short (fractions of a second), such as hovering over a menu item.
- Category B timings are longer and involve content loading or navigation.
- Category S timings reflect CPU, disk, and application-launch durations (e.g., 0.8 seconds for a single-core CPU high-load task, or 0.05 seconds for writing a small file).
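The internal recording API of DNKLocalClient is not shown in this document; the following is a minimal sketch, in Python, of the start-event/end-event pattern described above. The class and method names (TransactionRecord, TransactionTimer, measure) are hypothetical illustrations, not the actual OBUX interfaces.

```python
import time
from dataclasses import dataclass

@dataclass
class TransactionRecord:
    # Hypothetical record layout mirroring the fields described above.
    iteration: int
    category: str          # "A", "B", or "S"
    transaction: str       # e.g., "OBUX Document"
    sub_transaction: str   # e.g., "B: Open file dialog ready"
    seconds: float

class TransactionTimer:
    """Minimal sketch of a start/end transaction timer (not the real DNKLocalClient API)."""

    def __init__(self, sink):
        self.sink = sink  # any callable that accepts a TransactionRecord

    def measure(self, iteration, category, transaction, sub_transaction, action):
        start = time.perf_counter()            # start event: the action is initiated
        action()                               # e.g., issue the open command and wait for idle
        elapsed = time.perf_counter() - start  # end event: the application is idle again
        self.sink(TransactionRecord(iteration, category, transaction,
                                    sub_transaction, round(elapsed, 3)))

# Example usage: time a stand-in action and print the resulting record.
timer = TransactionTimer(sink=print)
timer.measure(1, "B", "OBUX Document", "B: Open file dialog ready",
              action=lambda: time.sleep(0.2))  # placeholder for the real UI action
```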
System Performance Metrics
The LoadGen Insight component captures system-level performance data throughout the benchmark. These metrics are not used directly in the scoring model but provide important diagnostic insight. Collected metrics include:
- CPU utilization (per core and average)
- Memory usage
- Disk I/O throughput and queue depth
- Graphics metrics (e.g., frame or render times, when applicable)
- Network usage (particularly relevant during OBUX Web or video scenarios)
Metrics are sampled at fixed intervals (e.g., every five seconds). After the test, these data points can be correlated with transaction timings. For example, if CPU usage reaches 100% during a slow document-load iteration, this may indicate CPU contention as the cause of degraded performance.
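How LoadGen Insight samples these counters internally is not documented here; the sketch below shows one way to approximate fixed-interval sampling in Python, assuming the third-party psutil package is available (an assumption, not part of OBUX).

```python
import csv
import time

import psutil  # assumed dependency for system counters

SAMPLE_INTERVAL_S = 5  # fixed sampling interval, as described above

def sample_system_metrics(duration_s: int, out_path: str = "metrics.csv") -> None:
    """Record CPU, memory, and disk counters every SAMPLE_INTERVAL_S seconds."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_percent", "mem_percent",
                         "disk_read_bytes", "disk_write_bytes"])
        end = time.time() + duration_s
        while time.time() < end:
            disk = psutil.disk_io_counters()
            writer.writerow([
                round(time.time(), 1),
                psutil.cpu_percent(interval=None),   # average across all cores
                psutil.virtual_memory().percent,
                disk.read_bytes,
                disk.write_bytes,
            ])
            time.sleep(SAMPLE_INTERVAL_S)
```

Samples written this way carry timestamps, so they can later be joined with transaction timings to spot correlations such as the CPU-contention example above.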
System Information and Result Export
OBUX also collects basic environmental details, either entered by the test operator or gathered automatically. These can include the number of vCPUs and amount of RAM assigned to a VM, the CPU model, GPU presence, OS version, and other system characteristics. This information is logged alongside the benchmark results and can be used for comparison or filtering.
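The exact collection mechanism is not specified; a simple approximation of gathering comparable details on the test machine, using Python's standard library plus the assumed psutil dependency, might look like this:

```python
import json
import os
import platform

import psutil  # assumed dependency; used here only for total RAM

def collect_environment() -> dict:
    """Gather basic system characteristics to log alongside the benchmark results."""
    return {
        "vcpus": os.cpu_count(),
        "ram_gb": round(psutil.virtual_memory().total / 1024**3, 1),
        "cpu_model": platform.processor(),
        "os": f"{platform.system()} {platform.release()}",
    }

print(json.dumps(collect_environment(), indent=2))
```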
After data collection, all results are stored in a local CSV file. Each row contains fields such as: Iteration, Category, TransactionName, SubTransactionName, Time (seconds)
1, A, OBUX Document, A: Hover Open, 0.12
1, B, OBUX Document, B: Open file dialog ready, 1.98
...
This consistent structure allows the Analysis Engine to efficiently group, filter, and process timing data by category. If a transaction fails to complete, for example because an application freezes, the CSV may contain an extremely large timing value or no entry at all. OBUX handles these situations by either applying a penalty for missing data or ignoring the affected transaction, depending on configuration. For example, if “Page 5 found” is missing because the PDF application failed to load past page 4, that transaction may be treated as a worst-case value.
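The Analysis Engine's internals are not shown here; as a rough sketch, grouping the CSV by category and substituting a worst-case value for missing or unparsable entries could be done as follows (the penalty value and the assumption that the file has no header row are both hypothetical):

```python
import csv
from collections import defaultdict

PENALTY_SECONDS = 60.0  # hypothetical worst-case value for missing transactions

FIELDS = ["Iteration", "Category", "TransactionName", "SubTransactionName", "Time"]

def load_timings(path: str) -> dict:
    """Group measured times by category, e.g. {"A": [0.12, ...], "B": [1.98, ...]}."""
    by_category = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f, fieldnames=FIELDS):  # assumes no header row in the file
            try:
                seconds = float(row["Time"])
            except (TypeError, ValueError):
                seconds = PENALTY_SECONDS  # missing or malformed entry: apply the penalty
            by_category[row["Category"].strip()].append(seconds)
    return dict(by_category)
```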
If result sharing is enabled, DNKLocalClient also submits the benchmark results to the OBUX community database, typically in JSON or line-protocol format to an InfluxDB or similar backend. All shared data is anonymized to remove personal or environment-specific identifiers. These aggregated datasets help build a community-wide performance baseline used for normalization and comparison. Importantly, local scoring is still computed immediately and does not require internet connectivity.
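The schema used for community submissions is not documented beyond “JSON or line protocol”; purely as an illustration, one anonymized transaction timing could be encoded as an InfluxDB line-protocol string as shown below (the measurement name obux_transaction and the tag keys are hypothetical):

```python
import time

def to_line_protocol(record: dict) -> str:
    """Encode one transaction timing as an InfluxDB line-protocol string."""
    # Line protocol requires spaces in tag values to be escaped.
    transaction = record["transaction"].replace(" ", "\\ ")
    tags = f'category={record["category"]},transaction={transaction}'
    fields = f'seconds={record["seconds"]}'
    return f"obux_transaction,{tags} {fields} {time.time_ns()}"

print(to_line_protocol({"category": "B", "transaction": "OBUX Document", "seconds": 1.98}))
```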
With the dataset captured, the next step is calculating the performance scores. The following section explains the OBUX scoring methodology, including task weighting and the formula for converting raw timings into normalized category scores and ratings.