How to Run Cross Platform Disk Test (CPDT) on Windows, macOS & Linux

Cross Platform Disk Test (CPDT) is a lightweight, open-source utility for measuring disk performance consistently across Windows, macOS, and Linux. This guide explains what CPDT measures, how it works, how to install it on each platform, example commands, how to interpret results, and troubleshooting tips.


What CPDT measures and why it’s useful

CPDT focuses on common, comparable storage metrics that matter for real-world use:

  • Sequential read/write throughput — large contiguous transfers (good for media files, backups).
  • Random read/write IOPS — small, scattered transfers (important for databases, OS responsiveness).
  • Latency — response time for individual operations (affects perceived speed).

Because CPDT uses the same test logic and file patterns across platforms, it’s useful for comparing drives, filesystems, or storage configurations (SATA vs NVMe, HDD vs SSD, local vs network).
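Throughput and IOPS are two views of the same activity, linked by block size: throughput (MB/s) = IOPS × block size (KB) / 1024. A quick sanity check in the shell, with illustrative numbers:

```shell
# Relate IOPS and block size to throughput:
#   throughput (MB/s) = IOPS * block size (KB) / 1024
# Example: 25,000 IOPS at 4 KB blocks is roughly 100 MB/s.
iops=25000
block_kb=4
echo "$(( iops * block_kb / 1024 )) MB/s"
```

This is why a drive with modest MB/s can still feel fast (high IOPS at small blocks) and vice versa.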


Safety and prerequisites

  • CPDT writes only to temporary test files; it does not modify partitions or existing data. Still, back up important data before running any disk benchmark.
  • Ensure you have enough free disk space for the test files (see recommended sizes below).
  • Run tests on the drive you want to measure (not the system/boot drive unless you know what you’re doing).
  • Close unrelated applications that might affect disk activity.
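The free-space check above can be scripted before each run; a minimal sketch (TEST_DIR and NEED_GB are placeholders to adjust for your setup):

```shell
# Verify the target location has enough free space for the test file.
# TEST_DIR and NEED_GB are placeholders; adjust for your setup.
TEST_DIR="${TEST_DIR:-/tmp}"
NEED_GB="${NEED_GB:-4}"
avail_kb=$(df -Pk "$TEST_DIR" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge $(( NEED_GB * 1024 * 1024 )) ]; then
  echo "OK: room for a ${NEED_GB} GB test file in $TEST_DIR"
else
  echo "Not enough space in $TEST_DIR for ${NEED_GB} GB" >&2
fi
```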

Installation

Windows
  1. Download the latest CPDT release (ZIP or EXE) from the project’s releases page.
  2. Extract to a folder (if ZIP).
  3. Optionally add the folder to PATH or run from the extracted directory.

Alternative: If CPDT provides a Windows installer, run it and accept prompts. You may need Administrator privileges for some advanced features.

macOS
  1. If a universal binary or .dmg is provided, download and open it; drag the CPDT binary to /usr/local/bin or Applications as instructed.
  2. Alternatively use Homebrew (if a formula exists):
    
    brew install cpdt 
  3. Make the binary executable if necessary:
    
    chmod +x /usr/local/bin/cpdt 
Linux
  1. Use your distribution package if available (rare), or download the statically linked binary or tarball from releases.
  2. Extract and move the binary to a directory in your PATH, e.g.:
    
    sudo mv cpdt /usr/local/bin/
    sudo chmod +x /usr/local/bin/cpdt
  3. For source builds:
    
    git clone https://example.com/cpdt.git
    cd cpdt
    make
    sudo make install
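After any of the install routes above, confirm the binary is actually on your PATH. A generic check (works for any command name, not just cpdt):

```shell
# Report whether a command is installed and where it lives.
check_installed() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $(command -v "$1")"
  else
    echo "missing: $1" >&2
    return 1
  fi
}
check_installed cpdt || echo "install cpdt first (see the steps above)"
```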

Choosing a test file size

Choose sizes relative to your drive and RAM:

  • Small test (cache-sensitive): 512 MB — highlights caching effects.
  • Medium (recommended general): 4 GB — balances runtime and representativeness.
  • Large (exhaust caches, measure sustained throughput): 32–64 GB for HDDs, 8–16 GB for SSDs.

If testing a partition or removable media, keep the test size below the available free space.
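These guidelines can be combined with a free-space cap; a sketch using a hypothetical helper (sizes in MB, capped at half the free space so the test never fills the volume):

```shell
# Pick a test size: the requested size, capped at half the free space
# on the target path. suggest_size_mb is a hypothetical helper.
suggest_size_mb() {
  want_mb=$2
  free_mb=$(( $(df -Pk "$1" | awk 'NR==2 {print $4}') / 1024 ))
  cap_mb=$(( free_mb / 2 ))
  if [ "$want_mb" -le "$cap_mb" ]; then echo "$want_mb"; else echo "$cap_mb"; fi
}
suggest_size_mb /tmp 4096   # 4 GB if it fits, otherwise half the free space
```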


Basic command examples

Note: replace cpdt with the actual binary name/command and /path/to/testfile with a location on the target drive.

  • Sequential write (create large file, measure write throughput):

    cpdt --test seq-write --file /path/to/testfile --size 4G 
  • Sequential read (read previously created file):

    cpdt --test seq-read --file /path/to/testfile --size 4G 
  • Random read/write (small block I/O to measure IOPS):

    cpdt --test rand-read --file /path/to/testfile --size 4G --block-size 4K --ops 100000
    cpdt --test rand-write --file /path/to/testfile --size 4G --block-size 4K --ops 100000
  • Combined mixed workload:

    cpdt --test mixed --file /path/to/testfile --size 8G --read-percent 70 --block-size 8K --duration 60 
  • Clear test file when done:

    rm /path/to/testfile 

Command-line flags vary by CPDT version; run cpdt --help or cpdt -h to see the full list of options.
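The individual commands above chain naturally into one script. A sketch that falls back to a dry run (printing the commands) when cpdt is not on the PATH, since the flag names are illustrative and may differ in your build:

```shell
#!/bin/sh
# Run the example suite end to end. The cpdt flags mirror the
# illustrative examples above; confirm them against `cpdt --help`.
CPDT="${CPDT:-cpdt}"
FILE="${FILE:-/tmp/cpdt-testfile}"
SIZE="${SIZE:-4G}"

# Dry-run fallback: just print the commands if cpdt is not installed.
command -v "$CPDT" >/dev/null 2>&1 || CPDT="echo cpdt"

$CPDT --test seq-write  --file "$FILE" --size "$SIZE"
$CPDT --test seq-read   --file "$FILE" --size "$SIZE"
$CPDT --test rand-read  --file "$FILE" --size "$SIZE" --block-size 4K --ops 100000
$CPDT --test rand-write --file "$FILE" --size "$SIZE" --block-size 4K --ops 100000
rm -f "$FILE"    # clean up the test file
```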


Interpreting results

CPDT typically reports:

  • Throughput (MB/s) for sequential tests — higher is better for large-file workloads.
  • IOPS (operations per second) for random tests — critical for databases and small-file workloads.
  • Average and percentile latencies (ms) — lower is better; watch 95th/99th percentiles for worst-case behavior.
  • CPU usage during test — high CPU may indicate software or encryption overhead.

Quick heuristics:

  • Consumer SSDs: 500+ MB/s sequential, 10k–100k+ IOPS (depending on workload).
  • NVMe SSDs: 1000+ MB/s up to several GB/s sequential, 100k–1M IOPS for small random.
  • HDDs: 50–250 MB/s sequential, <500 IOPS random.

Cross-platform considerations

  • Filesystem differences: NTFS, APFS, ext4, and others behave differently (journaling, caching) and affect results.
  • OS caching: Windows and macOS aggressively cache; use flags to bypass OS cache (if CPDT supports O_DIRECT or similar) for raw device measurement.
  • Permissions: macOS may require full disk access or permission to access external volumes.
  • Elevation: some operations (direct disk access) may need Administrator/root privileges.

Example test plan for comparison

  1. Choose test file size (4 GB).
  2. On each OS, place file on same type of drive/partition.
  3. Run: sequential write, sequential read, random read (4K), random write (4K), mixed 70/30 for 60 s.
  4. Repeat 3 times and take the median for each metric.
  5. Record OS, filesystem, driver, and whether tests used O_DIRECT / raw mode.
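Step 4 (median of three runs) is easy to script; a sketch with a small helper, where the MB/s numbers stand in for real run results:

```shell
# Median of three numeric results (e.g. MB/s from three runs).
median3() {
  printf '%s\n%s\n%s\n' "$1" "$2" "$3" | sort -n | sed -n '2p'
}
median3 512 498 505   # prints 505, the median of the three runs
```

Using the median rather than the mean keeps one outlier run (a background task, a thermal dip) from skewing the comparison.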

Troubleshooting

  • Tests too fast / results unrealistically high: likely measuring cache. Use larger file sizes or O_DIRECT.
  • Permission denied: run as Administrator/root or grant disk access on macOS.
  • High variance: close background apps, disable sleep, ensure stable power.
  • Test hangs or crashes: check disk health (SMART), run fewer concurrent ops, update CPDT to latest version.

Example output explained

Typical CPDT output might show lines like:

  • Seq-write: 1.2 GB/s, Avg latency 1.8 ms
  • Rand-4K-read: 120k IOPS, P99 latency 4.2 ms
    Interpret these by matching the metric to your workload (large-file copy vs database).
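Lines in that shape reduce easily to CSV for logging. A sketch that assumes the "name: value, extra" format shown above; adapt the parsing to your CPDT version's actual output:

```shell
# Turn "Metric: value, extra" lines into "Metric,value" CSV rows.
printf '%s\n' \
  'Seq-write: 1.2 GB/s, Avg latency 1.8 ms' \
  'Rand-4K-read: 120k IOPS, P99 latency 4.2 ms' |
awk -F': ' '{ split($2, v, ","); print $1 "," v[1] }'
```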

Final notes

  • Use consistent test parameters across platforms for fair comparisons.
  • If you need to benchmark mounted raw devices or encrypted volumes, document those layers — they influence results.
  • For repeated benchmarking, automate runs with a small script and log results to CSV for analysis.

Platform-specific wrapper scripts (PowerShell for Windows, Bash for macOS/Linux) can be tailored to your drive type and file sizes to make repeated runs easier.
