Open Source · On-Device AI · Multi-GPU

Diagnose. Analyze.
Tune smarter, faster.

AI-powered diagnostic analyzer for drone flight logs and car engine telemetry. Auto-detects your data, visualizes discrepancies, and provides intelligent tuning recommendations.

Windows & Linux · NVIDIA · AMD · Intel · CPU Fallback

Download

Get TuneAI

Choose your platform. No install needed — just download and run.

Download for Linux · Windows — Coming Soon

Linux: 422 MB · Standalone binary · No Python required

2 Analysis Modes · 10+ Diagnostic Channels · 4 GPU Backends
Application Preview

A modern dark-theme desktop app with real-time signal analysis, interactive plots, and an AI chat assistant.

AI Diagnostic Scanner — diagnostic_scanner.py
AI DIAGNOSTIC SCANNER
Drone Blackbox  |  Car Engine OBD-II  |  AI-Powered Analysis
Channel | Max Error | Mean Error | Discrepancy | Regions
Roll    | 0.3842    | 0.0521     | 8.2%        | 12
Pitch   | 0.4156    | 0.0687     | 14.1%       | 18
Yaw     | 0.1923    | 0.0312     | 4.7%        | 6
AI Assistant
What's the worst axis?
Pitch has the highest max error (0.4156) and 14.1% discrepancy across 18 regions. This suggests P gain is too high, causing overshoot. Try reducing Pitch P by 5-10% and increasing D by 3-5% to dampen oscillation.
PID tips for roll?
Roll looks decent (8.2% discrepancy). The 12 regions are mostly at high stick inputs. Consider increasing Roll I by 5% for better tracking at sustained deflections.

Simulated UI — the actual application uses tkinter with embedded matplotlib plots and a Phi-3 AI assistant

Features

Built for Power Users

Everything you need to diagnose, analyze, and tune — with AI assistance.

Auto-Detection

Scans column names to auto-identify drone BBL or car OBD-II data. Selects the right diagnostic profile with confidence scoring.
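As a rough illustration of how column-name matching with confidence scoring might work, here is a minimal sketch. The keyword sets, function name, and scoring formula are assumptions for illustration, not TuneAI's actual implementation:

```python
# Hypothetical sketch of column-name auto-detection with confidence scoring.
# Keyword sets and the scoring formula are illustrative assumptions.
DRONE_KEYWORDS = {"setpoint", "gyro", "roll", "pitch", "yaw", "motor"}
OBD2_KEYWORDS = {"rpm", "stft", "ltft", "coolant", "throttle", "maf", "o2"}

def detect_profile(columns):
    """Return (profile_name, confidence) based on column-name matches."""
    cols = {c.strip().lower() for c in columns}

    def score(keywords):
        # Fraction of domain keywords that appear as substrings of any column.
        hits = sum(1 for kw in keywords if any(kw in c for c in cols))
        return hits / len(keywords)

    drone, car = score(DRONE_KEYWORDS), score(OBD2_KEYWORDS)
    if max(drone, car) == 0:
        return ("unknown", 0.0)
    return ("drone", drone) if drone >= car else ("car_engine", car)
```

A substring match keeps the scorer tolerant of vendor-specific prefixes like `gyroADC[0]` or `Coolant Temp (°C)`.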

Multi-Channel Analysis

Normalizes input/output signal pairs, computes absolute error, and finds contiguous discrepancy regions above your threshold.

On-Device AI

Microsoft Phi-3-mini GGUF model (~2.5 GB). Runs 100% locally via llama-cpp-python. No cloud, no API keys, no data leaves your machine.

Multi-GPU

Auto-detects NVIDIA (CUDA), AMD (ROCm/Vulkan), Intel (Vulkan). Falls back to CPU automatically if no GPU is found.

Dark Theme UI

Modern dark palette with card panels, gradient header, styled buttons, and chat bubbles. DPI-aware on HiDPI displays.

Plugin Architecture

Abstract DiagnosticProfile base class. Add new data domains (marine, industrial) by implementing detect, compute, and prompt methods.

Discrepancy Detection

Configurable threshold slider. Real-time re-analysis when adjusted. Red overlay highlights problem regions on signal plots.

Cross-Platform

Pre-built binaries for Windows (.exe) and Linux. Build scripts with PyInstaller. Tested on Windows 11 and Ubuntu 22.04.

Analysis Modes

Two Domains. One Tool.

TuneAI auto-detects your data format and applies the correct diagnostic profile with confidence scoring.

Drone BBL Analysis

Parses Betaflight Blackbox .bbl files via the orangebox library, or pre-converted .csv files. Compares setpoint (pilot input) vs gyro (actual response) per axis.

Identifies oscillations, overshoots, and sluggish response patterns. AI suggests specific PID adjustments — P, I, D gains, filter settings, and propwash mitigation strategies.
Roll Axis · Pitch Axis · Yaw Axis · Setpoint vs Gyro · PID Tuning · Propwash Oscillation Detection · Motor Output

Car Engine Analysis

Reads OBD-II CSV exports from Torque Pro, ScanMaster, or similar tools. Analyzes up to 7 diagnostic channels with domain-specific normalization.

Checks fuel trims (STFT/LTFT outside +/-10% is flagged), RPM stability, coolant temp anomalies, O2 sensor switching, timing advance for knock retard, and throttle-to-load correlation.
Throttle vs RPM · STFT · LTFT · Coolant Temp · O2 Sensor · Timing Advance · Engine Load · MAF
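The fuel-trim check above can be sketched as a simple rule. The function and message wording are illustrative assumptions; the ±10% limit comes from the description above:

```python
# Sketch of the fuel-trim check described above: STFT/LTFT outside +/-10%
# is flagged. Function name and message wording are illustrative.
def check_fuel_trims(stft_pct, ltft_pct, limit=10.0):
    """Return a list of warnings for short/long-term fuel trim excursions."""
    warnings = []
    if abs(stft_pct) > limit:
        warnings.append(f"STFT {stft_pct:+.1f}% outside +/-{limit:.0f}%")
    if abs(ltft_pct) > limit:
        warnings.append(f"LTFT {ltft_pct:+.1f}% outside +/-{limit:.0f}%")
    if stft_pct > limit and ltft_pct > limit:
        # Large positive trims mean the ECU is adding fuel: a lean condition.
        warnings.append("Both trims positive: possible lean condition")
    return warnings
```
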
Workflow

How It Works

From raw log file to actionable tuning advice in four steps.

Open a Log File

Click "Open Log File" and select a .bbl, .bfl, or .csv file. Supports multiple encodings (UTF-8, Latin-1, CP1252).

Auto-Detect & Analyze

Column-name matching identifies the data type. Signals are normalized, errors computed, and discrepancy regions extracted above your threshold.

Visualize Results

Multi-channel matplotlib plots overlay input vs output with red discrepancy highlights. Summary table shows max/mean error, std dev, and region counts.
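A minimal sketch of this overlay style, using a headless backend and made-up signals and region bounds (the app embeds TkAgg and derives regions from the threshold scan):

```python
# Illustrative sketch of the overlay plot described above: input vs output
# with red highlights over discrepancy regions. Signals and region bounds
# are synthetic examples.
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch; the app uses TkAgg
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 1, 500)
setpoint = np.sin(2 * np.pi * 3 * t)
# Inject a late oscillation so the "gyro" trace diverges from the setpoint.
gyro = setpoint + 0.3 * np.sin(2 * np.pi * 40 * t) * (t > 0.6)

fig, ax = plt.subplots()
ax.plot(t, setpoint, label="setpoint")
ax.plot(t, gyro, label="gyro")
for start, end in [(0.62, 0.75), (0.82, 0.95)]:  # example discrepancy regions
    ax.axvspan(start, end, color="red", alpha=0.2)
ax.set_title("Roll: setpoint vs gyro")
ax.legend()
fig.savefig("roll_overlay.png")
```
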

Chat with AI

Ask natural language questions. The AI receives your computed stats as context and provides domain-specific recommendations referencing your data.


Under the Hood

Architecture

Modular design with pluggable profiles and GPU-accelerated AI inference.

GPU Detection

detect_gpu() probes hardware at launch. Checks nvidia-smi for NVIDIA, rocm-smi or WMI for AMD, WMI for Intel. Caches result. Returns vendor, driver version, VRAM, and backend type.
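A hedged sketch of the probe-and-fall-back idea (the real detect_gpu() also queries WMI on Windows and returns vendor, driver version, and VRAM, which this sketch omits):

```python
# Sketch of launch-time GPU probing: try vendor CLIs, fall back to CPU.
# Simplified assumption; the real detect_gpu() also checks WMI and reports
# vendor/driver/VRAM details.
import shutil
import subprocess
from functools import lru_cache

@lru_cache(maxsize=1)  # cache the result, as the text describes
def detect_backend():
    """Return 'cuda', 'rocm', or 'cpu' based on available vendor tools."""
    for tool, backend in (("nvidia-smi", "cuda"), ("rocm-smi", "rocm")):
        if shutil.which(tool) is None:
            continue  # tool not on PATH, try the next vendor
        try:
            subprocess.run([tool], capture_output=True, timeout=5, check=True)
            return backend
        except (subprocess.SubprocessError, OSError):
            continue  # tool present but failed; treat as no usable GPU
    return "cpu"
```
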

Diagnostic Profiles

DiagnosticProfile ABC with detect(), compute_channels(), get_ai_system_prompt(), and get_summary_text(). Two implementations: DroneProfile and CarEngineProfile.
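The plugin interface might look like the sketch below. Method names come from the text; signatures and the toy MarineProfile are illustrative assumptions:

```python
# Sketch of the DiagnosticProfile plugin interface described above, with a
# toy third-domain implementation. Signatures are inferred, not copied.
from abc import ABC, abstractmethod

class DiagnosticProfile(ABC):
    @abstractmethod
    def detect(self, columns):
        """Return a confidence score in [0, 1] for this profile."""

    @abstractmethod
    def compute_channels(self, df):
        """Return per-channel stats for the loaded DataFrame."""

    @abstractmethod
    def get_ai_system_prompt(self):
        """Return the domain-specific system prompt for the AI assistant."""

    @abstractmethod
    def get_summary_text(self, channels):
        """Return a human-readable summary of the analysis."""

class MarineProfile(DiagnosticProfile):
    """Toy example of adding a new domain, as the plugin section suggests."""
    def detect(self, columns):
        return 1.0 if any("rudder" in c.lower() for c in columns) else 0.0
    def compute_channels(self, df):
        return {}
    def get_ai_system_prompt(self):
        return "You are a marine telemetry tuning assistant."
    def get_summary_text(self, channels):
        return f"{len(channels)} channel(s) analyzed."
```
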

AI Model Manager

AIModelManager handles download (~2.5 GB Phi-3-mini GGUF), GPU/CPU loading via llama-cpp-python with n_gpu_layers=-1 for full offload, and inference with 4096 token context.
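A sketch of the loading call and of Phi-3's chat prompt format. The llama-cpp-python arguments mirror the text (n_gpu_layers=-1, 4096-token context); the model path and helper names are illustrative assumptions:

```python
# Sketch of on-device loading and Phi-3 prompt formatting. Model path and
# helper names are illustrative; constructor args follow the text.
def format_phi3_prompt(system, user):
    """Build a Phi-3-instruct chat prompt from system context + user question."""
    return (f"<|system|>\n{system}<|end|>\n"
            f"<|user|>\n{user}<|end|>\n"
            f"<|assistant|>\n")

def load_model(path="phi-3-mini-4k-instruct-q4.gguf", gpu=True):
    from llama_cpp import Llama  # optional dependency
    return Llama(
        model_path=path,
        n_ctx=4096,                      # 4096-token context window
        n_gpu_layers=-1 if gpu else 0,   # -1 = offload all layers to GPU
    )
```
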

Channel Builder

build_channel() normalizes signals (joint or independent), computes absolute error, finds contiguous discrepancy regions via threshold scan, and returns a stats dict.
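The normalization and threshold scan can be sketched like this. Function names mirror the text; the internals (joint min-max normalization, run detection via a diff over the boolean mask) are illustrative assumptions:

```python
# Sketch of the channel build described above: jointly normalize a signal
# pair, compute absolute error, and collect contiguous above-threshold runs.
import numpy as np

def find_regions(error, threshold):
    """Return (start, end) index pairs of contiguous runs above threshold."""
    above = np.concatenate(([False], error > threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2], edges[1::2] - 1  # rise/fall edge pairs
    return list(zip(starts.tolist(), ends.tolist()))

def build_channel(inp, out, threshold=0.15):
    """Jointly normalize input/output, compute error, and return stats."""
    lo, hi = min(inp.min(), out.min()), max(inp.max(), out.max())
    span = (hi - lo) or 1.0  # guard against a constant signal pair
    error = np.abs((inp - lo) / span - (out - lo) / span)
    return {
        "max_error": float(error.max()),
        "mean_error": float(error.mean()),
        "std": float(error.std()),
        "regions": find_regions(error, threshold),
    }
```

Joint normalization keeps input and output on the same scale, so the error directly measures how far the response deviates from the command.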

Data Store

DiagnosticDataStore holds the DataFrame, channels, active profile, and threshold. build_ai_context() serializes everything into a prompt string for inference.
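The serialization step might look like this. Keys and line formatting are assumptions for illustration:

```python
# Illustrative serialization of computed stats into an AI context string,
# as build_ai_context() is described. Keys and formatting are assumptions.
def build_ai_context(profile_name, threshold, channels):
    """Flatten profile, threshold, and per-channel stats into a prompt."""
    lines = [f"Profile: {profile_name}", f"Threshold: {threshold:.2f}", ""]
    for name, stats in channels.items():
        lines.append(
            f"{name}: max_error={stats['max_error']:.4f}, "
            f"mean_error={stats['mean_error']:.4f}, "
            f"regions={stats['regions']}"
        )
    return "\n".join(lines)
```
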

GUI Layer

App(tk.Tk) with custom widgets: CardFrame (bordered panels), StyledButton (canvas-drawn rounded buttons), StatusDot. Matplotlib embedded via FigureCanvasTkAgg.

AI Inference Pipeline

Load File: .bbl / .csv parsed to a DataFrame
→ Compute Channels: normalize, compute errors, extract discrepancy regions
→ Build Context: profile prompt + stats serialized for the AI
→ Phi-3 Inference: on-device GGUF, GPU or CPU

Hardware

GPU Support Matrix

Auto-detects your GPU and selects the fastest available backend. No manual configuration.

GPU | Detection | Backend | Notes
NVIDIA GTX / RTX | nvidia-smi | CUDA | Best performance. Install the CUDA wheel for llama-cpp-python.
AMD Radeon RX | rocm-smi / WMI | ROCm / Vulkan | ROCm on Linux, Vulkan on Windows. Vulkan SDK required.
Intel Arc / UHD / Iris | WMI | Vulkan | Vulkan SDK required. Arc discrete GPUs recommended.
No GPU / Unsupported | n/a | CPU | Always works. Slower AI inference (~2-5x vs GPU).
Dependencies

Requirements

Minimal dependencies. Optional packages unlock AI features.

Package | Version | Status | Purpose
Python | 3.10+ | Required | Runtime
numpy | ≥ 1.24 | Required | Numerical analysis & signal processing
pandas | ≥ 2.0 | Required | Data loading, processing & CSV/BBL parsing
matplotlib | ≥ 3.7 | Required | Signal plots with TkAgg backend
plotly | ≥ 5.15 | Required | Interactive visualizations
orangebox | ≥ 0.4.0 | Optional | Betaflight Blackbox .bbl file parsing
llama-cpp-python | ≥ 0.2.0 | Optional | On-device AI chat assistant (Phi-3 GGUF)
FAQ

Frequently Asked Questions

Common questions about TuneAI.

Does TuneAI send my data to the cloud?
No. All analysis runs locally on your machine. The AI model (Phi-3-mini) runs entirely on-device via llama-cpp-python. No data leaves your computer. The only network request is the one-time model download (~2.5 GB) on first use.
What file formats are supported?
Betaflight Blackbox logs (.bbl, .bfl) parsed via the orangebox library, and CSV files (.csv) with auto-detection of encoding (UTF-8, Latin-1, CP1252). OBD-II exports from Torque Pro, ScanMaster, and similar tools are supported.
Do I need a GPU?
No. TuneAI works on CPU-only systems. A GPU (NVIDIA, AMD, or Intel) accelerates AI inference by 2-5x but is entirely optional. The scanner and plotting features work without any AI or GPU.
Can I add support for new data types?
Yes. Implement the DiagnosticProfile abstract base class with four methods: detect() for confidence scoring, compute_channels() for analysis, get_ai_system_prompt() for AI context, and get_summary_text() for display. Register your profile in the PROFILES list.
What AI model does it use?
Microsoft Phi-3-mini-4k-instruct in GGUF format (Q4 quantization, ~2.5 GB). It's a compact but capable model that runs well on consumer hardware. The model is loaded via llama-cpp-python with full GPU layer offloading when a supported GPU is detected.
What was this tested on?
Developed and tested on an NVIDIA GeForce RTX 4080 SUPER (16 GB VRAM) running Windows 11, and deployed on Ubuntu 22.04 (DigitalOcean). Python 3.10+ is required.