
EdgeTrack is an open-source multi-view stereo tracking system built for robotics, VR, teleoperation, and spatial interaction. It combines RAW capture, precise timing, hardware synchronization, and host-side fusion to deliver stable and reproducible 3D motion data.
Instead of relying on opaque camera pipelines or AI-only estimation, EdgeTrack uses metric stereo geometry as its foundation. The system is modular, hardware-agnostic, and designed for deterministic real-world tracking workflows.

A demo video will be available on YouTube soon.
Born out of frustration with the limitations of typical off-the-shelf systems, EdgeTrack introduces a tracking architecture designed for determinism, robustness, transparency, and cost efficiency.
The system is built around a RAW-first capture pipeline, hardware synchronization, and host-side fusion. By avoiding ISP-processed or compressed streams, timing and processing remain predictable and fully controllable.
On the host side, CoreFusion combines multiple rigs, rejects outliers, smooths motion, and processes ROI-based stereo data to produce stable outputs such as metric 3D keypoints and structured motion signals for higher software layers.
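To make the fusion stage concrete, here is a minimal host-side sketch of the outlier-rejection and smoothing steps described above. All names, thresholds, and data shapes are illustrative assumptions, not CoreFusion's actual API: each rig reports the same 3D keypoint, observations far from the per-axis median are rejected, the survivors are averaged, and consecutive fused points are exponentially smoothed.

```python
import statistics

# Hypothetical sketch, not CoreFusion's real interface.
# observations: one (x, y, z) tuple in metres per rig.

def fuse_keypoint(observations, threshold=0.05):
    """Reject per-axis outliers against the median, average the survivors."""
    medians = [statistics.median(axis) for axis in zip(*observations)]
    inliers = [p for p in observations
               if all(abs(c - m) <= threshold for c, m in zip(p, medians))]
    survivors = inliers or observations  # fall back if everything was rejected
    return tuple(sum(axis) / len(survivors) for axis in zip(*survivors))

def smooth(prev, current, alpha=0.3):
    """Exponential smoothing between consecutive fused keypoints."""
    return tuple(alpha * c + (1 - alpha) * p for p, c in zip(prev, current))

# Three rigs observe the same keypoint; the third rig is an outlier on x.
obs = [(0.10, 0.20, 1.00), (0.11, 0.19, 1.01), (0.90, 0.20, 1.00)]
fused = fuse_keypoint(obs)
```

The median-based gate is deliberately simple; a real fusion layer would also weight rigs by confidence and track velocity, but the structure (reject, average, smooth) is the same.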
In multi-rig setups, phase-shifted timing allows different rigs to operate in offset capture windows, reducing interference and improving temporal coverage. A short concept example is shown below:
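A minimal sketch of the phase-shifting idea, with assumed names and units (this is not TDMStrobe's actual API): spread each rig's trigger time evenly across one frame period so capture windows do not overlap.

```python
# Hypothetical illustration of phase-shifted triggering, not real TDMStrobe code.

def phase_offsets_us(num_rigs: int, frame_period_us: int) -> list[int]:
    """Spread rig trigger times evenly across one frame period (microseconds)."""
    return [round(i * frame_period_us / num_rigs) for i in range(num_rigs)]

# Example: 3 rigs at ~120 FPS (frame period ~= 8333 us)
offsets = phase_offsets_us(3, 8333)
print(offsets)  # [0, 2778, 5555]
```

Each rig then fires `offsets[i]` microseconds into every frame period, so active-illumination or strobe windows from different rigs land in disjoint time slots.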
EdgeTrack is designed to remain transparent, modular, and predictable where many typical tracking stacks become difficult to control or extend.
| Other Systems | EdgeTrack |
|---|---|
| ISP-processed or compressed streams, limited control | RAW-first pipeline + deterministic control |
| Single-rig focus or unstable multi-camera setups | Multi-rig capable and fusion-ready |
| Often limited to 15–30 FPS in practical setups | Scales to high FPS with the right hardware and tuning |
| Timing jitter from USB, buffering, or encoding | Ethernet-first + hard sync + stable timing modes |
| Dense depth everywhere with heavy compute cost | ROI-first processing for efficiency; dense depth optional |
| Depth inferred mainly by AI, often inconsistent | Metric stereo geometry as foundation; AI assist optional |
| Closed ecosystems and vendor lock-in | Open source + modular components |
Timing → Tracking → Fusion → Your Application

TDMStrobe
Precise timing and hardware synchronization for multi-rig systems, including triggering and phase control.

EdgeTrack
Edge-side stereo capture with RAW-first preprocessing, full pipeline control, and ROI-based stereo processing.

CoreFusion
Host-side fusion combining multiple stereo rigs, rejecting outliers, smoothing motion, and producing stable metric 3D outputs.

Your Application
The final output can drive robotics, VR tools, research systems, teleoperation, or higher-level interaction software.

MotionCoder (optional)
A gesture interaction layer that maps stable motion signals to commands for 3D authoring, VR tools, and structured spatial workflows.

EdgeSense (optional)
AI assist for classification and confidence scoring, without replacing stereo geometry as the foundation of the system.
EdgeTrack is designed to stay hardware-agnostic and avoid vendor lock-in. The architecture can run on widely available compute platforms and can be adapted to different performance and cost targets.
Edge capture nodes can run on compact edge hardware such as Raspberry Pi-class systems or other ARM boards, while host-side fusion can run on standard Linux or workstation hardware. This makes the system flexible for prototyping, research, industrial setups, and cost-sensitive deployments.
Recommended reference configurations, hardware notes, and integration guidance are documented in the project docs.
EdgeTrack can be applied to a wide range of fields that benefit from stable, metric, multi-view motion tracking and transparent pipeline control. Its modular architecture enables many possible workflows, from creative tools and robotics to industrial systems, teleoperation, research, and spatial analysis.

3D Creation
Precision tracking for creative tools such as xtan and MotionCoder, enabling structured gesture interaction, tool control, and stable spatial input for professional 3D authoring workflows.

Teleoperation
Supports remote manipulation and operator guidance with stable multi-view tracking, helping connect human motion, tools, and machines in controlled real-time teleoperation environments.

Industrial Systems
Suitable for industrial workflows that require robust motion tracking, process monitoring, spatial control, or integration into machine-related environments with transparent and reproducible system behavior.

Robotics
Provides stable tracking data for robotics research, manipulation, calibration, perception experiments, and multi-sensor setups where deterministic timing and metric spatial information are important.

Motion Capture
Can be used for structured motion capture workflows where repeatability, camera synchronization, and stable 3D keypoints are more important than purely visual or entertainment-focused capture pipelines.

Sport
Motion tracking for sports analysis, training feedback, and biomechanics research. Multi-view stereo capture can help analyze movement, posture, and performance with stable spatial measurements.

Work Safety
Useful as a complementary sensing layer for work safety, operator awareness, protected tool zones, and motion monitoring in environments where additional spatial information can improve system awareness.

SLAM
Supports spatial mapping and localization workflows where synchronized multi-view capture, stable timing, and geometric consistency help build reliable maps and track motion in real-world environments.

Accessibility
Motion tracking can support assistive technologies, enabling new interaction methods for people with disabilities. Stable spatial tracking can help create accessible interfaces for communication, rehabilitation, and adaptive tools.
Join the EdgeTrack community and receive updates about development progress, releases, and hardware availability.
The EdgeTrack logo was developed from the same geometric design principles as xtan, but with a stronger emphasis on grayscale structure, spatial depth, and a hollow tetrahedral form.
Instead of being drawn manually, the shape was generated programmatically with JavaScript and Babylon.js to explore geometric surfaces, lighting, and spatial form.
This section is still evolving and will be refined step by step as the project grows. Thank you for your understanding and interest.