NeuroDraw | Gesture-controlled drawing experiment

Year: 2025
Status: Public
Skills: JS/HTML/CSS, machine learning, computer vision, human-computer interaction
GitHub Repo Link
Original Paper Link

What it is

NeuroDraw is a browser-based sketching application controlled entirely by hand gestures. No mouse, no stylus: you simply use your fingers in front of your webcam. Pinching your thumb and index finger draws lines. A thumbs-up toggles between brush and eraser. Pinching with your pinky cycles through brush sizes. The interface overlays the detected hand skeleton and landmarks while you draw, giving real-time feedback on gesture detection.
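
That gesture vocabulary boils down to a small set of editor actions. The sketch below is illustrative only; the gesture names, state shape, and brush sizes are hypothetical placeholders, not the identifiers used in the actual repo.

// Illustrative gesture-to-action mapping (names and state shape are hypothetical).
const BRUSH_SIZES = [4, 8, 16, 32];
const state = { drawing: false, mode: 'brush', sizeIndex: 0 };

const gestureActions = {
  // Thumb + index pinch: draw while the pinch is held.
  indexPinch: () => { state.drawing = true; },
  // Thumbs-up: toggle between brush and eraser.
  thumbsUp: () => { state.mode = state.mode === 'brush' ? 'eraser' : 'brush'; },
  // Thumb + pinky pinch: cycle through the available brush sizes.
  pinkyPinch: () => { state.sizeIndex = (state.sizeIndex + 1) % BRUSH_SIZES.length; },
};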

Why I built it

I created NeuroDraw as a rapid exploration of future gesture-based interfaces and as a way to deepen my understanding of hand tracking with MediaPipe. I am fascinated by how intuitive, natural input could accelerate human-robot interaction, creative tools, and immersive systems. Through this experiment, I wanted to probe the practical challenges that come with on-device vision and embodied input, such as gesture ambiguity, responsiveness, and error resilience.

How it works

NeuroDraw runs entirely in the browser using JavaScript, HTML, and CSS. It uses MediaPipe Hands to detect 21 landmarks per hand in real time. The webcam is accessed via the getUserMedia API, with the video feed mirrored to match natural hand motion. Gesture logic, implemented in JavaScript, monitors distances and relative orientations of key landmarks to interpret drawing and mode-switch gestures. Rendering is done with the HTML5 Canvas 2D API, which overlays the hand skeleton and the drawing content on top. Strokes use round caps and joins, and erasing is implemented with the "destination-out" compositing mode. Sensitivity thresholds and gating heuristics reduce gesture conflicts and improve stability.
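
The whole loop fits in a few dozen lines of JavaScript. The sketch below is a minimal reconstruction of that pipeline, not the code from the repo: the element ids, pinch thresholds, CDN URL, and the choice to mirror by flipping the x coordinate are assumptions, and the thumbs-up and brush-size gestures are omitted for brevity. It assumes the @mediapipe/hands script is already loaded on the page, which exposes the Hands class globally.

// A minimal sketch of the pipeline: webcam capture, MediaPipe Hands tracking,
// pinch-based drawing, and destination-out erasing. Element ids, thresholds,
// and the CDN URL are assumptions, not values taken from the repo.
const video = document.getElementById('webcam');      // <video autoplay muted playsinline>
const canvas = document.getElementById('sketchpad');  // <canvas> layered over the video
const ctx = canvas.getContext('2d');

// Round caps and joins keep strokes smooth when joining successive fingertip positions.
ctx.lineCap = 'round';
ctx.lineJoin = 'round';

let mode = 'brush';    // toggled elsewhere, e.g. by the thumbs-up handler (omitted here)
let brushSize = 8;
let pinching = false;  // hysteresis state for the thumb-index pinch
let last = null;       // previous fingertip position in canvas pixels

// Landmark indices defined by MediaPipe Hands: 4 = thumb tip, 8 = index fingertip.
const THUMB_TIP = 4, INDEX_TIP = 8;

function onResults(results) {
  if (!results.multiHandLandmarks || results.multiHandLandmarks.length === 0) {
    pinching = false;
    last = null;
    return;
  }
  const lm = results.multiHandLandmarks[0];  // 21 landmarks, normalized to [0, 1]
  const pinchDist = Math.hypot(
    lm[THUMB_TIP].x - lm[INDEX_TIP].x,
    lm[THUMB_TIP].y - lm[INDEX_TIP].y,
  );

  // Hysteresis: a tighter threshold to start drawing than to stop, so the stroke
  // does not flicker when the pinch distance hovers near a single cutoff.
  if (!pinching && pinchDist < 0.05) pinching = true;
  else if (pinching && pinchDist > 0.08) pinching = false;

  // Flip x so strokes follow the mirrored on-screen hand motion.
  const x = (1 - lm[INDEX_TIP].x) * canvas.width;
  const y = lm[INDEX_TIP].y * canvas.height;

  if (pinching && last) {
    // Erasing punches transparent holes in the drawing layer instead of painting over it.
    ctx.globalCompositeOperation = mode === 'eraser' ? 'destination-out' : 'source-over';
    ctx.lineWidth = brushSize;
    ctx.beginPath();
    ctx.moveTo(last.x, last.y);
    ctx.lineTo(x, y);
    ctx.stroke();
  }
  last = pinching ? { x, y } : null;
}

// Hand tracker setup; the Hands class comes from the @mediapipe/hands script tag.
const hands = new Hands({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
});
hands.setOptions({ maxNumHands: 1, minDetectionConfidence: 0.7, minTrackingConfidence: 0.7 });
hands.onResults(onResults);

// Webcam capture via getUserMedia, then feed frames to the tracker on every animation frame.
async function start() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: { width: 640, height: 480 } });
  video.srcObject = stream;
  await video.play();
  const loop = async () => { await hands.send({ image: video }); requestAnimationFrame(loop); };
  loop();
}
start();

The two pinch thresholds are a simple example of the gating heuristics mentioned above: starting a stroke requires a tighter pinch than ending one, so the line does not flicker on and off when the fingertip distance hovers near a single cutoff.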
