Ciki Zeng
2026-04-13 · 7 min read · JumpOnion · AI Engineering · Debugging

Day 2: 6.47 Seconds of Air Time (The Jump Lasted 0.7)

Day 1 ended with a Python script that could extract pose data from a figure skating video. Day 2 was supposed to be the victory lap — detect the actual jump, measure air time, and celebrate.

Instead, my first jump detection algorithm reported 6.47 seconds of air time on a triple Axel that lasted about 0.7 seconds. Not "a little off." Off by an order of magnitude.

Attempt #1: Track the Center of Mass

The approach seemed obvious. A skater jumps, their center of mass goes up, then comes back down. Track the vertical position of the body's centroid across frames, find the peak, measure the duration. Simple physics.
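Here's a minimal sketch of that first attempt, assuming the Day 1 script hands back a `poses` array of shape (frames, keypoints, 2) in pixel coordinates plus the video's `fps`; both names are placeholders for illustration, not the real pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

def naive_air_time(poses: np.ndarray, fps: float) -> float:
    """Naive centroid-tracking estimate of air time (illustrative only)."""
    # Vertical position of the body centroid per frame.
    # Image y grows downward, so negate it to make "up" positive.
    centroid_y = -poses[:, :, 1].mean(axis=1)

    # Treat the highest peak of the centroid as the apex of the jump.
    peaks, _ = find_peaks(centroid_y)
    if len(peaks) == 0:
        return 0.0
    apex = peaks[np.argmax(centroid_y[peaks])]

    # Walk outward from the apex until the centroid falls back to its
    # baseline height, and call that span the "air time".
    baseline = np.median(centroid_y)
    start = apex
    while start > 0 and centroid_y[start] > baseline:
        start -= 1
    end = apex
    while end < len(centroid_y) - 1 and centroid_y[end] > baseline:
        end += 1

    return (end - start) / fps
```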

On lab footage — a person jumping straight up in a controlled environment — this works beautifully. Clean parabolic arc, easy peak detection, accurate timing.

On real rinkside footage, it was a disaster. The camera moves. The skater travels horizontally across the ice. Other skaters cross the frame. The centroid jumps around like a bug on a windshield, and the algorithm interprets every camera jitter as the skater leaving the ice.

Result: 6.47 seconds of air time. The algorithm was tracking camera shake, not the jump.

Attempt #2: Same Approach, Smaller Tweaks

The natural reaction: smooth the signal. Add a moving average filter. Increase the minimum peak prominence. Tweak the threshold.
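In code, the tweaks amounted to something like this; the window size and prominence value are illustrative, not the numbers I actually tried.

```python
import numpy as np
from scipy.signal import find_peaks

def smoothed_centroid(centroid_y: np.ndarray, window: int = 7) -> np.ndarray:
    # Moving-average filter to suppress frame-to-frame jitter.
    kernel = np.ones(window) / window
    return np.convolve(centroid_y, kernel, mode="same")

def peaks_with_prominence(centroid_y: np.ndarray, prominence: float = 15.0):
    # Require a minimum prominence so small camera jitters don't register
    # as peaks; still the same underlying centroid signal.
    smoothed = smoothed_centroid(centroid_y)
    return find_peaks(smoothed, prominence=prominence)
```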

This is where most debugging goes wrong — and where my SOP saved me. The second attempt produced 4.2 seconds of air time. Better, but still absurd. A real triple Axel takes about 0.6–0.8 seconds.

The problem wasn't the filter parameters. The problem was the entire approach. Centroid tracking fundamentally cannot work when the camera is moving and the skater is translating horizontally. No amount of parameter tuning fixes a wrong algorithm.

The SOP Kicked In

My workflow has a rule called Blindspot Interception: after two identical failures, force a root cause analysis. After 30 minutes on a dead-end approach, force a strategy switch. Don't tweak — rethink.
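If you wanted to encode the rule as an actual guard around a debugging loop, it would look roughly like this; it's the shape of the idea, not the real SOP tooling.

```python
import time

class BlindspotInterception:
    MAX_FAILURES = 2    # two identical failures -> forced root cause analysis
    MAX_MINUTES = 30    # 30 minutes on one approach -> forced strategy switch

    def __init__(self, approach: str):
        self.approach = approach
        self.failures = 0
        self.started = time.monotonic()

    def record_failure(self) -> None:
        self.failures += 1
        elapsed_min = (time.monotonic() - self.started) / 60
        if self.failures >= self.MAX_FAILURES:
            raise RuntimeError(
                f"{self.approach}: {self.failures} failures. "
                "Stop tweaking; do a root cause analysis."
            )
        if elapsed_min >= self.MAX_MINUTES:
            raise RuntimeError(
                f"{self.approach}: {elapsed_min:.0f} minutes spent. "
                "Switch strategies."
            )
```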

On the third attempt, instead of smoothing the centroid signal again, the system did something different. It asked the right question: what physical signal actually indicates a skater is in the air?

Without the SOP: I would have spent the entire day tuning filter parameters on a fundamentally broken approach. The centroid looks like a reasonable metric — it's just wrong for this specific domain. Without a forced strategy switch, reasonable-but-wrong can consume unlimited time.

Attempt #3: Track What Matters

The answer was embarrassingly simple once you know it: track the bottom-most point of the body (the skate blade or foot), not the center. When a skater is on the ice, their lowest point touches the surface. When they jump, that lowest point lifts off. The vertical position of the bottom point is the direct physical indicator of "in the air or not."

Combined with physics constraints — a figure skating jump can't exceed ~1.0 seconds of air time, and the skater must come back down — the noise from camera movement was filtered naturally. Camera shake doesn't create a sustained upward movement of the bottom point followed by a clean descent.
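Here's a sketch of the third attempt under the same assumed `poses` / `fps` interface as before; the lift threshold and the 1.0-second cap are illustrative placeholders, not JumpOnion's tuned values.

```python
import numpy as np

MAX_AIR_TIME_S = 1.0     # physics: no figure skating jump stays up longer
LIFT_THRESHOLD_PX = 8    # how far the lowest point must rise off its baseline

def bottom_point_air_time(poses: np.ndarray, fps: float) -> float | None:
    """Air time from the bottom-most body point (illustrative sketch)."""
    # Lowest visible body point per frame (largest y in image coordinates).
    bottom_y = poses[:, :, 1].max(axis=1)

    # Baseline: where the blade sits when the skater is on the ice.
    baseline = np.median(bottom_y)

    # "In the air" means the lowest point has lifted clearly above baseline.
    airborne = (baseline - bottom_y) > LIFT_THRESHOLD_PX

    # Longest contiguous airborne run; camera shake rarely produces a
    # sustained lift followed by a clean return to the baseline.
    best = run = 0
    for flag in airborne:
        run = run + 1 if flag else 0
        best = max(best, run)

    air_time = best / fps
    if 0 < air_time <= MAX_AIR_TIME_S:
        return air_time
    return None  # reject implausible detections instead of reporting them
```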

Result: 0.72 seconds of air time. Within the expected range for a real triple Axel from rinkside footage.

The Debugging Pattern

What made Day 2 instructive wasn't the fix — it was the failure mode. Here's the pattern:

  1. Attempt 1: Algorithm produces wildly wrong result (6.47s).
  2. Attempt 2: Same algorithm with parameter tweaks produces a slightly less wrong result (4.2s). The developer thinks they're making progress.
  3. SOP trigger: Blindspot Interception fires — two failures on the same approach means the approach is wrong, not the parameters.
  4. Attempt 3: Completely different approach. Correct result on first try (0.72s).

The dangerous moment is Attempt 2. The numbers improved, which creates the illusion of progress. Without a forced strategy switch, there would have been an Attempt 3, 4, 5 — each slightly better, none correct, each consuming another hour.

Without SOP

Keep tweaking centroid tracking parameters. Each attempt takes 45–60 minutes. After 4–5 attempts and a full day, maybe discover the approach is fundamentally wrong. Maybe not.

With SOP

Blindspot Interception after 2 failures. Force root cause analysis. Switch to bottom-point tracking. Correct result within 30 minutes of the strategy switch.

What I Learned

Day 2 taught me the most important debugging principle in AI development:

  1. Improving wrong numbers feels like progress. 6.47s → 4.2s → 3.1s → ... still wrong. The trajectory of improvement tricks you into staying on a dead-end path.
  2. Domain knowledge beats algorithm sophistication. The fix wasn't a better filter or a smarter model — it was understanding what "being in the air" physically looks like in a video. The bottom point lifts off.
  3. Forced strategy switches save days. My AI partner would have happily kept optimizing centroid tracking forever. The rule that forced a rethink after two failures is what rescued the session.

The bottom-point tracking became the foundation of JumpOnion's detection pipeline. It survived six months of development and 1,000+ tests. Sometimes the right answer is embarrassingly simple — you just need a system that forces you to find it before you've wasted a week on the wrong one.

Next: Day 3 — the AI told a world champion his jump was wrong. The system caught it before any user saw it.

Want the methodology behind JumpOnion?

Templates, SOPs, and enforcement hooks — from $39.

See Pricing