Series

Project ICARUS

15 posts

01
We're Entering the AI Grand Prix
Palmer Luckey is putting $500K on the line for autonomous drone racing. We're in. Team Northlake Labs — two people and a Linux box — against university labs and aerospace companies. Here's why we're doing it and what the first three weeks look like.
02
Building an Autonomous Drone Racing AI — Part 1: The Setup
We're building an AI pilot for the AI Grand Prix 2026 — $500K, autonomous drone racing, no human pilots. Part 1 covers why we're doing this, how we chose our simulator, what our architecture looks like, and the first milestone we hit: 100% single-gate success rate at 14.8 m/s.
03
Teaching a Drone to Race with Reinforcement Learning
Project ICARUS: the full arc from a crashing quadrotor to a drone that clears a 3-gate curriculum at 100% — complete with architecture diagrams, training curves, and hard-won lessons about reward shaping.
04
Teaching a Drone to See
How do you navigate a drone racing course with nothing but a single camera and some math? No depth sensor, no lidar, no stereo vision — just raw pixels and the will to not crash. Here's how monocular RGB perception works for autonomous drone racing.
05
The Sim-to-Real Gap Nobody Talks About
Everyone knows sim-to-real transfer is hard. But the actual failure modes — observation space mismatches, physics shortcuts, domain randomization theater — don't get enough airtime. Here's what we've learned building a drone racing AI in PyBullet.
06
Teaching a Drone to Fly with PPO
How Project ICARUS uses Proximal Policy Optimization to teach a simulated quadrotor to navigate racing gates — what reward shaping actually looks like in code, what the training curves tell you, and what we learned when the drone flew through its first gate.
07
Training a Drone to Race: Week 1 Diary
The first week of Project ICARUS — from a drone that immediately fell to the floor, to one that navigates a 10-gate curriculum. What worked, what didn't, and what surprised me.
08
Training Drones to Race in Simulation
How we're teaching an AI to fly faster than humans, one simulated crash at a time.
09
When Your Drone Only Flies Straight — The Generalization Problem in RL
Project ICARUS hit 100% success on straight courses. Then we ran the preset tracks. 0%. This is the honest engineering story of what happened and why curriculum design is everything in RL.
10
Reward Engineering: Teaching a Drone to Race with Math
The hardest part of ICARUS isn't the physics or the policy network — it's telling the drone what 'good' means. A deep dive into reward shaping for multi-gate drone racing: why it's hard, what we built, and what 55% 10-gate completion from zero looks like.
11
Teaching a Drone to Race: Curriculum Learning in Practice
How ICARUS learned to fly through gates one at a time — and what a Python API misuse taught us about the difference between reward design and reward hacking.
12
Teaching an AI to See Racing Gates
The story of training a neural network to detect drone racing gates — 1,220 synthetic images, one precision breakthrough, and what happened when we added occlusions.
13
What AI Drone Racing Actually Looks Like
Project ICARUS hit 96.7% course completion at 5.8M training steps. Here's the honest technical story: curriculum learning, reward engineering, angular jerk at 1112, and what's keeping us from the DCL platform.
14
Curriculum Learning: Teaching AI to Crawl Before It Flies
How progressive difficulty scaling keeps ICARUS from burning out on impossible gates — and why it mirrors the way humans actually learn.
15
The Reward Normalization Trap
VecNorm looks like a free lunch for RL training stability, until it silently erases the signal you care most about.