Builds autonomous robots that perform lab tasks in microgravity without human crew.
Key technologies: adaptive controllers for autonomous robotic task execution, 6-DOF pose estimation for microgravity manipulation, and sim-to-real transfer learning.

Space Robotics | YC W26

Last Updated: March 19, 2026

Builds autonomous robotic systems that perform laboratory and manufacturing tasks in microgravity, removing the human crew bottleneck for scalable in-orbit research and production.
Autonomous lab robots for microgravity: pipetting, sample preparation, plate handling, and reagent mixing. Near-term ground and on-orbit demonstrations, targeting commercial research and manufacturing customers.
RL-based adaptive controllers and computer vision for microgravity manipulation. Partnerships with commercial space stations (Axiom, Vast) are likely; SBIR/STTR proposals for DoD on-orbit logistics are possible.
<p>AI-driven robotic arms autonomously execute complex laboratory protocols—pipetting, sample prep, and reagent mixing—in microgravity without human intervention.</p>
A robot in space runs science experiments around the clock so astronauts don't have to.
General Astronautics deploys modular robotic arms equipped with reinforcement-learning-based controllers that learn to manipulate laboratory instruments (micropipettes, well plates, centrifuge rotors) in microgravity. The system ingests a digital protocol description, decomposes it into a sequence of manipulation primitives, and uses sim-to-real transfer to execute each step with sub-millimeter precision despite the absence of gravity-assisted settling. Onboard computer vision continuously verifies liquid volumes, sample positions, and equipment states, while an anomaly-detection model flags deviations and triggers autonomous recovery routines. This eliminates the dependency on scarce astronaut crew time—currently the single largest bottleneck for in-orbit research—and allows commercial customers to run hundreds of experiments per week on autonomous schedules.
It's like replacing a Michelin-star chef who can only cook one meal at a time with a tireless robot line cook that follows every recipe perfectly, 24/7, while floating.
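The protocol-to-primitives decomposition described above can be sketched in a few lines. This is a minimal illustration, not General Astronautics' actual planner: the primitive set, the `Step` structure, and the `transfer` expansion are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical manipulation primitives; the real primitive set is not public.
PRIMITIVES = {"move_to", "aspirate", "dispense", "grasp", "release", "verify"}

@dataclass
class Step:
    primitive: str
    params: dict

def decompose(protocol: list[dict]) -> list[Step]:
    """Expand a digital protocol description into a primitive sequence.

    Each protocol entry names a high-level action (e.g. 'transfer'),
    which expands into low-level primitives the controller executes.
    """
    steps: list[Step] = []
    for action in protocol:
        if action["op"] == "transfer":
            steps += [
                Step("move_to", {"target": action["src"]}),
                Step("aspirate", {"volume_ul": action["volume_ul"]}),
                Step("move_to", {"target": action["dst"]}),
                Step("dispense", {"volume_ul": action["volume_ul"]}),
                Step("verify", {"target": action["dst"]}),  # vision check
            ]
        else:
            raise ValueError(f"unknown op: {action['op']}")
    assert all(s.primitive in PRIMITIVES for s in steps)
    return steps

# A one-step protocol: move 50 µL from a reagent reservoir to a well.
protocol = [{"op": "transfer", "src": "reagent_A", "dst": "well_B3", "volume_ul": 50}]
steps = decompose(protocol)
```

The `verify` primitive at the end of each expansion is where the onboard computer vision check described above would slot in, flagging anomalies before the next step runs.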
<p>Deep-learning pose estimation enables robotic arms to identify, track, and safely capture tumbling or uncooperative objects in orbit for servicing or debris removal.</p>
The robot figures out exactly how a spinning piece of space junk is tumbling so it can grab it without crashing.
For satellite servicing and debris mitigation missions, General Astronautics develops a vision-based perception pipeline that estimates the full 6-degree-of-freedom pose of non-cooperative targets—objects with no fiducial markers, transponders, or cooperative interfaces. The system fuses monocular and stereo camera feeds through a deep-learning backbone (combining convolutional and transformer architectures) trained on synthetic datasets of thousands of satellite geometries rendered in photorealistic orbital lighting conditions. A temporal filtering layer (learned Kalman variant) smooths predictions across frames to handle rapid tumble rates and variable illumination. The resulting pose stream feeds directly into the motion-planning stack, enabling the robotic arm to compute intercept trajectories and compliant grasp strategies in real time. This capability is critical for the emerging on-orbit servicing market, where most legacy satellites were never designed to be grabbed.
It's like teaching a goalkeeper to catch a spinning, oddly shaped ball they've never seen before, in the dark, with a strobe light—except the ball is a $500 million satellite.
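The temporal filtering layer smooths per-frame pose predictions before they reach the motion planner. As a classical stand-in for the learned Kalman variant described above, here is a constant-velocity alpha-beta filter on a single pose coordinate; the gains and sample data are illustrative, since the real filter's parameters are learned rather than hand-tuned.

```python
class AlphaBetaFilter:
    """Constant-velocity alpha-beta filter for one pose coordinate."""

    def __init__(self, alpha=0.5, beta=0.1, dt=1 / 30):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x = None   # smoothed position estimate
        self.v = 0.0    # velocity estimate

    def update(self, z):
        if self.x is None:              # initialize on first measurement
            self.x = z
            return self.x
        pred = self.x + self.v * self.dt            # predict forward one frame
        residual = z - pred                          # innovation vs. measurement
        self.x = pred + self.alpha * residual        # correct position
        self.v += (self.beta / self.dt) * residual   # correct velocity
        return self.x

# Smooth a noisy per-frame x-coordinate from the pose network (synthetic, meters).
f = AlphaBetaFilter()
noisy = [0.00, 0.12, 0.19, 0.33, 0.41]
smoothed = [f.update(z) for z in noisy]
```

A full 6-DOF version would run one such filter per translation axis and a quaternion-aware variant for attitude, which matters most at the rapid tumble rates the pipeline is designed to handle.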
<p>Reinforcement learning agents trained in high-fidelity microgravity simulators transfer directly to physical robotic arms, enabling adaptive manipulation without costly on-orbit trial-and-error.</p>
Robots practice thousands of times in a virtual zero-gravity lab so they get it right the first time in actual space.
Training robotic manipulation policies in actual microgravity is prohibitively expensive and slow—every failed grasp wastes precious mission time and risks equipment damage. General Astronautics addresses this by building a high-fidelity digital twin of their robotic platform and orbital laboratory environment in physics simulators (MuJoCo/Isaac Sim), with accurate microgravity dynamics including fluid surface tension, capillary effects on liquids, and free-floating object behavior. Reinforcement learning agents (using PPO and SAC algorithms) train across millions of randomized episodes with extensive domain randomization—varying object masses, surface friction, lighting, camera noise, and actuator latency—to produce policies robust enough to transfer directly to physical hardware. A residual policy adaptation layer fine-tunes behavior with minimal real-world data collected during initial on-orbit checkout. This sim-to-real pipeline dramatically compresses the time and cost of deploying new manipulation capabilities, allowing General Astronautics to onboard new laboratory instruments and protocols in days rather than months.
It's like a pilot logging thousands of hours in a flight simulator so realistic that when they finally sit in the real cockpit, they fly perfectly on day one—except the cockpit is floating in space.
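The domain randomization described above can be sketched as a per-episode parameter draw. The parameter names and ranges here are assumptions for illustration; the real simulator randomizes these quantities, but its actual ranges are not public.

```python
import random

def randomize_episode(seed=None):
    """Draw one randomized physics/sensing configuration for an RL episode."""
    rng = random.Random(seed)
    return {
        "object_mass_kg":    rng.uniform(0.01, 0.5),   # pipette tip to rotor
        "friction_coeff":    rng.uniform(0.2, 1.2),    # surface friction
        "light_intensity":   rng.uniform(0.3, 1.5),    # relative to nominal
        "camera_noise_std":  rng.uniform(0.0, 0.02),   # pixel-intensity noise
        "actuator_delay_ms": rng.uniform(0.0, 40.0),   # control latency
    }

# Each episode resets the simulator with a fresh draw, so the policy
# cannot overfit to any single physics configuration.
params = randomize_episode(seed=42)
```

Policies trained across millions of such draws see a distribution of worlds wide enough that the real hardware looks like just one more sample, which is what makes the direct sim-to-real transfer viable.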
Combines SpaceX flight hardware experience with deep industrial robotics and autonomous systems expertise. Bridges the gap between traditional space manipulators and modern AI-driven automation.