In a demonstration that feels like it’s been lifted straight from the pages of a sci-fi thriller, a humanoid robot has been filmed turning the pages of a book using nothing more than its operator’s thoughts. The machine in question is the TienKung 3.0, a sophisticated new platform from Beijing-based X-Humanoid, and it’s being piloted via a non-invasive brain-computer interface (BCI). To be clear: there’s no surgery required here, and definitely no drilling into skulls—just a clever headset picking up on neural cues.
The footage highlights a potentially life-changing application for humanoid robotics: acting as a sophisticated proxy for individuals living with severe mobility impairments, such as paralysis or motor neurone disease (MND). While Elon Musk’s Neuralink has grabbed the lion’s share of the headlines with its high-stakes brain implants—allowing patients to nudge cursors or move robotic limbs—this latest showcase from X-Humanoid suggests a far less intrusive path forward. It’s a solution that neatly sidesteps the daunting risks and lengthy recovery times inherent in neurosurgery.
The TienKung 3.0 itself is a formidable bit of kit. Launched in February 2026 by the Beijing Innovation Center of Humanoid Robotics (X-Humanoid), this full-sized robot was built as an open platform to kickstart development across the industry. It’s packed with high-torque joints and sophisticated motion control, making it just as capable of trekking across uneven ground as it is of performing delicate manual tasks. Pairing this robust hardware with a non-invasive BCI controller creates a compelling toolkit for genuine, real-world assistive tech.
Why does this matter?
The real game-changer here is the “non-invasive” tag. While invasive BCIs, like the ones being developed by Neuralink, offer higher-fidelity signals, they come with the massive hurdle of brain surgery. Non-invasive systems, which typically rely on electroencephalography (EEG) caps to read the brain’s electrical activity through the scalp, slash the barrier to entry. They’re safer, more affordable, and far more accessible.
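For the curious, here’s a rough idea of what “reading electrical activity through the scalp” can boil down to in practice. The sketch below shows one classic, heavily simplified non-invasive technique: detecting motor imagery by comparing band power in a window of EEG. To be clear, this is purely illustrative and not X-Humanoid’s actual pipeline; the sampling rate, frequency bands, threshold, and function names are all assumptions made up for the example.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed headset sampling rate in Hz (illustrative, not X-Humanoid's spec)

def band_power(window, fs, low, high):
    """Mean spectral power of one EEG channel within [low, high] Hz."""
    freqs, psd = welch(window, fs=fs, nperseg=min(len(window), fs * 2))
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def classify_intent(window):
    """Toy rule: imagining a movement suppresses mu-band (8-12 Hz) power
    over the motor cortex (event-related desynchronisation). A large
    enough drop relative to a reference band is read as a command."""
    mu = band_power(window, FS, 8, 12)
    ref = band_power(window, FS, 18, 30)  # crude per-window reference band
    return "TURN_PAGE" if mu < 0.5 * ref else "IDLE"  # threshold is arbitrary

# Demo on two seconds of simulated single-channel EEG (white noise).
rng = np.random.default_rng(0)
print(classify_intent(rng.standard_normal(FS * 2)))
```

A real system would calibrate per user, filter out artefacts such as eye blinks, and classify across many channels, typically with a trained model rather than a fixed threshold. But the raw material it works from is exactly this kind of scalp-level signal, which is why no surgery is needed.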
That accessibility could be the catalyst that moves robotic assistants out of the sterile confines of the research lab and into the living rooms of those who need them most. We might still be a few years away from summoning a robot to fetch a cuppa with a mere thought, but this demonstration is a vital, and refreshingly practical, step towards that reality.