In a recent interview, Inseung Kang, an assistant professor of mechanical engineering at Carnegie Mellon University, said something that would surprise most people watching the exoskeleton hype cycle: “The huge bottleneck is the software side of things. Building exoskeletons has been pretty well studied now. Physical mechanical designs have converged.”
At CES 2026, that convergence was on full display. Nineteen booths showcased exoskeletons or similar wearable robots. German Bionic unveiled Exia, a robotic exoskeleton trained on billions of anonymized motion data points, providing 84 pounds of dynamic lifting assistance. Hypershell has sold over 20,000 units across its consumer exoskeleton lineup — from the $999 Go X to the $1,799 Carbon X — to hikers and outdoor enthusiasts worldwide.
Yet the technology that determines whether a wearable robot actually helps you walk, climb, or recover from a stroke is not the motor. It is the intelligence behind it, the model that decides when to push, how hard, and in which direction. That intelligence is now being built with the same tools reshaping humanoid robotics: reinforcement learning, simulation, neural networks, and sensor fusion. The difference is that wearable robots must solve a harder version of the problem, because the human body is already inside the loop.
The Muscle No One Can See
Kang’s diagnosis of the field is precise. The mechanical designs of hip, knee, and ankle exoskeletons have stabilized. Companies and labs are using similar actuators, similar form factors, similar weight distributions. The controller, the software brain that interprets sensor data and generates assistive torque, is where differentiation happens.
The difficulty comes from the sheer ambiguity of human movement. When a person wearing an exoskeleton shifts their weight, the machine needs to determine within milliseconds whether they are about to walk, climb stairs, sit down, or simply adjust their posture. Kang calls this “user intent recognition,” and it has consumed most of his research career.
The conventional approach relies on onboard kinematic sensors: inertial measurement units that track limb orientation, joint encoders that measure angle, and sometimes pressure insoles that detect ground reaction forces. These sensors tell the robot where the body is, but not where it is going. A model-based controller maps these readings into assistive torque through analytical equations. It works for steady-state treadmill walking. It fails for the unpredictable transitions of daily life.
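To make the contrast concrete, here is a minimal sketch of that model-based style of control: a virtual spring-damper law that turns joint kinematics into assistive torque through one fixed analytical equation. The gains, setpoint, and torque cap below are illustrative assumptions, not values from any particular device.

```python
import numpy as np

def spring_damper_assist(theta, theta_dot, theta_ref=0.0,
                         k=25.0, b=1.5, tau_max=15.0):
    """Model-based assistive torque from joint kinematics.

    theta, theta_dot : measured joint angle (rad) and velocity (rad/s),
                       e.g. from an encoder and an IMU-derived estimate.
    theta_ref        : reference posture the virtual spring pulls toward.
    k, b             : illustrative stiffness (Nm/rad) and damping (Nm*s/rad).
    tau_max          : hard torque cap (Nm) as a basic safety limit.
    """
    tau = k * (theta_ref - theta) - b * theta_dot
    return float(np.clip(tau, -tau_max, tau_max))

# Works when the activity matches the equation's assumptions (steady walking),
# but the fixed gains have no notion of stairs, sitting, or transitions.
print(spring_damper_assist(theta=0.3, theta_dot=-0.5))
```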
Kang’s own research, published in Science Robotics in March 2024 with Dean Molinaro and Aaron Young, demonstrated a unified alternative. A temporal convolutional network directly estimates human joint moments from sensor data across 35 different ambulatory conditions, including level walking, inclines, stair climbing, and transitions between them. The key result: the network achieved an average error of just 0.142 newton-meters per kilogram with no user-specific calibration. It did not need to be told what activity was happening. It simply estimated the forces the human body was producing and generated corresponding assistance. The exoskeleton reduced metabolic cost across all tested conditions compared to unassisted walking.
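The published network's exact architecture differs, but the pattern is easy to sketch: a stack of dilated, causal one-dimensional convolutions that maps a short window of kinematic sensor channels to joint-moment estimates. Channel counts, window length, and layer sizes here are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    """One dilated, causal 1D-conv block with a residual connection."""
    def __init__(self, ch_in, ch_out, dilation, kernel_size=3):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation      # left-pad for causality
        self.conv = nn.Conv1d(ch_in, ch_out, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()
        self.skip = nn.Conv1d(ch_in, ch_out, 1) if ch_in != ch_out else nn.Identity()

    def forward(self, x):                             # x: (batch, channels, time)
        y = nn.functional.pad(x, (self.pad, 0))
        y = self.relu(self.conv(y))
        return y + self.skip(x)

class JointMomentTCN(nn.Module):
    """Regresses joint moments (Nm/kg) from a window of kinematic sensors."""
    def __init__(self, n_sensors=12, n_joints=3, hidden=32, levels=4):
        super().__init__()
        blocks, ch = [], n_sensors
        for i in range(levels):                       # dilations 1, 2, 4, 8
            blocks.append(TCNBlock(ch, hidden, dilation=2 ** i))
            ch = hidden
        self.tcn = nn.Sequential(*blocks)
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, x):                             # x: (batch, sensors, time)
        h = self.tcn(x)[:, :, -1]                     # features at the current instant
        return self.head(h)                           # estimated joint moments

# Example: a 200-sample window of 12 IMU/encoder channels -> 3 joint moments.
window = torch.randn(1, 12, 200)
print(JointMomentTCN()(window).shape)                 # torch.Size([1, 3])
```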
This is meaningful because it collapses a problem that previously required multiple specialized controllers into a single neural network. The vision is a wearable robot that does not need to classify your activity. It just reads the underlying physics of your body.
Listening to the Spinal Cord
The sensor that captures body physics most directly is also the hardest to use. Surface electromyography, or sEMG, measures the electrical signals that muscles produce before and during contraction. Unlike IMUs, which detect the result of movement, EMG captures the intention, the neural command that precedes the physical action by tens of milliseconds.
A 2025 review in Sensors cataloged the state of EMG-based intent recognition for exoskeletons. Deep learning architectures have pushed classification accuracy above 97% for structured movements. A CNN trained on textile-based sEMG electrodes achieved 99.26% accuracy classifying ankle exoskeleton motions. A CNN-LSTM model with soft bioelectronics predicted four upper-limb joint motions at 96.2% accuracy with a 500-millisecond response time, running on cloud-based inference.
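A generic sketch of the CNN-LSTM pattern those studies describe, with electrode count, window length, and class labels chosen purely for illustration, looks like this:

```python
import torch
import torch.nn as nn

class EMGIntentNet(nn.Module):
    """Generic CNN-LSTM intent classifier for windowed sEMG.

    The CNN extracts short-time features per channel; the LSTM models how
    those features evolve over the window; a softmax head outputs one of
    n_classes movement intents (e.g. walk, stair ascent, sit, stand).
    """
    def __init__(self, n_channels=8, n_classes=4, feat=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, feat, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(feat, feat, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(feat, feat, batch_first=True)
        self.head = nn.Linear(feat, n_classes)

    def forward(self, x):              # x: (batch, channels, samples)
        f = self.cnn(x)                # (batch, feat, samples/4)
        f = f.transpose(1, 2)          # (batch, time, feat) for the LSTM
        _, (h, _) = self.lstm(f)
        return self.head(h[-1])        # class logits

# A 500 ms window at 1 kHz from 8 electrodes -> intent probabilities.
logits = EMGIntentNet()(torch.randn(1, 8, 500))
print(logits.softmax(dim=-1))
```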
But accuracy in a lab and reliability on a human body are different things. EMG signals degrade with muscle fatigue, a common occurrence in the clinical populations that need exoskeletons most. A 2025 study in Scientific Reports tackled this by fusing EMG with electroencephalography, combining muscle signals with brain signals to maintain intent detection accuracy even as muscles tire. The hybrid interface adapts its control strategy in real time.
The frontier, however, is deeper. Dario Farina’s lab at Imperial College London bypasses the surface-level EMG signal entirely. Through advanced blind source separation algorithms, Farina can decompose a noisy stream of muscle electrical activity into the firing patterns of individual motor neurons, the final common pathway of the spinal cord. In a January 2025 paper in Science Robotics, his team showed that mapping these motoneuron synergies to a soft prosthetic hand achieved an 82.5% hit rate for target postures, compared to 35% when using conventional muscle-level signals. The neural code outperformed the muscular one by more than a factor of two.
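Farina's pipelines rely on specialized convolutive blind source separation over high-density electrode grids. The toy sketch below only illustrates the core idea, approximating the convolutive mixture by stacking time-delayed copies of each channel and then separating independent sources, with scikit-learn's FastICA standing in for the dedicated decomposition algorithms.

```python
import numpy as np
from sklearn.decomposition import FastICA

def decompose_motor_units(emg, n_sources=4, n_delays=8):
    """Toy motor-unit decomposition via delayed embedding plus ICA.

    emg : (n_channels, n_samples) multichannel surface EMG.
    The convolutive mixture is approximated as an instantaneous one by
    stacking time-delayed (here: circularly shifted) copies of every
    channel; ICA then recovers source trains whose peaks approximate
    motor-neuron firings.
    """
    extended = np.vstack([np.roll(emg, d, axis=1) for d in range(n_delays)])
    ica = FastICA(n_components=n_sources, random_state=0, max_iter=1000)
    sources = ica.fit_transform(extended.T).T        # (n_sources, n_samples)
    return sources

# Synthetic stand-in: 16 channels of noisy, mixed spike trains.
rng = np.random.default_rng(0)
spikes = (rng.random((4, 2000)) > 0.995).astype(float)
mixing = rng.random((16, 4))
emg = mixing @ spikes + 0.05 * rng.standard_normal((16, 2000))
print(decompose_motor_units(emg).shape)              # (4, 2000)
```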
Hugh Herr at MIT pursued the inverse strategy: instead of better sensors, build a better body. His Agonist-Antagonist Myoneural Interface, or AMI, is a surgical procedure that reconnects severed muscle pairs in an amputated limb, restoring the stretch-reflex loop that normally sends proprioceptive signals to the brain. In a 2024 Nature Medicine study, AMI patients wearing a powered ankle prosthesis walked 41% faster than conventionally amputated patients, matching the peak speeds of people with no amputation. The prosthesis was not running a smarter algorithm. The nervous system was running the prosthesis.
Kang’s own vision goes further still. “I am a huge believer of neural interface,” he said. “Companies like Neuralink are doing some good work on looking at the cortical activities. Ultimately we would have access to what’s happening at the brain level, what’s happening at the spine level, and what’s happening at the muscle level directly, so that we can close the loop.”
That three-level readout (brain, spinal cord, and muscle) is the theoretical ideal for wearable robot control, as a Nature Communications review on next-generation wearable robots articulates in detail. It is also, for now, entirely impractical outside a laboratory. Kang himself acknowledges the trade-off: “Imagine you’re having a person wear an exoskeleton that requires 50 sensors. A person might not want to do it because it takes 30 minutes to don the system.” The tension between neural fidelity and real-world usability defines the field.
Falling Safely Inside a Computer
If neural sensing is the long game, simulation is the shortcut, and it is arriving faster than anyone expected.
The problem is straightforward. To train an exoskeleton controller with reinforcement learning, you need data about how humans move under assistance. Collecting that data the traditional way means recruiting subjects, strapping them into a device, and having them walk, run, and stumble for hours while sensors record everything. For clinical populations, this is painful and sometimes dangerous. As Kang puts it: “In order to create a model that can help individuals with motor impairments to fall, you’re going to recruit stroke survivors and have them come into the lab and have them fall many, many times. I just don’t think that’s feasible.”
A 2024 paper in Nature by Shuzhen Luo and colleagues at NC State and NJIT showed that feasibility is no longer the constraint. They built a 50-degree-of-freedom full-body musculoskeletal model with 208 skeletal muscles and a mechanical model of a custom hip exoskeleton. Three neural networks work in sequence: one imitates human motion patterns, one replicates muscular coordination responses, and one generates exoskeleton torque profiles. The entire system was trained on a single NVIDIA RTX 3090 GPU in eight hours.
The result was a controller that, when deployed on a real hip exoskeleton, reduced metabolic cost by 24.3% for walking, 13.1% for running, and 15.4% for stair climbing. No human had participated in the training process. The controller generalized across subjects through domain randomization, randomly varying the musculoskeletal model’s parameters to produce a policy robust to individual differences.
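Domain randomization, in this setting, amounts to sampling a new virtual wearer for every training episode so the learned policy cannot overfit to one nominal body. A minimal sketch, with parameter ranges invented for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_virtual_wearer():
    """Randomize musculoskeletal parameters for one training episode.

    Ranges are illustrative, not the published distributions. Training a
    single policy across many such samples pushes it to rely on cues that
    hold across bodies rather than quirks of one simulated individual.
    """
    return {
        "body_mass_kg":          rng.uniform(50.0, 100.0),
        "leg_length_scale":      rng.uniform(0.9, 1.1),
        "muscle_strength_scale": rng.uniform(0.7, 1.3),
        "joint_damping_scale":   rng.uniform(0.8, 1.2),
        "sensor_noise_std":      rng.uniform(0.0, 0.05),
        "torque_delay_ms":       rng.uniform(5.0, 30.0),
    }

for episode in range(3):
    params = sample_virtual_wearer()
    # simulate_episode(policy, params)  # hypothetical training step
    print(round(params["body_mass_kg"], 1), round(params["muscle_strength_scale"], 2))
```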
A November 2025 paper in Science Robotics from Georgia Tech pushed the concept further. Keaton Scherpereel and colleagues used a CycleGAN, a neural network originally designed for unpaired image-to-image translation, such as converting aerial photographs into maps, to convert datasets of people walking without exoskeletons into synthetic data of people walking with them. The resulting controller was deployed on a hip-knee exoskeleton and reduced metabolic cost by 9.5 to 14.6% across eight participants. No device-specific data collection was needed. Aaron Young, the senior author, called this “the big advance,” with applications extending to prostheses and autonomous robots.
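The mechanism doing the work is cycle consistency: one generator maps unassisted gait data toward the assisted domain, a second maps back, and both are penalized if the round trip fails to recover the original. The sketch below shows that loss structure on toy feature vectors; the discriminator update and the mirror-image direction are omitted, and the networks, data, and loss weights are placeholders rather than the Georgia Tech models.

```python
import torch
import torch.nn as nn

def mlp(dim_in, dim_out):
    return nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_out))

dim = 16                      # toy gait-feature dimension
G_ab = mlp(dim, dim)          # unassisted -> "wearing exoskeleton" domain
G_ba = mlp(dim, dim)          # assisted   -> unassisted domain
D_b  = mlp(dim, 1)            # discriminator for the assisted domain

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=1e-3)

real_a = torch.randn(32, dim)             # stand-in: existing no-exo dataset
for step in range(5):
    fake_b = G_ab(real_a)                 # synthetic "with exoskeleton" data
    recon_a = G_ba(fake_b)                # translate back to the source domain

    adv_loss = bce(D_b(fake_b), torch.ones(32, 1))   # try to fool the discriminator
    cyc_loss = l1(recon_a, real_a)                   # round trip must match
    loss = adv_loss + 10.0 * cyc_loss                # heavier cycle weight (typical)

    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: adv={adv_loss.item():.3f} cyc={cyc_loss.item():.3f}")
```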
The significance of these results becomes clear when placed alongside the Physical AI revolution happening in humanoid robotics. The same sim-to-real pipeline that trains a humanoid to walk (simulation, domain randomization, neural network policy, zero-shot transfer) now works for exoskeletons. But Kang cautions that the gap is wider: “The sim-to-real gap is much greater on human-robot interaction type of research, because there are humans involved. Just because you can represent human walking in simulation doesn’t mean that we can completely map how humans respond to exoskeleton assistance.”
The Luo and Scherpereel papers suggest this gap is narrowing. The key technical insight is the same one that enabled experiment-free training: separate what the neural network needs to measure (sensor inputs available on the real robot) from what it needs to understand (joint moments and muscle activations that exist only in simulation). If the separation is clean, the policy transfers.
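In code, that separation can be made explicit by letting the network condition only on fields a real device can measure, while the simulator's privileged quantities serve purely as training labels. A schematic sketch under those assumptions:

```python
import torch
import torch.nn as nn

# Inputs a deployed exoskeleton can actually measure.
REAL_SENSORS = ["imu_orientation", "joint_angles", "joint_velocities"]
# Quantities that exist only inside the musculoskeletal simulation.
PRIVILEGED = ["joint_moments", "muscle_activations"]

# The deployable network: real-sensor inputs -> estimated joint moments.
net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def training_step(sim_state):
    """Supervise with privileged sim labels; condition on real sensors only."""
    obs = torch.cat([sim_state[k] for k in REAL_SENSORS], dim=-1)   # (batch, 9)
    target = sim_state["joint_moments"]        # available only in simulation
    loss = nn.functional.mse_loss(net(obs), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

batch = 8
sim_state = {
    "imu_orientation":    torch.randn(batch, 3),
    "joint_angles":       torch.randn(batch, 3),
    "joint_velocities":   torch.randn(batch, 3),
    "joint_moments":      torch.randn(batch, 3),   # privileged
    "muscle_activations": torch.randn(batch, 8),   # privileged (unused here)
}
print(training_step(sim_state))
# At deployment, net runs on the same REAL_SENSORS dict built from hardware;
# the PRIVILEGED fields simply never exist outside the simulator.
```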
A comprehensive Science Robotics review published in July 2025 by van der Kooij and colleagues synthesized these trends. The review identifies reinforcement learning combined with digital human twins as the most promising pathway to autonomous exoskeleton operation. It also notes that large language models could eventually translate high-level therapeutic goals into structured exercise programs, an intriguing but unproven idea. The honest assessment: the simulation tools exist, the neural network architectures work, but clinical validation on diverse patient populations remains thin.
The $1,000 Threshold
While researchers refine controllers in simulation, the market is moving.
Kang offers a measured perspective on the consumer exoskeleton space. “Hypershell is not the only company in this game,” he noted. “There are very good devices built by other companies like Skip, a spin-off startup from Google X, and a startup called WIRobotics from Korea that has a high-functioning hip exoskeleton for elderly individuals.” He points out that the underlying algorithms across these devices are fairly similar. The real differentiator for Hypershell, in his view, is price: “They’ve significantly lowered the cost of these devices. Hypershell is about less than $1,000. That really is the huge benefit.”
German Bionic’s approach is different. Their Exia exoskeleton, unveiled at CES 2026, targets industrial workers rather than consumers. Its “Augmented AI” is trained on billions of anonymized motion data points from real working environments, providing context-aware assistance across walking, carrying, and lifting. The company calls it “Physical AI,” explicitly borrowing the language of the robotics industry to describe a wearable device that acts on the real world in real time.
Conor Walsh’s lab at Harvard has been shaping the clinical end of this spectrum for years. His soft exosuits use textile-based actuators and sensors rather than rigid frames, prioritizing transparency: the device should feel like clothing, not armor. A 2025 study demonstrated that human-in-the-loop optimization (HILO) could automatically tune exosuit assistance for individual patients with stroke and ALS, doubling the benefits compared to a generic controller. The research was co-developed with clinicians at Massachusetts General Hospital, which matters: the path from lab to patient runs through regulatory approval, and the FDA prefers interpretable, validated systems over black-box neural networks.
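Human-in-the-loop optimization treats the wearer as the objective function: propose an assistance profile, measure the person's response, and update the profile. The sketch below uses a stand-in cost function and an off-the-shelf optimizer; real HILO studies typically lean on Bayesian optimization or CMA-ES because metabolic measurements are slow and noisy.

```python
import numpy as np
from scipy.optimize import minimize

def measure_metabolic_cost(params):
    """Stand-in for the real measurement. In HILO this number comes from
    respirometry while the person walks with the candidate assistance
    profile, so every evaluation is minutes long and noisy."""
    ideal = np.array([0.35, 52.0])        # this wearer's (unknown) sweet spot
    return float(np.sum(((params - ideal) / ideal) ** 2))

# params = [peak torque (Nm/kg), peak timing (% gait cycle)] -- illustrative units.
generic_profile = np.array([0.25, 45.0])  # start from a one-size-fits-all profile
result = minimize(measure_metabolic_cost, generic_profile,
                  method="Nelder-Mead", options={"maxfev": 40})
print("personalized profile:", result.x)
```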
The commercial gap that Kang identifies maps to a familiar pattern in technology adoption: performance, personalization, price. The research has solved the first. The second is being addressed through simulation and HILO. The third is the front Hypershell and its competitors are attacking, proving that a useful wearable robot can exist at a price point where ordinary consumers, not just military units or factories, will buy it.
The Unsolved Stack
Between the controllers that work in papers and the ones that work on patients, four technical layers remain partially or wholly unbuilt.
The first is personalization through digital twins. A musculoskeletal simulation is only as useful as its fidelity to the person wearing the device. Luo et al. used domain randomization to approximate inter-subject variability, but a randomized distribution is not the same as a measured individual. A 2024 study in Wearable Technologies built patient-specific musculoskeletal models driven by multichannel EMG clothing and five IMUs, estimating ankle torque for stroke patients with an R-squared of 0.65. That number is honest: two-thirds of the variance explained, one-third still opaque. A separate rehabilitation exoskeleton study demonstrated deep virtual-physical integration for personalized gait trajectory planning and real-time kinematic feedback. The challenge, as a 2025 review in Knee Surgery, Sports Traumatology, Arthroscopy makes clear, is that digital twin models must account for pathological changes, tissue degradation, and neural adaptation patterns that differ not just between patients but within the same patient over time.
The second is cross-user transfer through foundation models. In humanoid robotics, foundation models trained on diverse embodiment data can generalize across robots. Can the same principle work for exoskeletons? A 2025 study from RIKEN published in npj Robotics is the clearest evidence so far. A Transformer model that takes first-person camera images and knee-trunk kinematic data as input learned assistive strategies for walking, squatting, and stair climbing. The key result: strategies learned from one user generalized to another without retraining. Working from the opposite direction, HumanoidExo used data collected through wearable exoskeletons to pretrain vision-language-action models for humanoid robots, achieving complex whole-body manipulation from only five real-robot demonstrations. The implication runs both ways. If exoskeleton data can pretrain humanoid foundation models, humanoid foundation models might eventually run exoskeletons. And EEG-based transfer learning has already demonstrated that CNN models for lower-limb motor intention can generalize across subjects and sessions, reducing the training time needed for new users by two-thirds.
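A schematic of the multimodal fusion idea behind the RIKEN result, with the first-person image encoded as one token and each kinematics timestep as another, is sketched below; the dimensions, the tiny CNN, and the single-torque output are assumptions for illustration, not the published model.

```python
import torch
import torch.nn as nn

class MultimodalAssistTransformer(nn.Module):
    """Schematic fusion of a first-person image and a kinematics window.

    The image becomes one token, each kinematics timestep another; a
    Transformer encoder mixes them and a head predicts the assist command.
    """
    def __init__(self, d_model=64, kin_dim=6, n_heads=4, n_layers=2):
        super().__init__()
        self.img_encoder = nn.Sequential(          # tiny CNN -> one image token
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, d_model))
        self.kin_proj = nn.Linear(kin_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, image, kinematics):
        # image: (B, 3, H, W); kinematics: (B, T, kin_dim)
        img_tok = self.img_encoder(image).unsqueeze(1)      # (B, 1, d)
        kin_tok = self.kin_proj(kinematics)                 # (B, T, d)
        tokens = torch.cat([img_tok, kin_tok], dim=1)
        fused = self.encoder(tokens)
        return self.head(fused[:, 0])                       # assist command

model = MultimodalAssistTransformer()
cmd = model(torch.randn(1, 3, 96, 96), torch.randn(1, 50, 6))
print(cmd.shape)                                            # torch.Size([1, 1])
```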
The third is multi-objective control. Kang’s hierarchy (stability first, energy second, preference third, agility last) describes the problem but not the solution. Current controllers optimize for one objective at a time. A 2025 paper in Scientific Reports tested RL-PID control on a soft exosuit and measured 12.9% metabolic reduction on flat ground and 10.7% on inclines, meaningfully outperforming traditional PID at 9.5% and 7.9%. A pediatric gait exoskeleton study published in MDPI Machines used Twin Delayed DDPG to dynamically adjust sliding-mode control gains in real time, reducing trajectory tracking error by 27.8% at the hip joint. These are advances, but they remain single-objective optimizations. The unsolved problem is a controller that simultaneously minimizes energy, prevents falls, respects patient preference, and adapts to fatigue: the hybrid system Kang described, where energy-optimal control runs most of the time but stability intervention activates within milliseconds when a perturbation is detected. That architecture mirrors the dual-system design emerging in Physical AI, where a slow deliberative planner handles nominal behavior and a fast reactive policy handles exceptions.
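A sketch of that hybrid structure: a nominal energy-oriented policy runs every control tick, and a fast stability layer overrides it the moment a perturbation detector fires. The thresholds and the placeholder policies below are assumptions for illustration.

```python
CONTROL_HZ = 500                      # illustrative inner-loop rate
TRUNK_RATE_LIMIT = 1.5                # rad/s; perturbation threshold (assumed)

def energy_policy(state):
    """Nominal assistance, e.g. a learned energy-optimal torque (placeholder)."""
    return 0.2 * state["biological_moment_estimate"]

def stability_policy(state):
    """Reactive response: brake the joint to resist the perturbation (placeholder)."""
    return -2.0 * state["joint_velocity"]

def perturbation_detected(state):
    return abs(state["trunk_angular_rate"]) > TRUNK_RATE_LIMIT

def control_step(state):
    """Stability overrides energy optimization within one control tick (~2 ms)."""
    if perturbation_detected(state):
        return stability_policy(state)
    return energy_policy(state)

state = {"biological_moment_estimate": 40.0,   # Nm, from a moment estimator
         "joint_velocity": 0.8,                # rad/s
         "trunk_angular_rate": 2.1}            # rad/s -> triggers the override
print(control_step(state))
```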
The fourth is regulatory engineering. Wandercraft’s Atalante X received its second expanded FDA clearance in November 2025 for patients with spinal cord injuries (C4 through L5) and multiple sclerosis, based on a multicenter study of 547 training sessions. It is one of approximately 950 FDA-authorized AI and machine-learning-enabled medical devices. But the van der Kooij Science Robotics review is explicit about the gap: deep neural networks are harder to interpret than traditional control laws, and the FDA’s regulatory framework was not designed for controllers that learn. Kang frames this plainly: “At the end of the day, ML or any AI model is a statistical approach. There’s no such thing as a 100% guarantee.” His lab’s response is layered safety: physical joint limits, partial assistance capped at roughly 20% of the body’s own force, and the ability for the human to override the machine at any time. These are engineering guardrails. They work in practice. Whether they satisfy formal verification requirements for a next-generation device whose controller was trained entirely in simulation is an open question that no paper has yet answered.
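Those guardrails compose naturally as a wrapper around whatever controller produced the raw command. A minimal sketch, with the roughly 20% assistance cap taken from Kang's description and every other limit and name assumed:

```python
def apply_safety_layers(tau_cmd, state,
                        assist_fraction_cap=0.20,   # ~20% of biological moment
                        tau_hw_limit=30.0,          # physical actuator limit (Nm)
                        user_override=False):
    """Clamp a learned controller's torque command before it reaches the motor."""
    if user_override:                               # the human always wins
        return 0.0
    cap = assist_fraction_cap * abs(state["biological_moment_estimate"])
    tau = max(-cap, min(cap, tau_cmd))              # partial-assistance cap
    tau = max(-tau_hw_limit, min(tau_hw_limit, tau))  # hard joint/torque limit
    return tau

state = {"biological_moment_estimate": 50.0}        # Nm, from the moment estimator
print(apply_safety_layers(tau_cmd=25.0, state=state))   # clipped to 10.0 Nm
```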
What This Actually Means
Here is what the research data supports.
The software bottleneck in wearable robotics is the same bottleneck that VLA models solved for humanoid robots, but harder. Humanoid robots control their own bodies in unconstrained space. Wearable robots must predict, interpret, and gently augment a biological body they do not control. The shift from hand-coded controllers to neural networks trained on joint moment estimation, as demonstrated by Molinaro, Kang, and Young, is the wearable equivalent of the shift from task-specific programs to foundation models in humanoid AI. But the wearable version demands safety guarantees that neural networks cannot yet provide.
Simulation-first controller design is no longer theoretical for wearable robots. The Luo et al. Nature paper proved that a musculoskeletal simulation, a reinforcement learning policy, and domain randomization can produce a functional exoskeleton controller with zero human experiments. The Scherpereel et al. Science Robotics paper proved that CycleGAN-based domain adaptation can eliminate the need for device-specific datasets entirely. These are not one-off demonstrations. They are engineering methods that other labs and companies can now replicate.
The neural interface frontier is real but distant. Farina’s motor unit decomposition and Herr’s AMI surgery represent genuine breakthroughs in how machines read the human nervous system. Both work in controlled settings. Neither is ready for a consumer product. The practical near-term path runs through IMU-based kinematic sensing augmented with deep learning, as the Georgia Tech and CMU work demonstrates. EMG adds value but introduces complexity. Brain-computer interfaces remain a vision.
The consumer exoskeleton market has crossed a threshold. Twenty thousand units sold, 19 booths at CES 2026, and entry pricing under $1,000 are not a prototype ecosystem. They are the early phase of a consumer electronics category. The controllers running on these devices are still relatively simple compared to the research frontier. The competitive advantage will accrue to whoever closes the gap between lab-grade intelligence and consumer-grade hardware first.
The unsolved stack between lab results and clinical deployment is deeper than any single paper addresses. Digital twins that track a patient’s body as it changes. Foundation models that transfer assistive strategies across users without retraining. Multi-objective controllers that balance energy, stability, preference, and fatigue in real time. And a regulatory framework that can certify a device whose controller was trained in simulation and updated through learning. Each of these layers has seen progress in 2024 and 2025. None is complete. Kang ranks his control objectives (stability first, energy second, preference third, agility last) because he works with patient populations. A logistics company would invert that list. The application determines the engineering.
Kang described his ultimate vision this way: an exoskeleton and clinician in a closed loop, where your calendar already books the appointment, the device is already personalized to your body, and you walk out recovered. That vision sits at the intersection of AI, biomechanics, neural engineering, and medicine. It also sits at the intersection of genuine help and uncomfortable questions about what it means when a machine knows your body better than you do.
“We had a GPT moment recently,” Kang said. “Maybe in the next five to ten years, we’ll have the next GPT moment in wearable robotics.” The hardware is ready. The algorithms are arriving. What remains is the much harder work of fitting intelligence to flesh.
References
[1]. Inseung Kang, CMU Interview on Wearable Exoskeletons (YouTube)
[2]. Molinaro, Kang, Young, “Estimating human joint moments unifies exoskeleton control, reducing user effort” (Science Robotics, 2024.03)
[3]. Luo et al., “Experiment-free exoskeleton assistance via learning in simulation” (Nature, 2024.06)
[4]. Herr et al., “Continuous neural control of a bionic limb restores biomimetic gait after amputation” (Nature Medicine, 2024.07)
[5]. Farina et al., “Merging motoneuron and postural synergies in prosthetic hand design” (Science Robotics, 2025.01)
[6]. Review of sEMG for Exoskeleton Robots: Motion Intention Recognition (Sensors, 2025.04)
[7]. van der Kooij et al., “AI in therapeutic and assistive exoskeletons and exosuits” (Science Robotics, 2025.07)
[8]. Walsh Lab, “A wearable robot that learns” (Harvard SEAS, 2025.08)
[9]. Deep learning for ankle exoskeleton motion classification using sEMG and IMU (Scientific Reports, 2025.10)
[10]. Hybrid EMG-EEG interface for robust intention detection and fatigue-adaptive control (Scientific Reports, 2025.11)
[11]. Scherpereel et al., “Deep domain adaptation eliminates costly data for wearable robotic control” (Science Robotics, 2025.11)
[12]. Georgia Tech, “Real-World Helper Exoskeletons Just Got Closer to Reality” (2025.11)
[13]. NC State, “AI-Powered Simulation Training Improves Human Performance in Robotic Exoskeletons” (2024.06)
[14]. Intelligent upper-limb exoskeleton with soft bioelectronics and deep learning (npj Flexible Electronics, 2024)
[15]. German Bionic, “Exia Robotic Exoskeleton at CES 2026” (2026.01)
[16]. Exoskeleton Report, “Where to find all the exoskeletons at CES 2026” (2025.12)
[17]. Hypershell, AI-Powered Exoskeletons (2025-2026)
[18]. Shaping high-performance wearable robots for human motor and sensory reconstruction (Nature Communications, 2024)
[19]. Furukawa & Morimoto, “Transformer-based multitask assist control from first-person view for exoskeleton robots” (npj Robotics, 2025)
[20]. Zhong et al., “HumanoidExo: Scalable Whole-Body Humanoid Manipulation via Wearable Exoskeleton” (arXiv, 2025.10)
[21]. Dong et al., “Cross-domain prediction of lower limb voluntary movement intention via EEG transfer learning” (Frontiers in Bioengineering, 2024)
[22]. RL-PID control for soft exoskeleton hip assistance with metabolic cost reduction (Scientific Reports, 2025.02)
[23]. RL-based finite-time sliding-mode control for pediatric gait exoskeleton (MDPI Machines, 2025.07)
[24]. Wandercraft, “FDA indication extension for Atalante X: SCI (C4-L5) and MS” (2025.11)
[25]. Sensor-driven musculoskeletal digital twin for wearable robot control (Wearable Technologies, 2024)
[26]. Deep integration of digital twin for rehabilitation exoskeleton (Molecular & Cellular Biomechanics, 2025)
[27]. Diniz et al., “Digital twin in musculoskeletal science: current state and future directions” (Knee Surgery, Sports Traumatology, Arthroscopy, 2025)