i10e, January 2026 memo

It’s an exciting time in robotics.

In 2025, we’ve seen an accelerating number of robotics companies spinning up all around the world. But while their hardware is sophisticated, the intelligence software remains primitive, lagging at least a decade behind the hardware. Intelligence is the bottleneck preventing robots from becoming useful. The leading labs have been forced to pivot from shipping autonomous robots to relying on tele-operation while they gather more data.

I believe the problem is not a lack of data or compute, but the underlying architecture. Current state-of-the-art robots can only perform narrowly-scoped tasks in controlled environments, such as folding shirts or operating a coffee machine. And that’s after intense training on millions of data points for each of those tasks. These trade show demos are cool to watch for a robot nerd like me, but the progress is too slow for the industry, and it severely limits what today’s hardware is capable of. I think the potential is far greater.

We know that humans don’t learn the way today’s AI does. We build on existing knowledge and adapt quickly to new situations. By contrast, today’s AI models are static and fully connected: every neuron is trained at once, and every neuron fires for every task. This is fundamentally different from how our brains work. What if machines could learn more like humans?

In 2026, I’m starting an intelligence research lab called i10e. The name is a numeronym for “intelligence”. Our mission is to build a new architecture for intelligence, inspired by the brain, that will enable robots to understand the world and learn from experience.

If successful, we’ll unlock the full potential of robotics and opportunities beyond, leading to an abundance of economic opportunity across industries and a new age of intelligence.


Where the industry is today

NVIDIA CEO Jensen Huang’s keynote at CES 2025

Last year, I travelled around the world and met with many robotics founders and teams. Here’s what I’ve learned about the space.

Demos are often misleading; autonomy is nowhere close. People might assume Tesla, Figure, or 1X have almost figured out autonomy because of their marketing. That marketing is often fabricated or misleading, and occasionally the covers come off.

Tesla, in all of its demonstrations, creates the impression their robots are autonomous while they are actually tele-operated by a person in the next room, leading to the occasional failure.

Figure’s marketing this year shows their robot loading a washing machine in two different environments, to demonstrate adaptability, a key problem robots struggle with. But look closely: it’s the same sequence of actions, the same model of washing machine, the same basket of clothes.

When 1X gave their TED talk in April 2025, their humanoid Neo pottered around the stage, demonstrating everyday abilities like watering flowers and fetching things, without any disclosure that it was entirely tele-operated. When I met their founder at their office in Palo Alto over the summer, he wasn’t willing to show me a single autonomous demonstration, offering only a hand-wavy explanation that they would have, but setting up a unit would be a lot of work. A few months later, the WSJ had the same experience, receiving an entirely tele-operated demo.

Quoting Chris Paxton, a long-time industry analyst and insider:

The general rule when watching any robot video is “it can do exactly what you see it do, and literally nothing else”

Almost every lab is taking the same technical approach to solving intelligence, and nobody has a lead. This is a research space: innovations are published through papers and conference talks, and people move around a lot.

The algorithms and methods are the same across the industry. The typical approach is a mix of Vision-Language-Action (VLA) models (LLM architectures adapted to output robot actions), trained on tele-operation or simulation data and combined with some reinforcement learning. Everyone faces the same challenges: the “sim2real” gap, dynamic environments, and the need to collect massive amounts of real-world data.
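
To make that recipe concrete, here’s a minimal sketch of the VLA pattern in PyTorch. Everything in it is illustrative; the sizes, the two-layer backbone, and the discretised action heads are assumptions for demonstration, not any particular lab’s model:

```python
# A minimal sketch of the VLA recipe: encode an image and a language
# instruction with a shared transformer, then predict discretised actions.
# Every name and size here is illustrative, not any lab's real model.
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    def __init__(self, vocab=1000, dim=128, action_dims=7, bins=256):
        super().__init__()
        self.patch_embed = nn.Linear(3 * 16 * 16, dim)  # flattened 16x16 RGB patches
        self.text_embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        # One head per action dimension (e.g. a 7-DoF arm), each choosing one
        # of `bins` discretised values, in the style of action-token models.
        self.action_heads = nn.ModuleList(
            nn.Linear(dim, bins) for _ in range(action_dims)
        )

    def forward(self, patches, instruction):
        tokens = torch.cat(
            [self.patch_embed(patches), self.text_embed(instruction)], dim=1
        )
        pooled = self.backbone(tokens).mean(dim=1)      # pool vision+language tokens
        return [head(pooled) for head in self.action_heads]

model = TinyVLA()
patches = torch.randn(1, 16, 3 * 16 * 16)               # one camera frame, 16 patches
instruction = torch.randint(0, 1000, (1, 8))             # "fold the shirt", tokenised
action_logits = model(patches, instruction)              # 7 tensors of shape (1, 256)
```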

The progress is very incremental, and the current capabilities are primitive. Nobody has a significant lead. Intense data collection and training are not producing models that generalise, because the real world is so complex; robots are still limited to narrow tasks in controlled environments. Perhaps this has surprised the robotics companies, who imagined that if they invested in both hardware and intelligence, the software would arrive at the same time as the hardware. Right now, it’s a blocker for commercialisation.

I believe an intelligence breakthrough is required for these robots to become useful and commercially valuable, and that this breakthrough lies in a new architecture, one more inspired by the brain. We know that brains can solve all of these problems. Even low-intelligence animals like mice are able to understand the world and quickly adapt to problems they haven’t seen before, whereas a highly-trained VLA model cannot. The key architectural difference is that an LLM is a dense, static function, calling on every neuron for every new token, while a brain is sparsely connected and dynamic, recruiting neurons only when relevant and constantly changing its structure as we have new experiences. We’re able to learn new things without forgetting other important things, and we can adapt existing knowledge to understand new experiences quickly.
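
To show the contrast in code, here’s a toy comparison: a dense block that runs every hidden unit for every input, next to a sparsely-gated block that recruits only a couple of small subnetworks per input. Mixture-of-experts routing is used here purely as a familiar stand-in for conditional computation; it is not the architecture i10e is building, and the sizes are arbitrary:

```python
# Dense computation (every neuron fires for every input) versus sparse,
# conditional computation (only a few subnetworks are recruited per input).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, dim=64, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return self.net(x)  # all 1024 hidden units participate, for every input

class SparseBlock(nn.Module):
    def __init__(self, dim=64, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):
        # Pick the k most relevant experts per input; the other 14 subnetworks
        # never run, so most "neurons" stay silent for any given input.
        weights, idx = torch.topk(
            torch.softmax(self.router(x), dim=-1), self.k, dim=-1
        )
        out = torch.zeros_like(x)
        for b in range(x.shape[0]):
            for j in range(self.k):
                expert = self.experts[int(idx[b, j])]
                out[b] += weights[b, j] * expert(x[b:b+1])[0]
        return out

x = torch.randn(4, 64)
print(DenseBlock()(x).shape, SparseBlock()(x).shape)  # same shapes, very different compute
```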

My first major goal with i10e is to build a new type of intelligence inspired by the brain, and to demonstrate this working at a small scale. I believe this is the breakthrough we need to enable robots which can understand the world, adapt existing knowledge, and learn from experience.


Why I’ve decided to work on this

There are many smart researchers capable of discovering this breakthrough. But almost everyone researching intelligence today is focused on a narrow band of work around LLM architectures.

Only two other efforts come to mind that are taking a wider approach to intelligence:

  • Keen Technologies, from John Carmack. John has talked on Lex Fridman’s podcast about his vision that “this will be invented by a small team, a few thousand lines of code, 5-6 key breakthroughs.” Their focus today is on advancements in Reinforcement Learning.
  • AMI Labs, from Yann LeCun, who recently left Meta and famously believes LLMs are a dead end on the path to AGI.

This is a wide search space with very few players. I’d love to see more competition.

I don’t have a PhD or a typical academic background. But I have invented several technologies that had eluded researchers despite decades of effort.

I developed AR navigation by combining GPS and AR SLAM, with smart algorithms to enable real-world overlays like street directions and points of interest. It became the largest open-source project for Apple’s ARKit and CoreLocation frameworks, and became embedded in Apple and Google Maps.
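
The core step can be sketched like this: converting a point of interest’s latitude/longitude into the local metric frame the SLAM tracker maintains (e.g. ARKit’s gravity-and-heading-aligned world frame). This is a simplified, hypothetical version, not the original project’s code:

```python
# Convert a point of interest's lat/lon into the local metric frame the SLAM
# tracker maintains, e.g. ARKit's gravity-and-heading-aligned world frame,
# where +x is east and north is -z. Hypothetical minimal version.
import math

EARTH_RADIUS_M = 6_371_000.0

def geo_to_local(origin_lat, origin_lon, poi_lat, poi_lon):
    """Equirectangular approximation: accurate enough over the few hundred
    metres that a walking AR session covers."""
    d_lat = math.radians(poi_lat - origin_lat)
    d_lon = math.radians(poi_lon - origin_lon)
    east = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat))
    north = EARTH_RADIUS_M * d_lat
    return east, north  # metres, to place an overlay anchor in the AR scene

origin = (51.5007, -0.1246)   # where the AR session started (from CoreLocation)
big_ben = (51.5009, -0.1245)  # a point of interest to label
print(geo_to_local(*origin, *big_ben))  # ~ (6.9 m east, 22.2 m north)
```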

At Hyper, I invented precise indoor location by fusing AR SLAM with WiFi signals to eliminate noise and drift. The results amazed customers who had been frustrated by other solutions, and it led to a global rollout contract with IKEA.
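
The intuition can be shown with a toy complementary filter: SLAM supplies smooth relative motion that drifts over time, WiFi supplies absolute fixes that are noisy but drift-free, and each cancels the other’s weakness. The sketch below is illustrative only; the production system was more sophisticated, and the gain value here is made up:

```python
# Toy complementary filter: dead-reckon with SLAM's relative motion, then
# nudge the estimate toward each absolute WiFi fix.
def fuse(position, slam_delta, wifi_fix=None, gain=0.05):
    """One update step; all arguments are (x, y) positions/deltas in metres."""
    # SLAM: precise short-term relative motion, but drifts over time.
    x = position[0] + slam_delta[0]
    y = position[1] + slam_delta[1]
    if wifi_fix is not None:
        # WiFi: noisy but drift-free absolute position; pull gently toward it.
        x += gain * (wifi_fix[0] - x)
        y += gain * (wifi_fix[1] - y)
    return (x, y)

pos = (0.0, 0.0)
steps = [((0.5, 0.00), None),
         ((0.5, 0.02), None),
         ((0.5, 0.01), (1.6, 0.0))]  # a WiFi fix arrives on the third step
for delta, fix in steps:
    pos = fuse(pos, delta, fix)
print(pos)  # drift accumulates, then the fix reins the estimate back in
```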

In both cases, I worked from a clear technical vision and first-principles thinking. I gained deep experience with robotics technologies like SLAM localisation, motion sensors, and reinforcement learning, and I learnt how to build a sophisticated research org with the right infrastructure and pipelines to accelerate development.

There are few times in life when we have a clear vision on a problem, where we’re able to see ten or twenty steps ahead of others, making us impossible to compete with. Robot intelligence is one of those areas for me. I’m excited to bring my experience and vision to this problem.


The Opportunity

In this first year, I will focus on:

  • Research into new architectures, aiming for initial “signs of life”.
  • Creating initial marketing content to build our profile and attract a future team, investment, and partners.
  • Building connections within the intelligence and robotics space, continuing to meet founders and teams around the world and attending events.
  • Finding a founding team, including a CEO. After this initial period, I will become co-founder and CTO, or take another role that lets me focus on product, technology, research, and marketing.
  • Making progress on an initial commercial strategy, which will be informed by how the technology works and how the space evolves.

This graphic shows the $bn valuations of nascent “robotic brains” startups founded within the last few years:

Robotics is growing quickly, with massive future potential. The leading players in technology have either already entered the space, made major investments, or will do so in the near future. Jeff Bezos has invested in three startups on this list, and another was acquired by Amazon.

Large-scale acquisitions are common in AI and robotics, especially when a company has novel technology or talent that can accelerate an acquirer’s efforts.

The upside opportunities for i10e are a large acquisition of our intelligence technology, or building into an OpenAI-level startup that sells intelligence for a billion robots.


I am raising a small friends-and-family round of around £250k, which gives up to 18 months of runway, enough time to support this initial research and then raise a larger round. I am not advertising this round; it is limited to angel investors I have worked with before.