weird-tech
2/23/2026

The first cars bold enough to drive themselves

Before lidar domes and Level 4 dashboards, a Spanish inventor steered boats and tricycles by radio. That quirky lineage—from teleoperated curiosities to today’s robotaxis—shows autonomy grew from a century of bold hacks, not overnight magic.

Background

When people talk about self-driving cars, the timeline usually starts in Silicon Valley and ends with a lidar-topped minivan gliding through a sunlit intersection. The real story begins much earlier—and much stranger. At the dawn of the 20th century, engineers were already teasing the idea that you could move a vehicle without a human at the wheel, using radio waves, wires in the road, or machines that tried to "see."

One of the most influential early steps came from Spain. Leonardo Torres Quevedo—best known for pioneering work in aeronautics and automation—developed the Telekino in the early 1900s. It was a wireless control system with a clever coded signaling scheme and an onboard electromechanical decoder. In public demonstrations, he guided a small vehicle on land and a boat on the Nervión River near Bilbao without anyone on board. The device didn’t make a vehicle “think,” but it proved a powerful idea: driving commands can be transmitted, interpreted, and executed without a driver holding the wheel.

That distinction—between remote control and autonomy—matters. Early “driverless” exhibitions were often teleoperation stunts: a second vehicle or nearby operator sent commands, and the car obeyed. Autonomy, by contrast, means the vehicle perceives the world and decides its own actions within limits. Modern robotaxis still mix the two ideas: they drive themselves most of the time, but remote assistants can help in tricky moments.

Other early experiments added rungs to the ladder:

  • In 1898, Nikola Tesla wowed New York crowds by steering a small boat with a radio transmitter—an early proof that wireless control could be precise and reliable.
  • In 1925, a driverless car cruised through Manhattan traffic while a chase car sent radio commands—dramatic, imperfect, and undeniably attention-grabbing.
  • In the 1950s, researchers embedded wires in test roads so cars could “feel” where to go with magnetic pickups—primitive lane guidance decades before painted lanes and vision networks.
  • By the late 1970s, research robots began seeing the world through cameras and planning their own paths, albeit slowly.

The throughline is clear: before cars could perceive, they learned to listen. Before they could plan, they learned to follow. Each stage replaced a human function—feet, hands, then eyes and brain—with sensors, signal processing, and ultimately software.

What happened

Ars Technica revisits this odd, delightful prehistory of autonomy, putting special focus on Torres Quevedo’s Telekino as a milestone that pushed vehicles toward driverless behavior. Unlike earlier party tricks, the Telekino wrapped wireless control in an organized, coded protocol and a decoder on the vehicle. Instead of a human twisting mechanical linkages or yanking levers, a compact electromechanical brain translated sequences of radio pulses into discrete actions like “turn,” “accelerate,” or “stop.” It was both remote control and a hint of automation: human intent turned into a small vocabulary of machine instructions.
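The core idea here, a small vocabulary of coded pulses decoded into discrete actions, can be sketched in a few lines. Everything below is illustrative: the pulse codes, the action names, and the grouping scheme are assumptions for clarity, not Torres Quevedo's actual grammar.

```python
# Illustrative sketch of a Telekino-style decoder: coded pulse
# sequences arrive over a noisy channel and map onto a small
# vocabulary of discrete vehicle actions. Codes and action names
# are hypothetical, not the historical scheme.

COMMANDS = {
    (1, 0, 1): "turn_left",
    (1, 1, 0): "turn_right",
    (0, 1, 1): "accelerate",
    (0, 0, 0): "stop",
}

def decode(pulses):
    """Return the action for a 3-pulse code, or None if unrecognized."""
    return COMMANDS.get(tuple(pulses))

def run(pulse_stream):
    """Group a flat pulse stream into 3-pulse codes and collect the
    resulting actions, ignoring garbled (unrecognized) codes -- an
    early form of fail-safe behavior."""
    actions = []
    for i in range(0, len(pulse_stream) - 2, 3):
        action = decode(pulse_stream[i:i + 3])
        if action is not None:
            actions.append(action)
    return actions
```

The point of the sketch is the separation of concerns: the channel carries only coded pulses, and all interpretation happens in the onboard decoder, which is exactly the split that later grew into command channels and control software.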

Seen from today’s vantage point, these demonstrations look like parlor games. But they forced hard engineering questions that still shape the field:

  • How do you encode the driver’s intent into signals a machine can trust?
  • How do you build fail-safes if the link goes down or a command is garbled?
  • What’s the boundary between human authority and machine execution?

Teleoperation demanded robust signaling and error handling; autonomy later demanded perception and prediction. The former taught us about command channels and safety; the latter added sensors and probabilistic reasoning. Put together, they form the architecture of modern driverless systems: perception, prediction, planning, control, all wrapped in communications and fail-safe behavior.
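Those fail-safe questions still translate directly into code. A minimal watchdog sketch, with an illustrative timeout and a deliberately trivial checksum (real systems use CRCs and redundant links), might look like this:

```python
import time

# Minimal command-channel watchdog sketch: garbled commands are
# ignored, and prolonged silence triggers a safe stop. The timeout
# value, checksum, and action names are illustrative assumptions.

LINK_TIMEOUT_S = 0.5  # how long silence is tolerated before stopping

def checksum_ok(payload, received_sum):
    """Trivial additive checksum; stands in for a real CRC."""
    return sum(payload) % 256 == received_sum

class Watchdog:
    def __init__(self, now=time.monotonic):
        self.now = now
        self.last_good = now()  # time of the last valid command

    def on_command(self, payload, received_sum):
        """Accept a command only if its checksum matches; a garbled
        command is dropped rather than executed."""
        if checksum_ok(payload, received_sum):
            self.last_good = self.now()
            return payload
        return None

    def safe_action(self):
        """Fallback behavior: stop if the link has been quiet too long."""
        if self.now() - self.last_good > LINK_TIMEOUT_S:
            return "stop"
        return "continue"
```

The design choice is the one the early experimenters stumbled onto: never treat silence or noise as a command, and make the default behavior in their presence a safe one.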

A timeline of daring machines

  • 1898 – Wireless beginnings: Tesla steers a small boat by radio at Madison Square Garden. For the first time, a mobile machine responds continuously to a human far away.
  • 1903–1906 – Telekino takes shape: Torres Quevedo patents and demonstrates a coded wireless control system that drives a land vehicle and a boat with onboard decoding and pre-defined actions.
  • 1920s – Urban theater: Radio-controlled “phantom” cars weave through city streets while a second vehicle shadows and transmits commands. Spectacular, but still teleoperation.
  • Late 1930s–1950s – The electronic highway: World’s Fairs and lab trials imagine roads embedded with wires or electronics; cars sense the guide and follow hands-free. Nebraska and other locales test wire-guided lanes in the late 1950s.
  • 1960s–1970s – Robots learn to see: Research platforms like Shakey at SRI and the Stanford Cart inch forward with crude vision and planning. Progress is slow, but the blueprint for autonomy emerges.
  • 1977 – Japan’s vision car: At Japan’s Mechanical Engineering Laboratory in Tsukuba, a camera-guided vehicle follows marked lanes on a controlled course at road-like speeds for the era.
  • 1980s–1990s – High-speed autonomy in Europe and the US: Ernst Dickmanns and colleagues in Germany field autobahn-capable vehicles that keep lanes, pass, and merge using cameras and dynamic vision. In the US, Carnegie Mellon’s Navlab and ALVINN projects push long-distance autonomous steering on highways.
  • 2004–2007 – The DARPA challenges: Desert and urban competitions turn academic prototypes into reliable autonomous stacks, normalize lidar as a core sensor, and harden the software for the field.
  • 2009–present – Commercialization: Google’s project—later Waymo—industrializes the approach, pairing rich sensor suites with safety cases, mapping, and operations. Other players rise and stumble, but the template spreads: tightly geofenced, highly tested deployments in specific cities.

Why that early “weird tech” matters

  • It reframed “driving” as signal processing. Telekino’s codes, wire-guided lanes, and radio stunts carved driving into discrete commands that machines could execute.
  • It seeded the safety mindset. From the outset, inventors planned for lost signals, interference, and misinterpretation—foreshadowing today’s redundancy, watchdogs, and fallback maneuvers.
  • It normalized the idea that a vehicle’s intelligence could be offboard. A chase car sending commands is not so different in spirit from modern remote assistance or cloud-based route updates.
  • It bridged human and machine agency. Even now, commercial driverless fleets blend autonomy with human-in-the-loop support when uncommon situations crop up.

Key takeaways

  • Long before lidar and deep learning, inventors proved you could separate the act of “deciding to steer” from “turning the wheel.”
  • Torres Quevedo’s Telekino stands out because it encoded and decoded commands onboard—an early fusion of communications and control that foreshadowed autonomy.
  • The path to self-driving ran through radio-controlled stunts, wire-in-road guidance, and painfully slow vision systems—all incremental steps that swapped human roles for machine modules.
  • Modern robotaxis still rely on ideas born in that era: command channels, fail-safes, and the willingness to take the human out of the vehicle while keeping humans in the system.
  • Autonomy is not a binary. Today’s Level 4 services blend onboard decision-making with remote supervision and pre-mapped operational design domains (ODDs).
  • Public demos—then and now—are more than theater. They stress systems in messy environments, accelerate standards, and clarify what counts as “driverless.”
  • The most important innovations aren’t always visible. Coded signaling, watchdogs, and redundancy are as crucial as sensors and AI flair.

What to watch next

Technology and architecture

  • Remote assistance grows up: Expect more formal standards around “remote driving” versus “remote support,” with precise limits on what a human can authorize from afar and how latency is handled.
  • Sensing without excess: The stack is converging on multi-modal sensing (cameras, lidar, radar) with smarter fusion and fail-operational designs. Watch for lidars that are cheaper, radars with higher resolution, and learning-based perception that uses fewer annotated maps.
  • Learning to generalize: Tools that let fleets learn across cities—simulation at scale, self-supervised perception, and continual improvement pipelines—will matter more than any single sensor upgrade.

Policy and safety

  • Codifying safety cases: Regulators are shifting from prescriptive checklists to evidence-based safety cases. Companies will have to quantify risk, not just meet component specs.
  • Level 3 and beyond: Highway hands-off features under UN regulations and national laws are stepping stones. The tension between supervised Level 2/3 and unsupervised Level 4 will drive new compliance regimes.
  • Incident transparency: As fleets expand, incident reporting and independent data access will determine public trust. Expect stronger disclosure rules and standardized post-incident analyses.

Operations and economics

  • Fewer babysitters per car: The cost curve hinges on reducing remote interventions per thousand miles while keeping safety margins. Operations platforms—not just driving software—decide unit economics.
  • Weather and edge cases: All-season, all-city performance is still the frontier. Progress will be visible when services expand in snowy climates and handle unstructured construction zones gracefully.
  • Integration with public mobility: Driverless shuttles on fixed routes may pair with robotaxis to create reliable, affordable coverage without chasing every long-tail edge case.

Culture and acceptance

  • Clear driver models: People don’t just want safety; they want predictability. Vehicles that signal intent, through motion planning that “reads” human norms, will win over riders and regulators alike.
  • From novelty to utility: As rides get boring (in a good way) and prices settle, the technology will quietly shift from headline to habit—much like elevators once did.

FAQ

  • What’s the difference between teleoperation and autonomy?

    • Teleoperation means a human operator sends commands from elsewhere and the vehicle obeys. Autonomy means the vehicle senses, plans, and acts on its own within defined limits. Many real deployments blend the two, with autonomy doing the driving and humans available to help in rare situations.
  • Was Telekino really the first remote-controlled vehicle?

    • It was among the earliest practical, coded wireless control systems applied to vehicles, notable for onboard decoding and a defined command grammar. Other inventors, including Nikola Tesla, demonstrated radio control even earlier. Telekino’s significance is how it formalized the control channel in a way that hinted at automation.
  • Do modern robotaxis still use remote drivers?

    • They don’t drive the car like a video game in normal operation. However, remote staff can provide high-level assistance—confirming a route around a blockage or authorizing a maneuver—when the system requests help.
  • Why did early researchers try wires in the road?

    • Embedding guides in infrastructure simplified the problem: let the road tell the car where to go. It sidestepped hard perception tasks, but at the cost of expensive upgrades and little flexibility. Today’s approach pushes more intelligence into the vehicle so it can work with existing roads.
  • What made the 2000s such a turning point?

    • The DARPA challenges catalyzed robust software stacks, fused sensors like lidar and radar with cameras, and built a community that could harden academic ideas into field-ready systems. That momentum carried into commercial programs.
  • What’s the hardest part now?

    • Scaling with reliability. Getting from one or two cities to dozens without safety regressions means better generalization, stronger operations, and transparent safety evidence.
  • Will we ever get cars that go everywhere, anytime?

    • Eventually, but likely in stages. Expect expansions of operational domains—more neighborhoods, more weather, more road types—rather than an overnight leap to everywhere.

Source & original reading

Ars Technica’s feature on early driverless experiments and Torres Quevedo’s Telekino: https://arstechnica.com/features/2026/02/the-first-cars-bold-enough-to-drive-themselves/