Luke's Blog

When Human Awareness Still Matters: Rethinking AI in Self-Driving Cars

February 17, 2026

Autonomous vehicle technology promises a future where cars drive themselves with little or no human input. Many proponents of AI driving systems claim these technologies will reduce accidents and make travel safer. In "The Silicon Blindfold: Why AI Driving is a Mechanical Liability," the author argues that we are not ready for that future because AI simply cannot "feel" the road in the way a human driver does, and that this technical limitation creates real safety risks. While I agree that self-driving systems have important limitations, I also think the broader conversation should consider both the challenges and potential benefits of this technology, rather than rejecting it outright.

Machines May Perceive Differently, But Are They Always Inferior?

In "The Silicon Blindfold," the author argues that AI driving systems cannot sense subtle mechanical cues — such as slight changes in vibration or texture — that a human behind the wheel might notice. They claim that this inability to "feel" the road makes AI systems inherently unsafe, turning drivers into passive observers who have lost real awareness on the road.

This perspective highlights a real issue: current autonomous systems can only operate with the data their sensors provide. They do not have the phenomenological experience of human drivers — the sense of feeling a tire's resistance, the subtle shift in weight during a turn, or the micro-vibrations that signal a failing component. This limited perception is exactly what the author criticizes, and it reflects a larger conversation about the boundaries between machine perception and human embodied experience.

However, the argument oversimplifies how AI driving systems actually perceive and interact with the environment. These systems rely on a combination of sensors, typically cameras, radar, and often lidar, that can detect obstacles, interpret traffic signals, and measure distances with a consistency that human attention cannot match. A study analyzing autonomous vehicles found that in many conditions such systems reduce the likelihood of certain types of collisions compared with human drivers, suggesting that machine perception can complement human perception rather than simply replace it.
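To make that consistency claim concrete, here is a minimal sketch, in Python, of one common way a multi-sensor stack combines readings: fusing two noisy range estimates with inverse-variance weighting. The sensor variances and readings below are illustrative assumptions, not the specifications of any real vehicle.

    # Minimal sensor-fusion sketch: combine two noisy range estimates
    # (say, radar and lidar) with inverse-variance weighting.
    # All numbers are illustrative assumptions, not real sensor specs.

    def fuse_ranges(r_radar, var_radar, r_lidar, var_lidar):
        """Return a fused range estimate and its variance.

        Each reading is weighted by the inverse of its variance, so the
        more precise sensor dominates without being trusted blindly.
        """
        w_radar = 1.0 / var_radar
        w_lidar = 1.0 / var_lidar
        fused = (w_radar * r_radar + w_lidar * r_lidar) / (w_radar + w_lidar)
        fused_var = 1.0 / (w_radar + w_lidar)
        return fused, fused_var

    # Hypothetical readings: radar reports 42.8 m with variance 0.5 m^2;
    # lidar reports 42.1 m with variance 0.1 m^2.
    distance, variance = fuse_ranges(42.8, 0.5, 42.1, 0.1)
    print(f"fused range: {distance:.2f} m, variance {variance:.3f} m^2")

The point of the sketch is that the fused variance is smaller than either input's, which is one reason a multi-sensor stack can report distances more consistently than any single channel, human or machine.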

This doesn't mean AI driving is perfect, far from it, but it suggests that its value should be evaluated more holistically. Human "feeling" of the road is not strictly superior to machine sensing; it is simply a different mode of perception. The question is not which is better across all situations, but how each can be used where it is strongest.

Ethics and Responsibility Beyond the Driver's Seat

Another major point raised in "The Silicon Blindfold" is accountability. The author argues that even when AI is in control, humans remain legally responsible for what happens. This creates an ethical and legal dilemma: if a machine makes a decision that leads to harm, who should be held accountable — the human passenger, the manufacturer, or the software developer?

This question is not unique to autonomous driving. Similar debates have emerged in medicine, where doctors use AI tools to support diagnosis and treatment decisions. If an AI suggests a course of action and the doctor follows it, who is ultimately responsible for the outcome? These kinds of questions reveal a broader issue: as AI becomes more involved in life-critical decisions, responsibilities become shared and harder to define.

Ethics researchers have documented these challenges, noting that autonomous systems introduce new safety risks because they operate through algorithmic decision-making that can be opaque and unpredictable in edge-case scenarios — situations outside the norm that are difficult to test exhaustively. In traffic, this can mean misidentifying unusual objects, failing to anticipate sudden pedestrian behavior, or struggling with poor weather conditions — all real concerns that go beyond traditional human perception limitations.

It is reasonable to argue, as the original blog post does, that this raises questions about whether we should deploy AI systems before these ethical and technical issues are more fully resolved. At the same time, it is important not to frame the problem as simply a choice between perfect human drivers and flawed machines. Both humans and machines make errors; the issue is how we design systems that minimize harm, allocate responsibility fairly, and ensure that oversight mechanisms are in place.

Public Perception Reflects Real Safety Questions

Public skepticism about self-driving cars is not unfounded. A recent Forbes Advisor survey found that 93% of Americans have concerns about the safety and reliability of autonomous vehicle technology, citing accident data and ongoing recalls as reasons for concern.

This skepticism mirrors the broader unease many people feel toward automation: the fear that machines are replacing human judgment without sufficient safeguards. In the context of driving, people's doubts often stem from widely publicized incidents where autonomous systems behaved unpredictably or failed to safely navigate complex environments. These concerns can reinforce the idea that AI systems are inherently unsafe or that human drivers must remain fully in control.

Yet research also suggests that autonomous technologies may outperform humans in many scenarios. For example, in controlled studies, certain models of self-driving vehicles were shown to reduce several types of crashes significantly when compared with conventional human-driven cars, especially in areas like rear-end and head-on collisions. This indicates that while autonomous systems struggle in some areas — such as low-visibility conditions — they excel in others.

The public perception issue is not just about technology being good or bad; it is about trust. People want transparency, accountability, and clear evidence that these systems will not introduce new risks while solving old ones. Simply dismissing AI driving as unsafe reinforces that distrust, whereas carefully acknowledging both its strengths and weaknesses can help build more informed conversations.

Autonomous Driving as a Partnership, Not a Replacement

I agree with the original blog's concern that autonomous driving systems should not blindly replace human awareness. But I also believe that treating AI as purely inferior or purely dangerous misses the most productive path forward: partnership.

Rather than seeing AI and humans as competitors for control of the wheel, we can design systems where humans and machines work together. In many current advanced vehicles, this is already happening: driver-assistance features improve safety by compensating for human limits without completely removing human agency.
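As a toy illustration of that division of labor, consider the kind of check a forward-collision warning feature runs continuously, sketched here in Python. The warning threshold and the readings are assumptions for the example, not any manufacturer's actual values.

    # Toy forward-collision-warning check: alert when time-to-collision
    # (TTC) falls below a threshold. The threshold and readings are
    # illustrative assumptions, not production values.

    TTC_WARNING_SECONDS = 2.5  # assumed alert threshold

    def time_to_collision(gap_m, closing_speed_mps):
        """Seconds until contact at the current closing speed.

        Returns infinity when the gap is holding steady or opening.
        """
        if closing_speed_mps <= 0:
            return float("inf")
        return gap_m / closing_speed_mps

    def should_warn(gap_m, closing_speed_mps):
        return time_to_collision(gap_m, closing_speed_mps) < TTC_WARNING_SECONDS

    # A 30 m gap closing at 15 m/s gives a TTC of 2.0 s: warn the driver.
    print(should_warn(30.0, 15.0))   # True
    print(should_warn(60.0, 15.0))   # False (TTC = 4.0 s)

The machine evaluates a check like this many times a second without fatigue, while the driver keeps the authority to decide how to respond, which is the partnership in miniature.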

This collaborative approach recognizes that neither humans nor machines have perfect perception. Humans bring common-sense judgment, ethical reasoning, and contextual understanding. Machines bring data processing, consistency, and rapid response capabilities. When these strengths are integrated thoughtfully, the result can be a safer and more resilient driving experience.

Some technologies are already moving in this direction. Remote assistance systems, for example, allow human professionals to intervene momentarily when an autonomous system encounters uncertainty, ensuring that high-stakes decisions are overseen by trained humans. This model reflects a future where control is shared rather than handed entirely over to AI or kept entirely by humans.
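A simplified sketch of that shared-control pattern might look like the following Python loop. The planner, the confidence score, the scene labels, and the handoff threshold are all hypothetical stand-ins, since real remote-assistance interfaces are proprietary.

    # Simplified shared-control loop: the vehicle acts on its own plan
    # while confidence is high and defers to a remote human operator
    # when it is not. All names, scores, and thresholds are hypothetical.

    CONFIDENCE_FLOOR = 0.85  # assumed handoff threshold

    def plan_maneuver(scene):
        """Stand-in planner: returns a maneuver and a confidence score."""
        # A real planner would score its plan from live sensor data; here
        # we fake the score so the handoff logic can be exercised.
        if scene == "unmapped construction zone":
            return "proceed", 0.40
        return "proceed", 0.95

    def request_remote_operator(scene):
        """Stand-in for a request to a trained remote-assistance operator."""
        return "hold position, then follow the flagger's hand signals"

    def drive_step(scene):
        plan, confidence = plan_maneuver(scene)
        if confidence >= CONFIDENCE_FLOOR:
            return f"autonomous: {plan}"
        # Degrade to a conservative state, then let a human make the call.
        return f"remote-assisted: {request_remote_operator(scene)}"

    print(drive_step("clear highway"))
    print(drive_step("unmapped construction zone"))

The design choice worth noticing is that low confidence does not mean the vehicle freezes; it degrades to a safe state and pulls a trained human into the loop.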

Toward Better Policy and Design

If society truly wants safer roads, the focus should be on policy, regulation, and design standards that reflect the complexity of autonomous driving — not on dismissing AI technology entirely. Researchers and policymakers are actively debating how to set liability standards that integrate reasonable human driver expectations with product-liability frameworks for AI systems.

Regulations, transparency in reporting accidents, and clearer definitions of accountability can help resolve the ethical ambiguities highlighted in "The Silicon Blindfold." At the same time, continuing research and development can improve autonomous technologies so they can handle more complex and unpredictable real-world scenarios.

Ultimately, like any powerful tool, autonomous driving systems deserve both careful scrutiny and thoughtful development. They raise real safety, perception, and ethical questions — but they also offer opportunities to reduce certain types of human error and reimagine mobility for people who cannot drive for themselves.

Sources

Bieber, Christy, J.D. "93% Have Concerns About Self-Driving Cars – According to a New Forbes Legal Survey." Forbes Advisor, 2024.

Butcher, Adam. "Why Are Self-Driving Cars Dangerous?" AEANET, 2025.