The auto-pilot conundrum: Self-driving cars raise concerns

What’s in a name?

According to U.S. Senator Edward Markey, one name that is ambiguous and confusing, if not dangerously disingenuous, is Tesla’s branded Autopilot driver-assistance system. Since its introduction in 2016, fourteen crashes, three of them fatal, have occurred domestically while Autopilot was engaged.

The lawmaker is asking the Elon Musk-led company to strengthen the system’s safety features and publicize the changes through a wholesale rebranding and remarketing initiative. His goal is to reduce potential misuse by drivers who rely too heavily on the automation.

Concerns Continue to Grow

The senator is not alone in his concerns. Questions surround the growing lengths of time that drivers are allowing their cars to drive themselves. More, not less, hands-on human involvement should be the priority, particularly with a system known to struggle to identify even the simplest stationary objects.

Tesla continues to affirm its commitment to safety with the recent introduction of red-light and stop-sign warnings that alert inattentive drivers. Yet these improvements still depend on a driver remaining behind the wheel, even when that driver’s hands aren’t on it.

Driver interaction with the system is paramount. Minimizing or entirely removing the human factor has already resulted in deaths. Technological innovations should make the lives of consumers better, not end them in a split second due to a preventable crash.

For now, the best answer to this growing problem is raising awareness of the limitations of self-driving systems.