Researchers Show How Simple Stickers Could Trick Self-Driving Cars

By David Z. Morris
September 2, 2017, 12:24 PM ET

A team of researchers from four American universities has provided a troubling preview of how self-driving cars could be tricked into making dangerous mistakes. By altering street signs in ways that look innocuous to a human observer, the researchers were able to completely alter how an artificial intelligence interpreted them.

This isn’t just a matter of slapping paint across a sign. The team, whose work was first highlighted by Ars Technica, designed an attack algorithm that carefully tailors the visual “perturbations” to be applied to an existing sign. The alterations, made using standard color printing or stickers, look like either graffiti or general wear to a human. One example will be familiar to many urban drivers – stickers that change a standard Stop sign to instead read “Love Stop Hate.”
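The paper's attack solves an optimization problem to decide where the stickers go, but the underlying idea is the standard adversarial-example trick: nudge pixel values in the direction that most degrades the classifier's confidence in the true label. Below is a minimal, illustrative FGSM-style sketch on a toy linear "stop sign" classifier. The weights, the flattened image, and the perturbation budget `eps` are all invented for the demo; this is not the researchers' actual method (their Robust Physical Perturbations technique constrains the change to printable, sticker-shaped regions).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: score > 0 means "stop sign", else "speed limit".
# Weights and "image" are hand-picked for illustration only.
w = np.array([1.0, -0.5, 2.0, 0.8, -1.2, 0.6])   # stand-in trained weights
x = np.array([0.9,  0.1, 0.8, 0.7,  0.2, 0.6])   # stand-in flattened sign image

p_clean = sigmoid(w @ x)  # model's confidence the clean image is a stop sign

# FGSM step: move every pixel by eps in the sign of the gradient of the loss
# for the true ("stop sign") label. For logistic loss, grad_x = (p - 1) * w.
eps = 0.3
grad = (p_clean - 1.0) * w
x_adv = np.clip(x + eps * np.sign(grad), 0.0, 1.0)  # keep pixels in [0, 1]

p_adv = sigmoid(w @ x_adv)  # confidence drops even though each pixel moved <= eps
print(round(p_clean, 3), round(p_adv, 3))
```

Even this crude, unconstrained version shows the core effect: a small, bounded change to every pixel measurably erodes the classifier's confidence. The paper's contribution is making such perturbations survive printing, distance, and viewing angle in the physical world.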

But to the demonstration neural network used by the researchers, these ho-hum alterations became mind-bending hallucinations. In the most dramatic example, a Stop sign altered with what looks like natural weathering was consistently seen by the neural network as a 45 mph speed limit sign. In another test, just four carefully placed rectangular stickers caused a Stop sign to be read as a speed limit sign only slightly less consistently.

Those misinterpretations could, obviously, be incredibly dangerous. And they held up under a wide variety of conditions, including different distances and viewing angles.

There are two caveats. The research has been publicly shared, but hasn't yet been peer reviewed. And the demonstration didn't use any existing commercial self-driving or vision system – the researchers trained their own A.I. using a library of sign images. While their work is a strong proof of concept, the researchers write in an accompanying FAQ that "this attack would most likely not work as-is on existing self-driving cars."
