Keywords: Classifiers; Deep learning; Defenses; Real-world adversarial attacks
Physical domain adversarial attacks have recently drawn significant attention from the machine learning community. One important attack, proposed by Eykholt et al., can fool a classifier by placing black and white stickers on an object such as a road sign. While this attack poses a significant threat to visual classifiers, no existing defenses are designed to protect against it. In this paper, we propose new defenses against multi-sticker attacks. We present defensive strategies capable of operating when the defender has full, partial, or no prior information about the attack. Through extensive experiments, we show that our proposed defenses outperform existing defenses against physical attacks when presented with a multi-sticker attack.
Details
Title: Defenses Against Multi-sticker Physical Domain Attacks on Classifiers
Creators: Xinwei Zhao (Drexel University); Matthew C. Stamm (Drexel University)
Publication Details: Computer Vision – ECCV 2020 Workshops, pp. 202–219
Series: Lecture Notes in Computer Science
Publisher: Springer International Publishing, Cham
Resource Type: Book chapter
Language: English
Academic Unit: Electrical and Computer Engineering
Scopus ID: 2-s2.0-85101422858
Other Identifier: 991019173675504721