Numerous recent studies have demonstrated that deep learning (DL) classifiers can be fooled by adversarial examples: inputs to which an attacker adds small perturbations that cause the classifier to misclassify them. Adversarial attacks that compromise DNNs deployed in the real world pose a serious threat, given the consequences of malfunctioning autonomous vehicles, malware filters, or biometric authentication systems. The studies in this thesis explore techniques for attacking DL models and for subsequently making those models more robust.
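As a minimal sketch of the kind of perturbation attack the abstract describes, the snippet below applies the fast gradient sign method (FGSM), one widely used way to craft adversarial examples, to a toy logistic classifier. The classifier, its weights, and the epsilon value are all illustrative assumptions, not details taken from the thesis itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift input x by eps in the direction that increases the loss.

    Uses the analytic gradient of the binary cross-entropy loss of a
    logistic classifier p = sigmoid(w.x + b) with respect to the input.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# Toy model and a point it classifies correctly as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
clean_pred = sigmoid(w @ x + b) > 0.5   # correctly predicts class 1

# A small signed perturbation pushes the same point across the
# decision boundary, so the classifier now predicts class 0.
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.8)
adv_pred = sigmoid(w @ x_adv + b) > 0.5
```

In higher-dimensional settings such as images, the same sign-of-gradient step can flip a deep network's prediction while remaining nearly imperceptible to a human, which is what makes such attacks a practical concern.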
Details
Title
Adversarial Attacks and Robustness in Deep Learning Models and Applications
Creators
Yigit Can Alparslan
Contributors
Edward Kim (Advisor)
Awarding Institution
Drexel University
Degree Awarded
Master of Science (M.S.)
Publisher
Drexel University; Philadelphia, Pennsylvania
Number of pages
88 pages
Resource Type
Thesis
Language
English
Academic Unit
Computer Science (Computing) [Historical]; College of Computing and Informatics (2013-2026); Drexel University