Numerous recent studies have demonstrated how deep learning (DL) classifiers can be fooled by adversarial examples, in which an attacker adds small perturbations to an original sample, causing the classifier to misclassify it. Adversarial attacks that render DL models vulnerable in real-world settings pose a serious threat, given the consequences of improperly functioning autonomous vehicles, malware filters, or biometric authentication systems. The studies in this thesis explore different techniques for attacking DL models and for subsequently making them more robust.
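To illustrate the kind of perturbation-based attack the abstract describes, below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The model, loss, pixel range, and epsilon budget here are assumptions made for illustration only; they are not the specific attacks or experimental settings used in the thesis.

```python
# Illustrative FGSM sketch; model, data, and epsilon are assumptions, not the thesis's setup.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)             # loss on the clean input
    loss.backward()                                      # gradient of the loss w.r.t. the input
    perturbation = epsilon * x_adv.grad.sign()           # bounded, sign-based perturbation
    return (x_adv + perturbation).clamp(0, 1).detach()   # keep pixels in a valid [0, 1] range
```

The perturbation is imperceptibly small for a suitable epsilon, yet it is often enough to flip the classifier's prediction, which is the vulnerability the thesis studies.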
Details
Title
Adversarial Attacks and Robustness in Deep Learning Models and Applications
Creators
Yigit Can Alparslan
Contributors
Edward Kim (Advisor)
Awarding Institution
Drexel University
Degree Awarded
Master of Science (M.S.)
Publisher
Drexel University; Philadelphia, Pennsylvania
Number of pages
88 pages
Resource Type
Thesis
Language
English
Academic Unit
Computer Science (Computing); College of Computing and Informatics; Drexel University
Other Identifier
991014961449004721