Thesis
Adversarial attacks and robustness in deep learning models and applications
Master of Science (M.S.), Drexel University
Jun 2021
DOI: https://doi.org/10.17918/00000565
Abstract
Numerous recent studies have demonstrated how deep learning (DL) classifiers can be fooled by adversarial examples, in which an attacker adds perturbations to an original sample, causing the classifier to misclassify it. Adversarial attacks that render DL models vulnerable in real life represent a serious threat, given the consequences of improperly functioning autonomous vehicles, malware filters, or biometric authentication systems. The studies in this thesis explore techniques for attacking DL models and for subsequently making them more robust.
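To make the attack mechanism in the abstract concrete, the sketch below illustrates one common gradient-based attack, the fast gradient sign method (FGSM): an epsilon-bounded perturbation in the direction of the loss gradient flips a classifier's prediction. The linear model and all numbers here are hypothetical examples, not results or methods taken from the thesis itself.

```python
import numpy as np

def predict(W, b, x):
    """Class scores of a toy linear classifier."""
    return W @ x + b

def fgsm_perturb(W, b, x, true_label, eps):
    """Add an eps-bounded perturbation that increases the cross-entropy loss.

    For a linear model with softmax cross-entropy, the input gradient is
    W^T (softmax(scores) - onehot(true_label)); FGSM takes its sign.
    """
    scores = predict(W, b, x)
    p = np.exp(scores - scores.max())
    p /= p.sum()
    p[true_label] -= 1.0          # softmax probabilities minus one-hot label
    grad_x = W.T @ p              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical 2-class setup: the classifier prefers class 0 on the clean input.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
b = np.zeros(2)
x = np.array([0.6, 0.4])

clean_pred = int(np.argmax(predict(W, b, x)))        # class 0 on the clean sample
x_adv = fgsm_perturb(W, b, x, true_label=0, eps=0.3)
adv_pred = int(np.argmax(predict(W, b, x_adv)))      # prediction after the attack
```

Note that the perturbation stays within an L-infinity ball of radius eps around the original sample, which is why adversarial examples can remain visually indistinguishable from clean inputs while still changing the model's output.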
Details
- Title
- Adversarial attacks and robustness in deep learning models and applications
- Creators
- Yigit Can Alparslan
- Contributors
- Edward Kim (Advisor)
- Awarding Institution
- Drexel University
- Degree Awarded
- Master of Science (M.S.)
- Publisher
- Drexel University; Philadelphia, Pennsylvania
- Number of pages
- 88 pages
- Resource Type
- Thesis
- Language
- English
- Academic Unit
- Computer Science (Computing) (2013-2026); College of Computing and Informatics (2013-2026); Drexel University
- Other Identifier
- 991014961449004721