Adversarial attacks and robustness in deep learning models and applications
Thesis · Open access


Yigit Can Alparslan
Master of Science (M.S.), Drexel University
Jun 2021
DOI:
https://doi.org/10.17918/00000565
PDF: Alparslan_Yigit_2021 (50.42 MB)

Abstract

Subjects: Robust control; Computer Science
Numerous recent studies have demonstrated how deep learning (DL) classifiers can be fooled by adversarial examples, in which an attacker adds perturbations to an original sample, causing the classifier to misclassify it. Adversarial attacks that render deep neural networks (DNNs) vulnerable in real life represent a serious threat, given the consequences of improperly functioning autonomous vehicles, malware filters, or biometric authentication systems. The studies in this dissertation explore different techniques for attacking DL models and for subsequently making them more robust.
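As a concrete illustration of the attack family the abstract describes, the sketch below applies a perturbation in the style of the fast gradient sign method to a toy logistic-regression classifier. The model, weights, and input values are illustrative assumptions for this example only, not material from the dissertation.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Return an adversarially perturbed copy of x for a toy
    logistic-regression model (illustrative sketch, not the
    dissertation's method).

    Under binary cross-entropy loss, the gradient of the loss with
    respect to the input is (p - y) * w, where p = sigmoid(w.x + b).
    FGSM steps in the sign of that gradient to increase the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability
    grad_x = (p - y) * w                    # d(loss)/dx
    return x + eps * np.sign(grad_x)        # loss-increasing step

# Toy example: a point the model correctly classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])        # score w.x + b = 1.5 > 0 -> class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
# The perturbed score drops below 0, flipping the predicted class.
```

Even with a fixed, small step size `eps`, the signed-gradient step moves every input coordinate in the direction that most increases the loss, which is why such perturbations can flip a classifier's decision while remaining small per coordinate.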


