Evaluating and Understanding Adversarial Robustness in Deep Learning

Author : Jinghui Chen
Publisher :
Total Pages : 175
Release : 2021
ISBN-10 : OCLC:1291135695
ISBN-13 :
Rating : 4/5

Book Synopsis Evaluating and Understanding Adversarial Robustness in Deep Learning by: Jinghui Chen

Download or read book Evaluating and Understanding Adversarial Robustness in Deep Learning, written by Jinghui Chen and released in 2021 with 175 pages. Available in PDF, EPUB and Kindle. Book excerpt: Deep Neural Networks (DNNs) have made many breakthroughs in different areas of artificial intelligence. However, recent studies show that DNNs are vulnerable to adversarial examples: a tiny perturbation of an image, almost invisible to human eyes, can mislead a well-trained image classifier into misclassification. This raises serious security and trustworthiness concerns about the robustness of deep neural networks on real-world tasks. Researchers have worked on this problem for some time, and it has led to a vigorous arms race between heuristic defenses, which propose ways to defend against existing attacks, and newly devised attacks that are able to penetrate those defenses. As the arms race continues, it becomes ever more crucial to evaluate model robustness accurately and efficiently under different threat models, and to identify "falsely" robust models that may give us a false sense of security. On the other hand, despite the rapid development of many heuristic defenses, their practical robustness remains far from satisfactory, and there has been little algorithmic improvement on the defense side in recent years. This suggests that we still lack an understanding of the fundamentals of adversarial robustness in deep learning, which may prevent us from designing more powerful defenses.

The overarching goal of this research is to enable accurate evaluation of model robustness under different practical settings and to establish a deeper understanding of how other factors in the machine learning training pipeline affect model robustness.
Specifically, we develop efficient and effective Frank-Wolfe attack algorithms for the white-box and black-box settings, as well as a hard-label adversarial attack, RayS, which is capable of detecting "falsely" robust models. To better understand adversarial robustness, we theoretically study the relationships between model robustness and data distributions, model architectures, and loss smoothness. The techniques proposed in this dissertation form a line of research that deepens our understanding of adversarial robustness and can further guide the design of better and faster robust training methods.
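To illustrate the idea behind a Frank-Wolfe-style attack mentioned above: instead of taking a gradient step and projecting back onto the perturbation set, each iteration moves toward the vertex of the L-infinity ball that best aligns with the loss gradient, so iterates remain feasible by construction. The sketch below is a minimal numpy illustration under simplifying assumptions (a user-supplied `grad_fn` standing in for the model's loss gradient, and a toy linear loss), not the dissertation's actual implementation.

```python
import numpy as np

def frank_wolfe_attack(x0, grad_fn, eps=0.1, steps=20):
    """Illustrative Frank-Wolfe-style L-infinity attack (sketch).

    Maximizes a loss over the L-inf ball of radius eps around x0.
    Each iterate is a convex combination of feasible points, so the
    perturbation constraint holds without any projection step.
    """
    x = x0.copy()
    for t in range(steps):
        g = grad_fn(x)
        # Linear maximization oracle over the L-inf ball:
        # the ball's vertex most aligned with the gradient.
        v = x0 + eps * np.sign(g)
        gamma = 2.0 / (t + 2)  # standard Frank-Wolfe step size schedule
        x = (1 - gamma) * x + gamma * v
    return x

# Toy example: "loss" is a linear score w.x, whose gradient is w everywhere.
w = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
x_adv = frank_wolfe_attack(x0, grad_fn=lambda x: w, eps=0.1)
# For this linear loss the iterates land on the vertex x0 + eps * sign(w).
```

In a real attack, `grad_fn` would backpropagate through the network's loss (white-box) or estimate the gradient from queries (black-box); the feasibility-by-construction property is what makes the method projection-free.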


Evaluating and Understanding Adversarial Robustness in Deep Learning Related Books

Evaluating and Understanding Adversarial Robustness in Deep Learning
Language: en
Pages: 175
Authors: Jinghui Chen
Categories:
Type: BOOK - Published: 2021 - Publisher:


Deep Neural Networks (DNNs) have made many breakthroughs in different areas of artificial intelligence. However, recent studies show that DNNs are vulnerable to
Advances in Reliably Evaluating and Improving Adversarial Robustness
Language: en
Pages:
Authors: Jonas Rauber
Categories:
Type: BOOK - Published: 2021 - Publisher:


Machine learning has made enormous progress in the last five to ten years. We can now make a computer, a machine, learn complex perceptual tasks from data rathe
Adversarial Robustness for Machine Learning
Language: en
Pages: 300
Authors: Pin-Yu Chen
Categories: Computers
Type: BOOK - Published: 2022-08-20 - Publisher: Academic Press


Adversarial Robustness for Machine Learning summarizes the recent progress on this topic and introduces popular algorithms on adversarial attack, defense and ve
Improved Methodology for Evaluating Adversarial Robustness in Deep Neural Networks
Language: en
Pages: 93
Authors: Kyungmi Lee (S. M.)
Categories:
Type: BOOK - Published: 2020 - Publisher:


Deep neural networks are known to be vulnerable to adversarial perturbations, which are often imperceptible to humans but can alter predictions of machine learn
On the Robustness of Neural Network: Attacks and Defenses
Language: en
Pages: 158
Authors: Minhao Cheng
Categories:
Type: BOOK - Published: 2021 - Publisher:


Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples. That is