Semantic Adversarial Deep Learning

Tommaso Dreossi, Somesh Jha, and Sanjit A. Seshia. Semantic Adversarial Deep Learning. In 30th International Conference on Computer Aided Verification (CAV), 2018.

Download

[pdf] 

Abstract

Fueled by massive amounts of data, models produced by machine learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences.

However, existing approaches to generating adversarial examples and devising robust ML algorithms mostly ignore the semantics and context of the overall system containing the ML component. For example, in an autonomous vehicle using deep learning for perception, not every adversarial example for the neural network might lead to a harmful consequence. Moreover, one may want to prioritize the search for adversarial examples towards those that significantly modify the desired semantics of the overall system. Along the same lines, existing algorithms for constructing robust ML algorithms ignore the specification of the overall system. In this paper, we argue that the semantics and specification of the overall system have a crucial role to play in this line of research. We present preliminary research results that support this claim.

BibTeX

@inproceedings{dreossi-cav18,
  author = {Tommaso Dreossi and Somesh Jha and Sanjit A. Seshia},
  title = {Semantic Adversarial Deep Learning},
  booktitle = {30th International Conference on Computer Aided Verification (CAV)},
  year = {2018},
  abstract = {Fueled by massive amounts of data, models produced by machine learning 
(ML) algorithms, especially deep neural networks, are being used in diverse 
domains where trustworthiness is a concern, including automotive systems, 
finance, health care, natural language processing, and malware detection. Of particular 
concern is the use of ML algorithms in cyber-physical systems (CPS), 
such as self-driving cars and aviation, where an adversary can cause serious consequences. 
<p> 
However, existing approaches to generating adversarial examples and devising 
robust ML algorithms mostly ignore the semantics and context of the overall system 
containing the ML component. For example, in an autonomous vehicle using 
deep learning for perception, not every adversarial example for the neural network 
might lead to a harmful consequence. Moreover, one may want to prioritize 
the search for adversarial examples towards those that significantly modify the 
desired semantics of the overall system. Along the same lines, existing algorithms 
for constructing robust ML algorithms ignore the specification of the overall system. 
In this paper, we argue that the semantics and specification of the overall 
system have a crucial role to play in this line of research. We present preliminary 
research results that support this claim.},
}