Welcome to ShortScience.org!
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
Andrew Slavin Ross and Finale Doshi-Velez
arXiv e-Print archive - 2017 via Local arXiv
Keywords: cs.LG, cs.CR, cs.CV

Summary by David Stutz 7 years ago
Adversarial Vulnerability of Neural Networks Increases With Input Dimension
Carl-Johann Simon-Gabriel and Yann Ollivier and Léon Bottou and Bernhard Schölkopf and David Lopez-Paz
arXiv e-Print archive - 2018 via Local arXiv
Keywords: stat.ML, cs.CV, cs.LG, 68T45, I.2.6

Summary by David Stutz 7 years ago
Biologically inspired protection of deep networks from adversarial attacks
Aran Nayebi and Surya Ganguli
arXiv e-Print archive - 2017 via Local arXiv
Keywords: stat.ML, cs.LG, q-bio.NC

Summary by David Stutz 7 years ago
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Weilin Xu and David Evans and Yanjun Qi
arXiv e-Print archive - 2017 via Local arXiv
Keywords: cs.CV, cs.CR, cs.LG

Summary by David Stutz 7 years ago


ShortScience.org allows researchers to publish paper summaries that are voted on and ranked!