【AI Security Weekly】May 2022, Issue 4

Week 4 of May

Previous issues:

January-February 2022

  1. Alexa versus Alexa: Controlling Smart Speakers by Self-Issuing Voice Commands https://mp.weixin.qq.com/s/mF3N4vMMxO1X2uNHln8zEg
  2. 【Paper】Model Stealing Attacks Against Inductive Graph Neural Networks. The authors focus on the node classification task and propose the first model stealing attack against inductive GNNs. Paper: https://arxiv.org/pdf/2112.08331.pdf Code: https://github.com/xinleihe/GNNStealing
  3. Black Hat Europe 2021 briefing: Zen and the Art of Adversarial Machine Learning
    An introduction to the principles behind attacks on machine learning models.
    https://www.blackhat.com/eu-21/briefings/schedule/#zen-and-the-art-of-adversarial-machine-learning-24746
  4. Also at Black Hat Europe 2021, researchers from Baidu USA presented AIModel-Mutator: Finding Vulnerabilities in TensorFlow. They built a security assessment tool, AIModel-Mutator, to hunt for bugs in TensorFlow, ultimately uncovering four vulnerabilities. https://www.blackhat.com/eu-21/briefings/schedule/#aimodel-mutator-finding-vulnerabilities-in-tensorflow-24620
  5. Breaching, a framework for privacy attacks against federated learning https://github.com/JonasGeiping/breaching
  6. 【Paper】Speech synthesis attacks: "'Hello, It's Me': Deep Learning-based Speech Synthesis Attacks in the Real World" https://mp.weixin.qq.com/s/MK0lrMfwVfwclaCddG4ySQ
  7. Adversarial examples to the new ConvNeXt architecture
    https://github.com/stanislavfort/adversaries_to_convnext
  8. Model quantization attacks https://mp.weixin.qq.com/s/7wGx-68wEQc5dmF_WTmW2Q
  9. 【Paper】Attacking Video Recognition Models with Bullet-Screen Comments: attacking video recognition models by overlaying bullet-screen comments to disrupt the model's attention https://arxiv.org/abs/2110.15629
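To make the model-stealing idea in item 2 concrete, here is a minimal toy sketch of the general technique (query a black-box victim, train a surrogate on its answers). Everything here is hypothetical and unrelated to the paper's actual GNN setting: the victim is a plain linear classifier, and the surrogate is a least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "victim": a fixed linear classifier hidden behind a query API.
W_victim = rng.normal(size=(2, 5))  # 2 classes, 5 features

def query_victim(x):
    """Black-box API: returns only the predicted label, never the weights."""
    return int(np.argmax(W_victim @ x))

# Attacker step 1: query the victim on attacker-chosen inputs,
# collecting (input, predicted label) pairs.
X = rng.normal(size=(2000, 5))
labels = np.array([query_victim(x) for x in X])

# Attacker step 2: fit a surrogate model on the stolen labels
# (here: a least-squares linear separator against +/-1 targets).
t = np.where(labels == 1, 1.0, -1.0)
w_sur, *_ = np.linalg.lstsq(X, t, rcond=None)

# Evaluate how often the surrogate agrees with the victim on fresh inputs.
X_test = rng.normal(size=(500, 5))
agree = np.mean([(X_test[i] @ w_sur > 0) == (query_victim(X_test[i]) == 1)
                 for i in range(len(X_test))])
print(f"surrogate/victim agreement: {agree:.2f}")
```

The high agreement rate is the point of the attack: the surrogate approximates the victim's decision boundary using only query access, which is also the threat model the GNN paper studies.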
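Several items above (3, 7, 9) revolve around adversarial examples. The core gradient-sign trick (FGSM) can be sketched on a toy linear softmax classifier; this is a generic illustration with made-up weights, not code from any of the linked works:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))  # toy model: 3 classes, 4 features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_wrt_x(x, y):
    """Gradient of cross-entropy loss w.r.t. the *input* x."""
    p = softmax(W @ x + b)
    p[y] -= 1.0          # dL/dlogits = softmax(x) - onehot(y)
    return W.T @ p       # chain rule back through the linear layer

x = rng.normal(size=4)
y = int(np.argmax(softmax(W @ x + b)))  # the model's clean prediction

# FGSM: take one step in the sign of the input gradient,
# i.e. the direction that increases the loss fastest under an L-inf budget.
eps = 0.5
x_adv = x + eps * np.sign(loss_grad_wrt_x(x, y))

print("clean prediction:", y)
print("adversarial prediction:", int(np.argmax(softmax(W @ x_adv + b))))
```

For this convex toy model the perturbation is guaranteed to increase the loss on the original label; on deep networks the same one-step heuristic usually works well enough to flip predictions, which is what the briefing in item 3 surveys.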
