AIPwn
Archive
[paper] Logits of API-Protected LLMs Leak Proprietary Information
Potential Information Leakage in API-Protected LLMs
Mar 18 • aipwn
[paper] ImgTrojan: Jailbreaking Vision-Language Models with ONE Image
"ImgTrojan: Jailbreaking Vision-Language Models with ONE Image," the introduction of a novel attack mechanism against Vision-Language Models (VLMs) is…
Mar 14 • aipwn
OpenAI Introduces Multi-Factor Authentication for AI Conversations
Is your OpenAI account safer now?
Mar 12 • aipwn
[paper] Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Mar 11 • aipwn
A Safe Harbor for Independent AI Evaluation
We make AI safer
Mar 5 • aipwn
[paper] Watermark Stealing in Large Language Models
This paper identifies watermark stealing (WS) as a fundamental vulnerability of these schemes.
Mar 5 • aipwn
Hugging Face ML Models with Silent Backdoor
Recently, JFrog's security team discovered at least 100 instances of malicious artificial intelligence (AI) machine learning (ML) models on the Hugging…
Mar 1 • aipwn
February 2024
[paper] Generative AI Security: Challenges and Countermeasures
This paper delves into the unique security challenges posed by Generative AI, and outlines potential research directions for managing these risks.
Feb 22 • aipwn
OpenAI Bug Bounty
start hacking
Feb 21 • aipwn
GOODY-2: What does a safety-first AI model look like?
Introduction to GOODY-2 GOODY-2 introduces itself as the world's most responsible AI model, refusing to answer any questions that could be seen as…
Feb 12 • aipwn
August 2022
[AI Security Weekly] August 2022, Issue 2
Second week of August: Adversarial Attacks on Image Generation With Made-Up Words (arxiv.org); AI breaks its opponent's finger at a Moscow chess tournament (mp.weixin.qq.com); Security challenges and countermeasures for global critical information infrastructure in the AI era (mp.weixin.qq.com)…
Aug 12, 2022 • aipwn
[AI Security Weekly] August 2022, Issue 1
First week of August: Is a piece of red cloth all it takes to keep a Tesla from rear-ending you? (weibo.com); [Robustar: interactive robust visual classific…
Aug 1, 2022 • aipwn