AIPwn
[paper] Prompt Injection 2.0 — The Hybrid AI Threat
What it is — and why it matters now
Sep 2
•
aipwn
August 2025
[paper] Hacking the Hive Mind: How Multi-Agent LLMs Get Jailbroken
New research shows optimized prompt attacks can outsmart defenses like Llama-Guard
Aug 26
•
aipwn
April 2025
I embarked on my AI Bounty journey
On April 1, 2025
Apr 1
•
aipwn
November 2024
Black Friday Special: AIPwn Newsletter - Your Gateway to AI Security
Best Subscription Opportunity of the Year!
Nov 28, 2024
•
aipwn
June 2024
[paper] MarkLLM: An Open-Source Toolkit for LLM Watermarking
We introduce MarkLLM, an open-source toolkit for LLM watermarking
Jun 2, 2024
•
aipwn
April 2024
[paper] LLM4Decompile: Decompiling Binary Code with Large Language Models
Large language models (LLMs) show promise for programming tasks, motivating their application to decompilation
Apr 16, 2024
•
aipwn
March 2024
[paper] Logits of API-Protected LLMs Leak Proprietary Information
Potential Information Leakage in API-Protected LLMs
Mar 18, 2024
•
aipwn
[paper] ImgTrojan: Jailbreaking Vision-Language Models with ONE Image
"ImgTrojan: Jailbreaking Vision-Language Models with ONE Image," the introduction of a novel attack mechanism against Vision-Language Models (VLMs) is…
Mar 14, 2024
•
aipwn
OpenAI Introduces Multi-Factor Authentication for AI Conversations
Is your OpenAI account safer now?
Mar 12, 2024
•
aipwn
[paper] Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Mar 11, 2024
•
aipwn
A Safe Harbor for Independent AI Evaluation
We make AI safer
Mar 5, 2024
•
aipwn
[paper] Watermark Stealing in Large Language Models
This paper identifies watermark stealing (WS) as a fundamental vulnerability of these schemes.
Mar 5, 2024
•
aipwn