AIPwn
June 2024
[paper] MarkLLM: An Open-Source Toolkit for LLM Watermarking
We introduce MarkLLM, an open-source toolkit for LLM watermarking.
Jun 2 • aipwn
April 2024
[paper] LLM4Decompile: Decompiling Binary Code with Large Language Models
Large language models (LLMs) show promise for programming tasks, motivating their application to decompilation.
Apr 16 • aipwn
March 2024
[paper] Logits of API-Protected LLMs Leak Proprietary Information
Potential Information Leakage in API-Protected LLMs
Mar 18 • aipwn
[paper] ImgTrojan: Jailbreaking Vision-Language Models with ONE Image
"ImgTrojan: Jailbreaking Vision-Language Models with ONE Image" introduces a novel attack mechanism against Vision-Language Models (VLMs)…
Mar 14 • aipwn
OpenAI Introduces Multi-Factor Authentication for AI Conversations
Is your OpenAI account safer now?
Mar 12 • aipwn
[paper] Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
Mar 11 • aipwn
A Safe Harbor for Independent AI Evaluation
We make AI safer.
Mar 5 • aipwn
[paper] Watermark Stealing in Large Language Models
This paper identifies watermark stealing (WS) as a fundamental vulnerability of LLM watermarking schemes.
Mar 5 • aipwn
Hugging Face ML Models with Silent Backdoor
Recently, JFrog's security team discovered at least 100 instances of malicious AI/ML models on the Hugging Face platform…
Mar 1 • aipwn
February 2024
[paper] Generative AI Security: Challenges and Countermeasures
This paper delves into the unique security challenges posed by Generative AI and outlines potential research directions for managing these risks.
Feb 22 • aipwn
OpenAI Bug Bounty
Start hacking.
Feb 21 • aipwn
GOODY-2: What does a safety-first AI model look like?
Introduction to GOODY-2
Feb 12 • aipwn