adversarial-robustness-toolbox

Trusted-AI

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

5.9k Stars
1.3k Forks
5.9k Watchers
Python Language
MIT License
100 SrcLog Score
Cost to Build
$46.23M
Market Value
$252.69M

Growth over time: stars, forks, and watchers tracked across 13 data points (2021-08-01 → 2026-04-01).

How to clone adversarial-robustness-toolbox

Clone via HTTPS

git clone https://github.com/Trusted-AI/adversarial-robustness-toolbox.git

Clone via SSH

git clone git@github.com:Trusted-AI/adversarial-robustness-toolbox.git

Download ZIP

Download master.zip

Found an issue?

Report bugs or request features on the adversarial-robustness-toolbox issue tracker:

Open GitHub Issues