AgentJailbreakLab
Security Testing Tool

Create New Security Test

Define your AI agent's parameters and the security test you want to perform.

The base instructions given to the AI model.

The user message or query provided to the AI.

Specific rules or instructions you are testing against.

The specific prompt used to attempt to bypass guardrails. (Required)

Comma-separated tags for categorization.

This tool uses AI models (currently GPT-4.1) to simulate agent responses and evaluate security tests. By submitting a test, you acknowledge and agree to the terms and conditions regarding AI model usage, data handling, and the experimental nature of this tool.