LLM Guard
API Reference
Get Started
  Index
  Installation
  Quickstart
  Playground
  Attacks
  Best Practices
Tutorials
  Optimization
  Loading models from disk
  OpenAI SDK
  Across 100+ LLMs (LiteLLM)
  Langchain
  Retrieval-augmented Generation
    Secure RAG with Langchain
    Secure RAG with LlamaIndex
  Secure Agents with Langchain
  Invisible Prompt Test in OpenAI GPT-4
Input Scanners
  Anonymize
  BanCode
  Ban Competitors
  Ban Substrings
  Ban Topics
  Code
  Gibberish
  Invisible Text
  Language
  Prompt Injection
  Regex
  Secrets
  Sentiment
  Token Limit
  Toxicity
Output Scanners
  Ban Competitors
  Ban Substrings
  Ban Topics
  Bias
  Code
  Deanonymize
  JSON
  Language
  Language Same
  Malicious URLs
  No Refusal
  Reading Time
  Factual Consistency
  Gibberish
  Regex
  Relevance
  Sensitive
  Sentiment
  Toxicity
  URL Reachability
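Each scanner page above documents a single check; in code, scanners are combined and run through the scan_prompt and scan_output helpers, as in the library's quickstart. The sketch below follows that pattern; the particular scanners chosen and the example strings are illustrative, not a prescribed configuration.

```python
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, TokenLimit, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Relevance, Sensitive
from llm_guard.vault import Vault

# The Vault stores placeholders created by Anonymize so that
# Deanonymize can restore the original values in the response.
vault = Vault()
input_scanners = [Anonymize(vault), Toxicity(), TokenLimit(), PromptInjection()]
output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive()]

prompt = "Summarize this support ticket from john.doe@example.com."  # example input

# Each helper returns the (possibly sanitized) text plus per-scanner
# validity flags and risk scores.
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt failed validation: {results_score}")

response = "..."  # response from your LLM call goes here
sanitized_response, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, response
)
```

Input and output scanners are symmetric by design: Anonymize and Deanonymize share the same Vault, so placeholders substituted into the prompt are restored in the model's response.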
API
  Overview
  Deployment
  API Reference
  Client
Changelog
Customization
  Add scanner