Archive (page 4)

Inside MCP: A Protocol for AI Integration
A hands-on exploration of the Model Context Protocol, the standard that connects AI systems with real-world tools and data.
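
To give a feel for the protocol, here is a minimal sketch of what an MCP request looks like on the wire, assuming the JSON-RPC 2.0 framing the protocol uses; the tool name get_weather and its arguments are hypothetical placeholders, not part of any real server:

```python
import json

# Sketch of an MCP tool invocation, assuming JSON-RPC 2.0 framing.
# "get_weather" and its arguments are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

# Over the stdio transport, each message is sent as one line of JSON.
print(json.dumps(request))
```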

The Invisible Threat: How Zero-Width Unicode Characters Can Silently Backdoor Your AI-Generated Code
Explore how invisible Unicode characters can be used to manipulate AI coding assistants and LLMs, potentially leading to security vulnerabilities in your code.
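
As a quick illustration of the attack surface, here is a minimal sketch of a scanner for such characters, assuming Python; the code-point set and the find_invisible helper are illustrative, not exhaustive:

```python
# Common zero-width code points, plus the Unicode "tag" block
# (U+E0000-U+E007F) abused in ASCII-smuggling attacks.
INVISIBLE = {0x200B, 0x200C, 0x200D, 0x2060, 0xFEFF}
TAG_RANGE = range(0xE0000, 0xE0080)

def find_invisible(text: str):
    """Yield (index, code point) for every invisible character in text."""
    for i, ch in enumerate(text):
        cp = ord(ch)
        if cp in INVISIBLE or cp in TAG_RANGE:
            yield i, f"U+{cp:04X}"

# Example: a zero-width space hidden inside an identifier.
sample = "user\u200bname = input()"
print(list(find_invisible(sample)))  # [(4, 'U+200B')]
```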

OWASP Red Teaming: A Practical Guide to Getting Started
OWASP released the first official red teaming guide for AI systems.

Misinformation in LLMs: Causes and Prevention Strategies
LLMs can spread false information at scale.

Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI
LLMs can leak training data, PII, and corporate secrets.

Understanding AI Agent Security
AI agents are powerful but vulnerable.

What are the Security Risks of Deploying DeepSeek-R1?
Our red team analysis found DeepSeek-R1 fails 60%+ of harmful content tests.

1,156 Questions Censored by DeepSeek
Analysis of DeepSeek-R1's censorship using 1,156 political prompts, exposing CCP content-filtering and bias patterns.

How to Red Team a LangChain Application: Complete Security Testing Guide
LangChain apps combine multiple AI components, creating complex attack surfaces.

Defending Against Data Poisoning Attacks on LLMs: A Comprehensive Guide
Data poisoning attacks can corrupt LLMs during training, fine-tuning, and RAG retrieval.