
Escape Output, Don't Sanitize Input

Data isn't dangerous on its own. A <script> tag sitting in a database row is inert — it's just text. The same tag dropped into a browser as raw HTML is an attack vector. The data didn't change. The context did. This distinction is the foundation of secure input handling.

Most developers learn to "sanitize" input — strip out HTML tags and special characters before the data is stored. The instinct makes sense: if dangerous content never enters the system, it can never cause harm. But sanitization solves the problem in the wrong place. What if the user is writing a tutorial about HTML? By stripping their tags, you've deleted the content they were trying to save. You've corrupted their data to prevent a risk that didn't exist yet.
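As a sketch of how input-time sanitization goes wrong, consider a naive tag-stripping function (stripTags here is a hypothetical helper, not from any library):

```javascript
// Naive "sanitization": strip anything that looks like a tag before storage
const stripTags = (s) => s.replace(/<[^>]*>/g, "");

// A user writing an HTML tutorial submits this perfectly legitimate sentence:
const post = "Use <script> tags to load JavaScript.";

// The stored copy is now corrupted — the tag the tutorial was about is gone
const stored = stripTags(post);
// stored === "Use  tags to load JavaScript."
```

The data was never dangerous in the database; the stripping destroyed it anyway.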

The right place to handle context-dependent risk is at the point of output, where context actually exists:

const userInput = "<script>alert('xss')</script>";

// Unsafe: renders the script tag as executable HTML
container.innerHTML = userInput;

// Safe: displays it as text, never executes it
container.textContent = userInput;

Store the data exactly as the user gave it to you. Then encode it appropriately for wherever it's going — a template engine that escapes HTML entities for web pages, parameterized queries that neutralize SQL injection at the database layer. The encoding mechanism changes with the context. The principle doesn't.
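A minimal sketch of output-side encoding for the HTML context, assuming a hand-rolled escapeHtml helper (real template engines such as Handlebars or Jinja do this automatically):

```javascript
// Escape the five characters that are significant in HTML, at output time
const escapeHtml = (s) =>
  s.replace(/[&<>"']/g, (ch) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  }[ch]));

// Stored verbatim, exactly as the user submitted it
const stored = "<script>alert('xss')</script>";

// Encoded only when it crosses into the HTML context
const rendered = escapeHtml(stored);
// rendered === "&lt;script&gt;alert(&#39;xss&#39;)&lt;/script&gt;"

// For the database context, the analogous move is a parameterized query,
// e.g. db.run("INSERT INTO posts (body) VALUES (?)", [stored])
// (db here is a hypothetical handle; the placeholder style varies by driver)
```

The same stored string gets a different encoding at each boundary; the stored copy itself never changes.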

Don't be afraid of the data. Be careful where you put it.
