Bypass an LLM's guardrails with logical prompts – no coding

by rhsxandros | View on Hacker News