Artificial Intelligence and Legal Analysis
Summary
The article introduces 'contractual steganography,' a novel risk in which deliberately crafted legal language, appearing normal to human readers, systematically distorts AI analysis via known LLM vulnerabilities such as prompt injection and semantic priming. The manipulation exploits the fundamental difference between human contextual understanding and an LLM's statistical pattern-matching in vector space. Four techniques are detailed: positive association priming, authority markers, embedded prompt structures, and cognitive anchoring. The author illustrates how saturating text with statistically positive phrases (e.g., "reasonable commercial outcome") nudges the model toward favorable interpretations, effectively creating two different readings of the same document—one for the lawyer and one for the machine. Current defenses are largely ineffective because the manipulative phrases are often legitimate in ordinary drafting. This detection asymmetry favors sophisticated actors and could undermine the democratizing promise of legal tech. The author argues that legal doctrine must adapt, perhaps by recognizing 'technical unconscionability,' and recommends immediate defensive measures such as multi-model redundancy and heightened human vigilance.
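To make the priming risk concrete, the following is a minimal sketch of one possible defensive screen: measuring whether statistically favorable boilerplate appears at unusual density in a clause. The phrase list, threshold, and function names here are hypothetical illustrations, not taken from the article.

```python
import re

# Hypothetical example phrases associated with "positive association priming";
# a real screen would need a much larger, empirically derived list.
PRIMING_PHRASES = [
    "reasonable commercial outcome",
    "mutually beneficial",
    "good faith",
    "fair and equitable",
]

def priming_density(text: str) -> float:
    """Occurrences of flagged phrases per 100 words of clause text."""
    words = len(re.findall(r"\w+", text))
    if words == 0:
        return 0.0
    hits = sum(text.lower().count(p) for p in PRIMING_PHRASES)
    return 100.0 * hits / words

def flag_for_human_review(text: str, threshold: float = 2.0) -> bool:
    """Flag a clause when flagged-phrase density exceeds the threshold."""
    return priming_density(text) >= threshold
```

Because these phrases are legitimate in normal use, a density check like this can only triage clauses for human review; it cannot by itself distinguish manipulation from ordinary drafting, which is the asymmetry the article highlights.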
(Source: Kancelaria Prawna Skarbiec)