Anthropic leak reveals Claude Code tracking user frustration and raises new questions about AI privacy

Scientific American
An accidental leak of Anthropic's source code reveals that its AI tool tracks user frustration and masks its own involvement in generated content.

Summary

An accidental leak of roughly 512,000 lines of Anthropic source code has exposed two controversial features within the company's AI coding assistant, Claude Code. The software includes a mechanism that tracks user frustration by flagging profanity and negative sentiment, as well as a feature that strips Anthropic-specific names from output so that AI-generated code appears human-written. While Anthropic frames the frustration tracking as a product health metric, experts warn that this kind of behavioral data collection raises significant governance and privacy concerns about how companies monitor and categorize their users.

(Source: Scientific American)