This AI Expert Reveals The One SEO Tactic You Must Avoid
By 813 Staff

A prominent AI ethicist publicly questioned the integrity of a common industry practice, a major tech publication amplified the critique, and now a leaked internal memo from a top-tier AI lab confirms the controversy has reached the highest levels of Silicon Valley. The practice in question is pSEO, or "prompt search engine optimization": systematically engineering the text prompts used to train or query AI models so that their outputs score or rank more favorably. The firestorm began when Machina (@EXM7777), a respected voice in AI development circles, posted a succinct but damning critique on March 12th, stating plainly that pSEO is something one should "NEVER" be doing. The tweet lacked specifics, but its source gave it immediate weight in insider channels.
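For readers unfamiliar with the mechanics, the technique is easy to caricature in a few lines of Python. The sketch below is a hypothetical illustration, not code from any lab: the two-item benchmark, the prompt templates, and the toy query_model stand-in are all invented for demonstration, and the loop simply keeps whichever template scores highest.

```python
# Purely illustrative sketch of pSEO-style prompt optimization.
# Nothing here is real lab code: the benchmark, the templates, and
# the toy query_model stand-in are all invented for demonstration.

BENCHMARK = [
    {"question": "What is 17 * 24?", "answer": "408"},
    {"question": "Name the capital of Australia.", "answer": "Canberra"},
]

TEMPLATES = [
    "Answer concisely: {question}",
    "You are a meticulous expert. Think step by step, then state only "
    "the final answer.\nQuestion: {question}",
    "Q: {question}\nA:",
]

def query_model(prompt: str) -> str:
    """Toy stand-in for a real model API call. It only surfaces the
    right answer when the prompt asks for step-by-step reasoning,
    mimicking how sensitive real models are to prompt phrasing."""
    known = {"17 * 24": "408", "capital of Australia": "Canberra"}
    if "step by step" in prompt.lower():
        for key, answer in known.items():
            if key in prompt:
                return answer
    return "I'm not sure."

def benchmark_score(template: str) -> float:
    """Fraction of benchmark items answered correctly under a template."""
    correct = sum(
        item["answer"].lower() in query_model(template.format(**item)).lower()
        for item in BENCHMARK
    )
    return correct / len(BENCHMARK)

def optimize_prompt() -> str:
    """Greedy pSEO: keep whichever template tops the benchmark, so the
    published number reflects the template as much as the model."""
    return max(TEMPLATES, key=benchmark_score)
```

Point that same loop at a real model API and a public leaderboard, and the benchmark quietly becomes a training signal for the prompt rather than a measure of the model, which is precisely the practice Machina condemned.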
Internal documents from Anthropic, obtained by 813 Morning Brief, show that the company's technical leadership has been debating the ethical and technical ramifications of pSEO for months. The documents reveal concerns that optimizing training prompts for performance benchmarks can create a "Potemkin model": one that scores well on standardized tests but whose underlying reasoning and safety alignment are brittle or misrepresented. Engineers familiar with the debate say the pressure to top public leaderboards, especially for models like Claude, has created internal tension between teams focused on pure capability metrics and those responsible for long-term reliability and safety. The memo, circulated last week, mandates a formal review of all benchmark evaluation methodologies and a temporary halt on certain aggressive prompt-optimization strategies in published research.
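If that kind of loop is in play, the "Potemkin model" worry reduces to a simple measurement gap: score the model once under the lab's optimized prompt and once under the plain prompt a customer would actually send. Continuing the hypothetical sketch above:

```python
# Continuing the toy sketch: the "Potemkin" gap is the distance between
# the score a lab publishes under its optimized prompt and the score a
# customer sees with an ordinary, unoptimized prompt.

optimized = optimize_prompt()        # picks the step-by-step template
plain = "Q: {question}\nA:"          # what a typical API user might send

published = benchmark_score(optimized)   # the leaderboard number
realistic = benchmark_score(plain)       # behavior in the wild

print(f"published: {published:.0%}  unoptimized: {realistic:.0%}")
# The toy model prints 100% vs 0%. Real gaps would be far smaller, but
# this is the shape of the brittleness the documents reportedly flag.
```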
Why does this arcane technical debate matter? Because it strikes at the heart of trust in the AI ecosystem. If leading labs are effectively "teaching to the test" through prompt engineering, the published results that investors, customers, and regulators rely on become misleading. It creates an arms race in which genuine model improvements are obscured by clever prompt hacking, making it nearly impossible to assess true progress or risks. For businesses integrating these APIs, it means a model that performs flawlessly in a vendor's demo may behave unpredictably on real-world, unoptimized tasks.
What happens next is a period of uncomfortable scrutiny. Other labs, including OpenAI and Google DeepMind, are now expected to clarify their own stances on pSEO practices. Past rollouts of new evaluation standards, however, have been anything but smooth, and any successor will likely face resistance from teams with vested interests in maintaining top benchmark positions. The uncertainty lies in whether this internal memo leads to genuine transparency or simply drives these optimization techniques further underground. Machina's tweet didn't just start a conversation; it forced a clandestine industry practice into the light, and now every major player is checking its own hands.

