Mrinank Sharma, a world-leading AI safety researcher, resigned from Anthropic in February 2026. Sharma published on AI safeguards and spoke out consistently on responsible technology. His plan to leave corporate AI work and return to poetry and public reflection is drawing widespread debate across the technology industry.
At Anthropic, Sharma led the Safeguards Research Team. His work focused on essential areas including jailbreak defenses, automated red teaming, misuse monitoring, and AI behavior analysis — efforts intended to make increasingly capable AI systems safer and more reliable. Anthropic, whose Claude AI has long been associated with responsible AI development, has positioned itself as a leader in the field, and Sharma's role was central to that reputation.
Sharma confirmed his resignation in a social media post on February 9, 2026. In his message he warned that the "world is in peril," framing AI not only as a specific risk but as part of a broader threat to humanity. Rather than join another tech company, he said he would spend his time writing poetry and giving public talks, prioritizing a life of creativity and reflection over corporate research. His words resonated with many who find the relentless pace of AI development emotionally taxing.
Sharma holds a DPhil in Machine Learning from Oxford and a Master of Engineering in Machine Learning from Cambridge. His education and professional background made him one of the most respected voices in AI safety. His departure highlights the personal and ideological dilemmas confronting top researchers at the bleeding edge of technology.
Sharma's exit has raised important questions about the future of AI safety. Many view his decision as a reminder that innovation must be paired with ethics and human dignity, and his departure from corporate AI work as a challenge to reconsider how technology is shaping society.
In the end, Sharma's resignation is more than a career choice; it is a statement about the human side of technology. By turning toward poetry and public reflection, he is handing every algorithm and every safeguard he built back to a society grappling with responsibility and emotion. His story may inspire others to see AI not just as a technical challenge, but as a cultural and moral one in its own right.