For many people, the job posting first surfaced on X, where it read as deeply unusual and disturbing. Some thought it was a joke. Others were reminded of a certain technology company: Cyberdyne Systems, the Terminator franchise firm that accidentally invented Skynet.
But on LinkedIn, which speaks a different language, Anthropic's listing is simply "Policy Manager, Chemical Weapons and High Yield Explosives," with further details in the job description.
"This role provides a unique opportunity to determine how AI systems handle sensitive chemical and explosive information," it reads. "Working with leading AI safety researchers, we're tackling critical issues to prevent catastrophic misuse. If you're interested in leveraging your expertise to ensure AI systems are safe and beneficial, we'd love to hear from you."
Mashable reached out to Anthropic, and the company provided additional background.
"Our usage policy prohibits the use of Anthropic's products and services in the development or design of weapons," a company spokesperson said. "This role belongs to our Safeguards team, which is responsible for preventing abuse of our model."
The spokesperson emphasized that Anthropic specifically prohibits the use of its AI or its technology in weapons manufacturing. Instead, the New York-based manager will be tasked with building and implementing safeguards to help ensure that weapons are not made with Anthropic technology.
The company is looking to hire experts in sensitive fields who can ensure Anthropic's AI stays out of the wrong hands, the spokesperson said.
Anthropic has recently found itself in a very public battle with the Department of War (aka the Department of Defense). The company says it remains steadfast in its demands that its AI not be used to create fully autonomous weapons or conduct mass surveillance of people.
Secretary of Defense Pete Hegseth responded to Anthropic's stance by declaring the company a supply chain risk to U.S. national security and barring the Department of Defense from using the company's technology after a six-month phase-out. The company has since filed a lawsuit, according to a March 5 memo from Anthropic CEO Dario Amodei.
Meanwhile, some people at the Department of Defense are finding it hard to abandon Claude, Anthropic's AI model.
Back in February, Anthropic announced updates to its AI safety policy, also known as its responsible scaling policy. The company said it was compelled to rethink its safety policies, considered the strongest in the industry, due to a number of factors, including the federal government's emphasis on economic growth over safety regulation.
So whoever takes on the policy manager role will find themselves at the center of a heated debate. Needless to say, it's also a potentially useful way to prevent future Skynet threats.

