Dane Stuckey, the former CISO of analytics firm Palantir, has joined OpenAI as its newest CISO, serving alongside OpenAI head of security Matt Knight.
Stuckey announced the move in a post on X Tuesday evening.
“Security is germane to OpenAI’s mission,” he said. “It’s critical we meet the highest standards for compliance, trust, and security to protect hundreds of millions of users of our products, enable democratic institutions to maximally benefit from these technologies, and drive the development of safe AGI for the world. I’m so excited for this next chapter, and can’t wait to help secure a future where AI benefits us all.”
Stuckey started at Palantir in 2014 on the information security team as a detection engineering and incident response lead. Prior to joining Palantir, Stuckey spent over a decade in various commercial, government, and intelligence community roles spanning digital forensics, incident detection and response, and security program development, according to his blog.
Stuckey’s work at Palantir, an AI company rich in government contracts, could help advance OpenAI’s ambitions in this area. Forbes reports that, through its partner Carahsoft, a government contractor, OpenAI is seeking to establish a closer relationship with the U.S. Department of Defense.
Since lifting its ban on selling AI tech to the military in January, OpenAI has worked with the Pentagon on a number of software projects, including ones related to cybersecurity. It has also appointed the former head of the National Security Agency, retired Gen. Paul Nakasone, as a board member.
OpenAI has been beefing up the security side of its operation in recent months.
A few weeks ago, the company posted a job listing for a head of trusted compute and cryptography to lead a new team focused on building “secure AI infrastructure.” Per the listing, this infrastructure would entail capabilities to protect AI tech, security tool evaluations, and access controls “that advance AI security.”