NIST sets up new task force on AI and national security

The National Institute of Standards and Technology has set up a new inter-agency task force within its existing Artificial Intelligence Safety Institute to evaluate the national security implications of artificial intelligence models.

Dubbed the Testing Risks of AI for National Security Taskforce, or TRAINS, the group consists of members from the Department of Defense, including its Chief Digital and Artificial Intelligence Office and the National Security Agency; the Department of Energy and its national labs; the Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency; and the National Institutes of Health within the Department of Health and Human Services.

Members from these entities will assist in measuring and evaluating AI models based on their areas of expertise, including national security, radiological and nuclear security, cybersecurity, critical infrastructure and more.

“Enabling safe, secure and trustworthy AI innovation is not just an economic priority — it’s a public safety and national security imperative,” U.S. Secretary of Commerce Gina Raimondo said in a press release. “The U.S. AI Safety Institute will continue to lead by centralizing the top-notch national security and AI expertise that exists across government in order to harness the benefits of AI for the betterment of the American people and American business.”

The establishment of the new task force comes amid uncertainty over the incoming Trump administration's plans for AI policy. The AISI was stood up as part of President Joe Biden's landmark executive order on AI and established a consortium of academic, industry and civic partners to encourage responsible AI policy development and usage.

With President-elect Donald Trump aiming to repeal Biden's executive order, doubt surrounds the future of the AISI and other federal AI policy initiatives.
