Harnessed responsibly, artificial intelligence (AI) has the power to transform our world in positive ways researchers have only begun to contemplate—helping us to cure diseases, heal our planet, answer major questions about our universe, speed up development of fusion energy, and accelerate manufacturing of components and systems for DOE’s nuclear security mission. But AI also poses significant risks that need to be managed and mitigated—risks to privacy, jobs, and national security.
President Biden on Monday signed an executive order, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” outlining the Biden-Harris Administration’s path forward to ensure the United States is at the forefront of responsible AI innovation and discovery. The Executive Order is the most significant action any government has ever taken on AI.
The U.S. Department of Energy (DOE) and its National Laboratories have invested in AI development and use since the early 1960s, developing cutting-edge AI tools, along with data science, high-performance computing, and more—for both open science and classified needs—and are ready to scale up efforts to meet this critical moment in history and secure U.S. leadership.
With experience driving progress in this strategically important and highly competitive technological space, the Department’s unmatched scientific and technical workforce will play significant roles in carrying out the President’s Executive Order, which will establish new standards for AI safety and security; promote equity, innovation, and competition; and protect Americans’ privacy and jobs.
“There are certain things that we know AI is going to be great for,” U.S. Secretary of Energy Jennifer M. Granholm said at a meeting of the Secretary of Energy Advisory Board on Thursday. “And, hopefully, a lot of machine learning will help to make work better for workers. But we want to make sure even as we bring all these supply chains home and all of these factories that we’re thoughtful about how AI is going to affect the life of workers across the country.”
In the Executive Order
“My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so,” President Biden wrote in the executive order. “The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.”
DOE will play a crucial role in the government’s approach to both the development and use of AI, and the executive order included several initiatives that will be led by DOE:
- Developing tools to understand and mitigate the risks of AI: DOE will develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards as well as model guardrails that reduce those risks.
- Collaborating with other agencies, the private sector, and academia: DOE will work with partners inside and outside the government by utilizing the Department’s computing capabilities and AI testbeds to build foundation models that support new applications in science and energy and for national security.
- Training new researchers: To meet the rising demand for AI talent, DOE will work with the National Science Foundation (NSF) to establish a pilot program to enhance training programs for scientists, with the goal of training 500 new researchers by 2025.
- Coordinating AI efforts across DOE: The Department will set up an office that will coordinate development of AI and other critical and emerging technologies across its programs and 17 National Laboratories.
DOE will also serve as a partner to other agencies across the Federal Government on:
- Reducing risks at the intersection of AI and chemical, biological, radiological, and nuclear (CBRN) threats: DOE and the White House Office of Science and Technology Policy, in consultation with the Department of Homeland Security, will evaluate the potential for AI to be misused to enable the development or production of CBRN threats. DOE also will help DHS evaluate AI model capabilities to generate and guard against CBRN threats.
- Developing guidelines, standards, and best practices for AI safety and security: The Department of Commerce, in coordination with DOE and NSF, will develop guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems; ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies; and support the design, development, and deployment of privacy-enhancing technologies (PETs).
- Setting technical conditions for models and computing clusters subject to reporting: The Department of Commerce, in coordination with the Department of State, the Department of Defense, the Office of the Director of National Intelligence, and DOE, will define reporting thresholds for AI models and computing clusters to ensure and verify the continuous availability of safe, reliable, and effective AI.
- Managing sensitive data that could be used to train AI for malicious uses: The Federal Chief Data Officer Council will work with DOE, the Department of Commerce, the Department of Defense, and the Office of the Director of National Intelligence to develop guidelines for performing security reviews of data that could aid in the development of CBRN weapons or of autonomous offensive cyber capabilities.
- Protecting privacy: DOE will work with NSF to support the creation of a research coordination network dedicated to advancing privacy research and scaling PETs.
There are several other recent and ongoing AI initiatives across the Department, including:
Supporting the National AI Research Resource with DOE’s Summit Supercomputer
The Department’s world-leading supercomputers, cutting-edge algorithms and software stacks, and large high-quality scientific datasets are a tremendous asset for AI exploration. To provide computational resources to the research community in support of President Biden’s Executive Order, the Department will extend operations of Oak Ridge National Laboratory’s supercomputer, Summit, through October 2024.
Summit debuted five years ago as the world’s fastest supercomputer and has since been surpassed by Oak Ridge National Laboratory’s newest supercomputer, Frontier. Still, Summit remains a powerful and reliable instrument for scientific discoveries in AI, energy, climate, health, and other areas with a direct impact on national security and global welfare. This extension will enable researchers to pursue projects on one of the world’s leading AI-enabled open science supercomputing platforms. DOE will make Summit available to the National AI Research Resource pilot program through a special allocation program.
El Capitan will be the newest and fastest AI-capable exascale computer when it comes online in mid-2024. Funded by and for the nuclear security mission, it will be sited at Lawrence Livermore National Laboratory and primarily used for the National Nuclear Security Administration’s Stockpile Stewardship program.
Trustworthy and Responsible AI
DOE is also expanding its efforts in trustworthy and responsible AI research. Users of Frontier and Summit can take advantage of a feature unique to the supercomputers at Oak Ridge: the CITADEL framework. This infrastructure provides resources and protocols that enable researchers to safely and securely process protected data at scale, including data containing protected health information, personally identifiable information, data protected under export-controlled information regulations, and other types of data that require privacy.
Expanding DOE’s Biopreparedness Research Virtual Environment
DOE is expanding its research efforts under its Biopreparedness Research Virtual Environment (BRaVE) program with a new project that will leverage generative AI to create a high-quality set of synthetic pathology reports.
BRaVE was launched to leverage the highly successful framework of the National Virtual Biotechnology Laboratory (NVBL), which the Department stood up in 2020 to use the capabilities and expertise across DOE’s 17 National Laboratories to address key technical issues in the fight against COVID-19. BRaVE broadened the scope of the NVBL to provide new capabilities for biopreparedness. The new synthetic pathology reports will serve as a training benchmark for AI models devoted to detecting cancer and other diseases and will open up a new research resource for testing federated learning AI techniques.
This effort builds on a longstanding partnership between DOE, the National Institutes of Health, the U.S. Department of Veterans Affairs, and the Centers for Disease Control and Prevention.
Laying out DOE’s Vision for AI in Science, Energy, and Security
Earlier this year, DOE released a report, “Advanced Research Directions on AI for Science, Energy, and Security,” describing how unique DOE capabilities can enable the community to drive progress in scientific use of AI, building on DOE strengths and investments in computation, data, and communications infrastructure.
The report lays out a vision for DOE to leverage and expand new capabilities in AI to accelerate progress and identifies the pressing need for scientific grounding in areas such as bias, transparency, security, and validation. DOE’s investments in exascale systems, infrastructure, software, theory, and application will contribute to continued U.S. leadership in the development of safe and secure AI.
Originally published at https://www.energy.gov/articles/innovation-safety-and-security-doe-leads-ai