“We believe we should not rush this work at the expense of doing it right,” wrote the six members of Congress, including House Science Committee Chairman Frank Lucas (R-Okla.), Ranking Member Zoe Lofgren (D-Calif.) and key subcommittee leaders.
NIST is a low-profile agency within the Commerce Department that is central to President Joe Biden’s AI plan. The White House’s October executive order on AI directed NIST to establish the AI Safety Institute, among other tasks, and earlier this year the agency released an influential framework that helps organizations manage AI risks.
But NIST is also notoriously under-resourced and will almost certainly need help from outside researchers to fulfill its growing AI mission.
NIST has not publicly announced which organizations it plans to award research grants to through its AI Safety Institute, and House Science’s letter does not name the organizations in question. But according to an AI researcher and an AI policy professional who works with major technology companies, both of whom are familiar with the situation, one of them is RAND.
A recent RAND report on the biosecurity risks posed by advanced AI models is cited in a footnote to the House letter as an example of worrisome research that has not undergone academic peer review.
A RAND spokesperson did not respond to questions about the think tank’s partnership with NIST on AI safety research.
Lucas spokesperson Heather Vaughan said that at a Nov. 2 briefing, three days after Biden signed the AI executive order, committee staff were told that NIST intended to award AI safety research grants to two outside organizations with no apparent competition, public posting or notice of funding opportunity. NIST officials did not mention those plans at a Nov. 17 hearing on the AI Safety Institute or at a Dec. 11 briefing for congressional staff, she said, which deepened lawmakers’ concerns.
Vaughan would not confirm or deny that RAND is one of the organizations referenced by the committee, and would not identify the other organization that NIST told committee staff it plans to partner with on AI safety research. A spokesperson for Lofgren declined to comment.
RAND’s initial partnership with NIST stemmed from work on Biden’s AI executive order, which was written with extensive input from senior RAND staff. The venerable think tank is under increasing scrutiny, including from within, for accepting more than $15 million in AI and biosecurity grants earlier this year from Open Philanthropy, a prolific funder of effective altruism causes backed by billionaire Facebook co-founder and Asana CEO Dustin Moskovitz.
Many AI and biosecurity researchers say that effective altruists, whose ranks include RAND CEO Jason Matheny and senior information scientist Jeff Alstott, overemphasize the potentially devastating risks of AI and biotechnology. Those researchers say such risks are largely unsupported by evidence, and they warn that the movement’s ties to top AI companies suggest an effort to kneecap those companies’ competitors or distract regulators from existing AI harms.
“A lot of people are wondering, ‘How is RAND so openly getting [Open Philanthropy] money and now getting [U.S. government] money to do this?’” said the AI policy expert, who was granted anonymity due to the sensitivity of the topic.
In the letter, the lawmakers warned NIST that “scientific merit and transparency must remain paramount considerations,” and said they expect recipients of federal funding for AI safety research “to adhere to the same rigorous guidelines of scientific and methodological quality that characterize the broader federal research enterprise.”
A NIST spokesperson said the agency is “exploring options for a competitive process to support collaborative research opportunities” related to the AI Safety Institute, adding that “no decisions have been made yet.”
The spokesperson would not say whether NIST officials told House Science staff during the Nov. 2 briefing that they intended to partner with RAND on AI safety research. The spokesperson said NIST “maintains scientific independence in all its activities” and is executing its responsibilities under the [AI executive order] “in an open and transparent manner.”
Both the AI researcher and the AI policy expert said that members and staff on the House Science Committee are concerned about NIST’s choice to partner with RAND, given the think tank’s ties to Open Philanthropy and its growing focus on AI’s existential risks.
“The House Science Committee is really dedicated to measurement science,” said the AI policy expert. “And [the existential risk community] does not do measurement science. There are no benchmarks they use.”
Rumman Chowdhury, an AI researcher and co-founder of the tech nonprofit Humane Intelligence, said the committee’s letter suggests lawmakers are beginning to understand how important measurement is as Congress decides how to regulate AI.
“There is not only AI hype, but also AI governance hype,” Chowdhury wrote in an email. She said the House letter suggests Capitol Hill is becoming aware of “ideological and political perspectives framed in scientific language” that aim to capture how “AI governance” is defined, rather than measurement and description grounded in what is determined to be most important.