Less than two years after Google dismissed two researchers who criticized the biases built into artificial intelligence systems, the company has fired a researcher who questioned a paper it published on the abilities of a specialized type of artificial intelligence used in making computer chips.
The researcher, Satrajit Chatterjee, led a team of scientists in challenging the celebrated research paper, which appeared last year in the scientific journal Nature and said computers were able to design certain parts of a computer chip faster and better than humans.
Chatterjee, 43, was fired in March, shortly after Google told his team that it would not publish a paper that rebutted some of the claims made in Nature, said four people familiar with the situation who were not permitted to speak openly on the matter. Google confirmed in a written statement that Chatterjee had been “terminated with cause.”
Google declined to elaborate on Chatterjee’s dismissal, but it offered a full-throated defense of the research he criticized and of its unwillingness to publish his assessment.
“We thoroughly vetted the original Nature paper and stand by the peer-reviewed results,” Zoubin Ghahramani, a vice president at Google Research, said in a written statement. “We also rigorously investigated the technical claims of a subsequent submission, and it did not meet our standards for publication.”
Chatterjee’s dismissal was the latest example of discord in and around Google Brain, an AI research group considered a key to the company’s future. After spending billions of dollars to hire top researchers and create new kinds of computer automation, Google has struggled with a wide variety of complaints about how it builds, uses and portrays those technologies.
Tension among Google’s AI researchers reflects much larger struggles across the tech industry, which faces myriad questions over new AI technologies and the thorny social issues that have entangled these technologies and the people who build them.
The recent dispute also follows a familiar pattern of dismissals and dueling claims of wrongdoing among Google’s AI researchers, a growing concern for a company that has bet its future on infusing artificial intelligence into everything it does. Sundar Pichai, CEO of Google’s parent company, Alphabet, has compared AI to the arrival of electricity or fire, calling it one of humankind’s most important endeavors.
Google Brain started as a side project more than a decade ago when a group of researchers built a system that learned to recognize cats in YouTube videos. Google executives were so taken with the prospect that machines could learn skills on their own that they rapidly expanded the lab, establishing a foundation for remaking the company with this new artificial intelligence. The research group became a symbol of the company’s grandest ambitions.
But even as Google has promoted the technology’s potential, it has encountered resistance from employees about its application. In 2018, Google employees protested a contract with the Department of Defense, concerned that the company’s AI could end up killing people. Google eventually pulled out of the project.
In December 2020, Google fired one of the leaders of its Ethical AI team, Timnit Gebru, after she criticized the company’s approach to minority hiring and pushed to publish a research paper that pointed out flaws in a new type of AI system for learning languages.
Before she was fired, Gebru was seeking permission to publish a research paper about how AI-based language systems, including technology built by Google, may end up using the biased and hateful language they learn from text in books and on websites. Gebru said she had grown exasperated over Google’s response to such complaints, including its refusal to publish the paper.
A few months later, the company fired the other head of the team, Margaret Mitchell, who publicly denounced Google’s handling of the situation with Gebru. The company said Mitchell had violated its code of conduct.
The paper in Nature, published last June, promoted a technology called reinforcement learning, which the paper said could improve the design of computer chips. The technology was hailed as a breakthrough for artificial intelligence and a vast improvement to existing approaches to chip design. Google said it used this technique to develop its own chips for artificial intelligence computing.
Google had been working on applying the machine learning technique to chip design for years, and it published a similar paper a year earlier. Around that time, Google asked Chatterjee, who has a doctorate in computer science from the University of California, Berkeley, and had worked as a research scientist at Intel, to see if the approach could be sold or licensed to a chip design company, the people familiar with the matter said.
But Chatterjee expressed reservations in an internal email about some of the paper’s claims and questioned whether the technology had been rigorously tested, three of the people said.
While the debate about that research continued, Google pitched another paper to Nature. For the submission, Google made some adjustments to the earlier paper and removed the names of two authors, who had worked closely with Chatterjee and had also expressed concerns about the paper’s main claims, the people said.
When the newer paper was published, some Google researchers were surprised. They believed it had not followed a publishing approval process that Jeff Dean, the company’s senior vice president who oversees most of its AI efforts, said was necessary in the aftermath of Gebru’s firing, the people said.
Google and Anna Goldie, one of the paper’s two lead authors, who wrote it with fellow computer scientist Azalia Mirhoseini, said the changes from the earlier paper did not require the full approval process. Google allowed Chatterjee and a handful of internal and external researchers to work on a paper that challenged some of its claims.
The team submitted the rebuttal paper to a resolution committee for publication approval. Months later, the paper was rejected.
The researchers who worked on the rebuttal paper said they wanted to escalate the issue to Pichai and Alphabet’s board of directors. They argued that Google’s decision to not publish the rebuttal violated its own AI principles, including upholding high standards of scientific excellence. Soon after, Chatterjee was informed that he was no longer an employee, the people said.
Goldie said that Chatterjee had asked to manage their project in 2019 and that they had declined. When he later criticized it, she said, he could not substantiate his complaints and ignored the evidence they presented in response.
“Sat Chatterjee has waged a campaign of misinformation against me and Azalia for over two years now,” Goldie said in a written statement.
She said the work had been peer-reviewed by Nature, one of the most prestigious scientific publications. And she added that Google had used their methods to build new chips and that these chips were currently used in Google’s computer data centers.
Laurie M. Burgess, Chatterjee’s attorney, said it was disappointing that “certain authors of the Nature paper are trying to shut down scientific discussion by defaming and attacking Dr. Chatterjee for simply seeking scientific transparency.” Burgess also questioned the leadership of Dean, who was one of 20 co-authors of the Nature paper.
“Jeff Dean’s actions to repress the release of all relevant experimental data, not just data that supports his favored hypothesis, should be deeply troubling both to the scientific community and the broader community that consumes Google services and products,” Burgess said.
Dean did not respond to a request for comment.
After the rebuttal paper was shared with academics and other experts outside Google, the controversy spread throughout the global community of researchers who specialize in chip design.
The chipmaker Nvidia says it has used methods for chip design that are similar to Google’s, but some experts are unsure what Google’s research means for the larger tech industry.
“If this is really working well, it would be a really great thing,” said Jens Lienig, a professor at the Dresden University of Technology in Germany, referring to the AI technology described in Google’s paper. “But it is not clear if it is working.”
This article originally appeared in The New York Times.