
An AI Czar In The White House?


U.S. President-elect Donald Trump is considering naming an AI czar in the White House to coordinate federal policy and governmental use of the emerging technology, Trump transition sources told Axios November 27.

The news comes as ChatGPT celebrates its second birthday and concern grows globally about AI’s capacity to cause societal harm.

Axios said the appointment of an AI czar in the White House is likely, but not certain. Gary Marcus, a cognitive scientist and leading voice in AI, told The Wall Street Journal this week that a more concrete form of federal oversight of AI is necessary.

In the November 27 interview, Marcus proposed creating some kind of AI agency for the United States. “It should be a cabinet-level position because AI is changing so fast,” Marcus told the Journal. “It’s affecting so many aspects of society. It’s just as important as having a cabinet-level thing for defense or health and so forth. Also, at the top of my list would be some kind of FDA-like process to approve things that are released at large scale.”

Formal oversight is necessary, Marcus argued, because “if [OpenAI CEO] Sam Altman wants to release a technology that puts us all at risk, he can basically do that. There’s no government procedure to say, ‘Hey, slow down here, let’s make sure this thing is OK’.”

In a November 28 blog post, Marcus, who has been critical of ChatGPT since its launch, argues that the last two years “have been filled with tech titans and influencers touting ‘exponential progress’ and assuring us—with zero proof, and no principled argument whatsoever—that hallucinations would go away. Instead, hallucinations are still a regular occurrence.”

The reality, two years on, is that on the most important question of all, factuality and reliability, “we are still pretty much where we were when ChatGPT first came out: wishing and hoping,” he says. “RAG [Retrieval-Augmented Generation], scaling, and system problems haven’t eradicated the inherent tendency of LLMs to hallucinate. Commercial progress has been halting precisely because the tech simply isn’t reliable. Yet hundreds of billions more have been invested on further speculation that scaling would somehow magically cure problems that actually appear to be inherent with the technology. How long do we need to keep up the charade?”

Hallucinations are not the only problem worrying government officials and the public. This week concerns were raised that Microsoft is using customer data from its Microsoft 365 applications, including Word and Excel, to train artificial intelligence models without permission. Microsoft denies it is doing so.

Meanwhile, in a November 27 story, The Guardian reported that the tech companies Amazon, Google and Meta were criticized this week in an Australian Senate select committee inquiry for being especially vague about how they used Australian data to train their powerful artificial intelligence products.

Labor senator Tony Sheldon, the inquiry’s chair, said he was frustrated by the multinationals’ refusal to answer direct questions about their use of Australians’ private and personal information. “Watching Amazon, Meta, and Google dodge questions during the hearings was like sitting through a cheap magic trick – plenty of hand-waving, a puff of smoke, and nothing to show for it in the end,” Sheldon said in a statement, after releasing the final report of the inquiry on November 26.

He called the tech companies “pirates” that were “pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed,” according to The Guardian’s story.

Sheldon said Australia needed “new standalone AI laws” to “rein in Big Tech” and that existing laws should be amended as necessary. “They want to set their own rules, but Australians need laws that protect rights, not Silicon Valley’s bottom line,” he was quoted as saying in The Guardian.

Sheldon said Amazon had refused during the inquiry to disclose how it used data recorded from Alexa devices, Kindle or Audible to train its AI. Google too, he said, had refused to answer questions about what user data from its services and products it used to train its AI products. Meta admitted it had been scraping data from Australian Facebook and Instagram users since 2007, in preparation for future AI models. But the company was unable to explain how users could consent to their data being used for something that did not exist in 2007. Sheldon said Meta also dodged questions about how it used data from its WhatsApp and Messenger products.

Australia is in the process of developing guardrails for high-risk uses of AI and this week passed legislation banning children under 16 from using social media.

To access more of The Innovator’s News In Context stories, click here.
