On Tuesday, May 16, Mr. Altman went to Washington. And today the world feels a little scary.
There is so much movement, so much talk, and so many concerns about the rapid penetration of artificial intelligence (AI) into every area of our lives. Hardly a day goes by without new reports about the groundbreaking impact and potential dangers of this technology. Large language models like ChatGPT have amazed the world with how fast they learn and what they can already do.
Therefore, it was only a matter of time before the government stepped in. Something that moves this quickly and has such a large impact on society inevitably faces questions about risk and regulation. That’s why this week, OpenAI CEO Sam Altman, whose company built ChatGPT, traveled to Washington to testify at a congressional hearing on the oversight and regulation of generative AI.
It was an uncomfortable discussion, closer to something you would expect from a sci-fi series. Consider some of the language we heard from the Capitol and from the companies behind AI.
OpenAI CEO Sam Altman sits before the start of the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law hearing on “Oversight of AI: Rules for Artificial Intelligence” on Tuesday, May 16, 2023. (Bill Clark/CQ-Roll Call, Inc via Getty Images)
Language of doom:
- Altman acknowledged AI could cause “significant harm to the world” if the technology goes wrong.
- The possibility that AI could be “destructive to humanity.”
- “If this technology goes wrong, it can go quite wrong.”
Language of speed:
- AI technology is “moving as fast as it can.”
- It is “evolving step by step.”
Language about how advanced the technology is:
- “It shows traces of human reasoning.”
- It could “be smarter than people.”
The fact that Congress is working across party lines to regulate AI, and that the technology’s own inventors and industry insiders like Elon Musk are at the forefront of sounding the alarm and calling for regulation, should give the industry reason to press pause.
As with other potentially hazardous industries, from cigarettes to nuclear energy, the need for regulation clearly exists.

ChatGPT co-wrote an episode of the TV comedy series ‘South Park’ in March 2023. (Marco Bertorello/AFP via Getty Images)
But in a time of concern and fear, let’s not lose sight of AI’s incredible potential. Whether you love it, hate it, get excited about it, or fear it, it’s here to stay. And it’s already impacting your life in some way.
Altman’s visit to the Capitol provides an opportunity to reconsider, and possibly reframe, our perceptions of and positions on AI without negating the need for regulation. Here are four easy ways to reframe the debate on this amazing technology.
- AI: dangerous threat or welcome innovation? Throughout history, every century has brought revolutions that propel us forward. The printing press. Manufacturing. The internet. Now we have AI. We can treat it as a threat to free speech and to humanity in general. Or we can embrace it as a marvelous new frontier and lead the world in what America does best: innovation.
- Will it come after us or make our lives easier? There is no doubt that generative AI will have a major impact on the labor market. According to Goldman Sachs economists, the latest wave of AI like ChatGPT could expose the equivalent of 300 million full-time jobs around the world to some form of automation, meaning “labor markets could face significant disruption.”
But it won’t just come after us and replace jobs. Rather than seeing AI as a job-killer, why not see it as a potential productivity booster?
Widespread adoption of AI could ultimately boost labor productivity and raise annual global GDP by 7% over a decade, according to a Goldman Sachs report: “The combination of significant labor cost savings, new job creation, and productivity gains for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies such as the electric motor and the personal computer.”
- Regulate what matters. Regulation is coming. Most people want it. The AI industry itself wants it. However, as with many issues, few believe government is capable of getting regulation right. Regulation must be defined on our terms, setting a framework that reassures people about their greatest concerns: bias, privacy, misinformation and more.
- Finally, don’t sell this technology short. Believe it or not, AI is still basically at version 1.0. As with many emerging technologies and breakthroughs, many of the weaknesses that exist today may not exist tomorrow. We can focus on those current flaws, or we can frame the technology as an amazing development that is here to stay and will only keep improving.
Our own company is exploring ways to use generative AI to support and enhance our work, and we already see great potential for it to improve productivity. Instead of fearing it, we should embrace it. And our language needs to reflect that shift in thinking.
Lee Carter is president and partner of Maslansky + Partners, a language strategy firm based on the idea that “it’s not what you say, it’s what they hear,” and the author of “Persuasion: Convincing Others When Facts Don’t Seem to Matter.” Follow her on Twitter @lh_carter.