More and more companies are leveraging artificial intelligence (AI) in their daily operations. Many technologies help increase productivity and maintain public safety. But some industries are pushing back against certain aspects of AI. And some industry leaders are working to balance the good with the bad.

“We’re looking at owners and operators of critical infrastructure, including water, health care, transportation, and communications companies, some of whom are starting to implement these AI capabilities,” said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency. “We want to make sure we’re integrating them in a way that doesn’t introduce a lot of new risks.”


Consulting firm Deloitte recently surveyed business leaders around the world. The results showed that uncertainty about government regulation is a bigger problem than the actual implementation of AI technology. Asked about the biggest barrier to adopting AI tools, 36% cited regulatory compliance, 30% cited difficulty managing risk, and 29% cited the lack of a governance model.

Easterly said she is not surprised that, despite some of the risks AI can pose, governments have not taken further steps to regulate the technology.

“These will be the most powerful technologies of this century, perhaps ever,” Easterly said. “Most of these technologies are built by private companies that are incentivized to provide profits to their shareholders, so there is a need to ensure that these technologies are built in a security-first way. And that’s where I think Congress has a role to play, in ensuring that these technologies are safe and reliable for the American people.”


While Congress has considered comprehensive protections for AI, it has primarily been up to state governments to set the rules.

“There are certainly a lot of good things about how AI works, but in the wrong hands it also has the potential to be destructive,” Gov. Bill Lee (R-Tenn.) said in March when he signed a state law protecting musicians from AI.

The Ensuring Likeness Voice and Image Security Act (ELVIS Act) classifies voice likeness as a property right. Lee signed the bill into law this year, making Tennessee the first state to enact protections for singers. Illinois and California have since passed similar laws. Other states, including Tennessee, have laws that provide that your name, photograph, and likeness are also considered property rights.

“Our voices and likenesses are not just digital blobs for machines to reproduce without our consent; they are an indelible part of who we are that has allowed us to showcase our talents and grow our audiences,” Wilson said.


Wilson claimed that her image and likeness were used through AI to sell products she did not endorse.

“For decades, we’ve been using technology that frankly wasn’t built for safety. It was built for speed to market and cool features. And frankly, that’s why we have cybersecurity,” Easterly said.

The Federal Trade Commission (FTC) has cracked down on some deceptive AI marketing practices. In September, the agency launched Operation AI Comply, which targets unfair and deceptive business practices involving AI, such as fake reviews generated by chatbots.

“I’m a technologist at heart and an optimist at heart, so I’m very excited about some of these capabilities, and I’m not worried about some of the Skynet stuff,” Easterly said. “We want to make sure this technology is designed, developed, tested, and delivered with security as a priority.”


Chatbots have drawn some good reviews, too. This year, Hawaii approved legislation to further invest in research using AI tools in health care. In one study, an OpenAI chatbot outperformed doctors in diagnosing medical conditions. The experiment compared doctors using ChatGPT with doctors using traditional resources: accuracy for both groups of physicians was around 75%, while the chatbot alone scored over 90%.

Not only is AI being used to detect diseases, but it can also help emergency personnel detect catastrophic events. After deadly wildfires devastated Maui, the Hawaii State Legislature also allocated funding to the University of Hawaii to map wildfire risk across the state and improve prediction technology. That includes a $1 million AI-driven platform. Hawaiian Electric is also deploying high-definition cameras throughout the state.


“Over months and years, we’re going to learn to become more sensitive to what is a fire and what is not,” said Dimitri Kusnezov, the Energy Department’s deputy secretary for AI and technology.

California and Colorado have deployed similar technology. Within minutes, AI can detect when a fire starts and where it is likely to spread.

AI is also being used to keep students safe. Several school districts across the country now have firearm detection systems in place. In Utah, authorities are notified within seconds when a potential gun is detected entering a school.

“We want to create a safe and engaging educational environment, but we don’t want security to affect education,” said Michael Tanner, chief executive officer of the Park City School District in Utah.

Search and rescue workers work in a fire-ravaged area in Lahaina, Hawaii, on August 18, 2023. (Matt McClain/The Washington Post via Getty Images)

Maryland and Massachusetts are also considering state funding to deploy similar technology. Both states voted to create commissions to study emerging firearms technologies. The Maryland commission will decide whether school construction funds should be used to build such systems, while the Massachusetts commission will weigh the risks associated with the new technology.

“We want to leverage these capabilities to better protect the critical infrastructure that Americans depend on every hour of every day,” Easterly said.

The European Union passed regulations on AI this year that rank risks from minimal to unacceptable. Chatbots fall under transparency requirements and must notify users that they are interacting with a machine. Software for critical infrastructure is considered high risk and must comply with strict requirements. Most technologies that create personal profiles or scrape public images to build databases are considered unacceptable.


The United States has some guidelines on the use and implementation of AI, but experts say they do not go as far as the EU’s risk classifications.

“To ensure we win this race for artificial intelligence, we need to dominate in America, and that requires investment and innovation,” Easterly said. “We must become the engine of innovation that makes America the greatest economy on earth.”
