
President Joe Biden’s Oct. 30 executive order on artificial intelligence is sweeping in scope, with a heavy emphasis on safety and little regard for freedom.

It contains provisions addressing safety protections around personal privacy and copyright in training data, watermarking of AI content and deepfakes, civil rights fairness, workforce displacement by AI technology, and use cases that could affect the military and critical infrastructure.

It also mandates a minimal level of transparency by requiring AI companies to share the results of red-team safety testing of their platforms, ultimately subjecting them to standards, so far only vaguely referenced, that the National Institute of Standards and Technology has yet to develop.


Many other government agencies are given roles in AI regulation and safety standards, including the Departments of Commerce, Treasury, Energy, Defense and Homeland Security. In fact, only a handful of agencies have not been enlisted to participate directly in regulating AI.

Government regulations regarding AI transparency should also include content guidelines. (Getty Images)

The goal of ensuring the safe and non-military use of AI is laudable. What’s missing is real transparency around the training data and content restrictions that govern how these black-box AI systems generate content and whether “safe” standards include viewpoint neutrality.

Such transparency and viewpoint neutrality will be critical if one or a few AI platforms achieve monopoly status, potentially displacing today’s de facto monopoly, Google Search. It would help ensure that news, opinion and academic debate are distributed freely and fairly, without censorship or bias from so-called authoritative sources promoting the narrative of whichever government holds power.

Several well-known social media, search and video-sharing platforms are already effectively monopolies, controlling 80% to 90% of content visibility and sharing.

These dominant platforms have become the new town square for news, political opinion and academic debate, as Americans now report getting most of their news from these online sources. If a content creator or news site doesn’t appear on the first page of Google search results, it might as well not exist as far as reader traffic is concerned.

If mandating both online safety and viewpoint neutrality is a bridge too far for a divided Congress, a possible first step that could garner bipartisan support and have a powerful positive impact would be to mandate public transparency around the content moderation rules of these proprietary platforms. Achieving such transparency would not require companies to disclose trade secrets or core intellectual property; it would instead focus on:

1. Publishing content moderation standards, with examples, so users can easily understand what content is allowed and what content will be moderated in some way.

2. When an enforcement action is taken, describing which specific content violated the rules and which specific rule it violated.

3. When third-party fact-checkers are involved in an enforcement action, publishing the name, credentials, funding sources and track record of each fact-checker or fact-checking organization.


4. Promptly reporting publicly any communications with government entities, including entities receiving government funding and all employees and contractors of such entities, with exceptions for legitimate law enforcement or national security requirements.

5. If an AI platform approaches monopoly status, with a share of more than 50% of AI content generation as measured by active users or revenue, requiring that platform to publish all relevant training data sources and the content rules governing the content it generates.

Enforcement could be handled much like enforcement against false advertising, with fines for companies whose platforms fail to publish their content moderation rules or to comply with them. But the glare of publicity may prove more powerful and more important in pressing these companies to publish reasonable content rules and follow them consistently and fairly.


These or similar transparency requirements have appeared separately in various proposals from both Democrats and Republicans in Congress.

Transparency alone will not solve every challenge of online safety, viewpoint neutrality and the issues surrounding existing Section 230 law, but it is a strong start, and it is far more likely to garner the bipartisan support needed for passage.



