

Regulating AI Should be Joint Endeavor

2024-06-12 09:52:45


PHOTO: XINHUA

By Staff Reporters

Regulation of AI is now firmly on the global agenda. On May 21, 16 companies from different countries and regions, including AI-tech giants OpenAI, Google, Amazon and Microsoft, signed the Frontier AI Safety Commitments at the AI Seoul Summit. On the same day, the EU AI Act received its final approval.

Since its release at the end of 2022, ChatGPT has raised public awareness of the growing risks surrounding advanced AI systems, which may challenge many norms of human life.

Generally speaking, AI risks perpetuating discrimination, spreading misinformation and exposing sensitive personal information. In one well-known case, a training data extraction attack on GPT-2 recovered personally identifiable information, including phone numbers and email addresses, that had been published online.

Large language models faithfully mirror the language found in their training data, which may come from sources as varied as online books and internet forums.

Unlike humans, who usually weigh societal and personal factors such as listeners' emotional responses, social impact and ideological attitudes when they speak, these models lack such judgment, which may aggravate discrimination against certain groups of people. This alone suggests that the quality of AI training data cannot be fully guaranteed.

AI-induced misinformation may get worse. Unlike discriminatory attitudes, which are often developed unconsciously over time by assimilating information without careful reflection, misinformation is false information spread regardless of any intent to mislead, and it erodes societal trust in shared information.

On the regulatory front, governments need to establish agile AI regulatory agencies and provide them with adequate funding. The annual budget of the U.S. AI Safety Institute is currently $10 million, which may sound substantial but pales in comparison to the $6.7 billion budget of the U.S. Food and Drug Administration.

Countries across the globe should not only establish laws based on their existing legal systems, but also seek viable approaches to cooperation. The AI field requires stricter risk assessments and enforceable measures, rather than reliance on vague model evaluations. Meanwhile, AI development companies should be required to prioritize safety and demonstrate that their systems will not cause harm. In a nutshell, AI developers must take responsibility for ensuring the safety of their technologies.

The international community has now decided that comprehensive regulation of AI needs to be accelerated. It needs to be a united effort, and every company involved in AI development must commit to upholding these standards to protect societal values and trust. Only through vigilant regulation and cooperation can we harness the full potential of AI while mitigating its risks.

Editor: 林雨晨

