
ARTIFICIAL INTELLIGENCE (AI): REGULATION AND APPLICATION



Recently, the world's first-ever AI Safety Summit was held at Bletchley Park in Buckinghamshire near London (United Kingdom).

• 28 major countries, including the United States, China, Japan, the UK, France and India, along with the European Union, signed a declaration named the Bletchley Declaration.
• The Declaration fulfils key summit objectives in establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on frontier AI safety and research.


What are the risks associated with AI development that necessitate its regulation?

Control of Big Tech: Decisions about the development of AI are overwhelmingly in the hands of the big tech companies with access to vast stores of digital data and immense computing power.

Misuse: Substantial risks may arise from intentional misuse or from unintended failures to keep AI systems aligned with human intent.
o Frontier AI systems may amplify risks such as disinformation through the use of algorithms.
o Increasing instances of deepfakes, intentional sharing of harmful information and cyber fraud are examples of this, e.g., instances observed during elections across the world.

Model Collapse scenario: Over time, datasets may be poisoned by AI-generated content, which changes the patterns in the data and bakes in the mistakes of previous AI models, e.g., issues of racial discrimination seen in earlier AI models (a toy illustration of this feedback loop is sketched after this list of risks).

Model adoption challenges: There are risks associated with the different models for AI development.
o Closed: An ecosystem limited to a small number of closed models and private organizations can prevent misuse by malicious actors, but allows safety failures and undetected biases to propagate.
o Open-source: On the other hand, an open-source model makes it easier to spot biases, risks or faults, but increases the risk of misuse by malicious actors.

Cyber risks: Global tensions and the rise in cyber capabilities have led to escalating cyber crime or hacking incidents and consequent disruption of public services.

Economic risks: The effects of AI in the economy, such as labour market displacement or the automation of financial markets, could cause social and geopolitical instability.
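A toy illustration of the model collapse dynamic (a minimal sketch for intuition, not drawn from any real system): a "model" that learns only a mean and a standard deviation is re-fitted, generation after generation, on samples drawn from its own previous output instead of fresh real data, so sampling errors compound and the fitted distribution drifts away from the original.

```python
import random
import statistics

# Toy model collapse: each generation is trained only on synthetic samples
# produced by the previous generation's fitted distribution. Finite-sample
# errors compound, so the fit drifts away from the original N(0, 1) data.

random.seed(42)
SAMPLES_PER_GENERATION = 200

# Generation 0 is trained on "real" data.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES_PER_GENERATION)]

for generation in range(21):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next "dataset" is purely AI-generated content from the current fit.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GENERATION)]
```

Run for a few dozen generations, the sketch typically shows the fitted parameters wandering away from the original distribution, mirroring how AI-generated content can gradually corrupt future training datasets.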


What has been done to regulate AI?

European Union: EU’s AI Act intends to be the world’s first comprehensive AI law.
o It classifies AI systems into four tiers of risk, with different tiers subject to different regulations (a rough illustrative mapping of the tiers is sketched below).
o A new EU AI Office would be created to monitor enforcement, with penalties including fines of up to 6% of total worldwide revenue.
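As a rough, non-authoritative illustration of this tiered approach, the sketch below maps the four risk categories commonly described for the EU AI Act (unacceptable, high, limited and minimal risk) to a simplified summary of the obligations attached to each; the summaries are illustrative assumptions, not the text of the regulation.

```python
# Simplified, illustrative mapping of the EU AI Act's four risk tiers to the
# broad type of obligation attached to each; summaries are rough assumptions,
# not the wording of the regulation.
EU_AI_ACT_RISK_TIERS = {
    "unacceptable_risk": "prohibited outright (e.g., social scoring systems)",
    "high_risk": "strict requirements such as conformity assessment and registration",
    "limited_risk": "transparency obligations (e.g., disclosing AI-generated content)",
    "minimal_risk": "no mandatory obligations; voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the simplified obligation summary for a given risk tier."""
    return EU_AI_ACT_RISK_TIERS.get(tier, "unknown tier")

if __name__ == "__main__":
    for tier, summary in EU_AI_ACT_RISK_TIERS.items():
        print(f"{tier:18} -> {summary}")
```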

USA: Regulation seeks to set standards on security and privacy protections and builds on voluntary commitments adopted by more than a dozen companies.

India: The Government of India is contemplating bringing out a comprehensive Digital India Act to regulate AI.
o NITI Aayog has released the National Strategy on Artificial Intelligence (NSAI), which focuses on Responsible AI for All (RAI) principles.

China: China's regulations require algorithms to undergo advance review by the state and to adhere to core socialist values.
o AI-generated content must be properly labelled and must respect rules on data privacy and intellectual property.

What can be done to better regulate AI systems?

International Cooperation: Since many challenges posed by AI regulation cannot be addressed at a purely domestic level, international cooperation is urgently needed to establish basic global standards.

Impact assessment: International efforts to examine and address the potential impact of AI systems are needed.

Proportionate Governance: Countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI.

Private sector accountability: There is a need for increased transparency from private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and the development of relevant public sector capability and scientific research.
Better Design: To reduce the degree and impact of bias and harmful responses, there is a need for curated, fine-tuned datasets that include more diverse groups, along with continuous feedback mechanisms.
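As a concrete, hedged example of what curated datasets with more diverse groups can mean in practice, the sketch below audits how well different groups are represented in a hypothetical fine-tuning dataset and flags any group that falls below an assumed minimum share; the field names and the 10% threshold are illustrative assumptions, not an established standard.

```python
from collections import Counter

def audit_representation(records, group_field="group", min_share=0.10):
    """Flag groups whose share of the dataset falls below min_share.

    The `group_field` name and the 10% default threshold are illustrative
    assumptions for this sketch.
    """
    counts = Counter(record[group_field] for record in records)
    total = sum(counts.values())
    report = {}
    for group, count in counts.items():
        share = count / total
        report[group] = {
            "count": count,
            "share": round(share, 3),
            "under_represented": share < min_share,
        }
    return report

# Hypothetical curated dataset in which group_2 is badly under-represented.
dataset = (
    [{"text": f"example {i}", "group": "group_1"} for i in range(10)]
    + [{"text": "example x", "group": "group_2"}]
)

for group, stats in audit_representation(dataset).items():
    print(group, stats)
```

In a real pipeline, such a check would run before each fine-tuning round, with reviewer feedback continuously fed back into the curation loop.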

Some Key Application Areas of AI

Principles for Responsible AI Management

