Bench Talk for Design Engineers | The Official Blog of Mouser Electronics


Regulating and Innovating with the AI Act

Carolyn Mathas

(Source: kiatipol/stock.adobe.com; generated with AI)

Artificial Intelligence (AI) is expanding at a high rate, and while its applications may provide amazing opportunities and breakthroughs, the potential for AI to be misused remains a global concern. In response, the European Union (EU) recently introduced the AI Act, the first legal framework to address the risks of AI globally and to set out rules, requirements, obligations, rights, safety regulations, and ethics for its use.

The AI Act is one of three policy measures supporting AI development in the EU. The remaining two, the AI Innovation Package and the Coordinated Plan on AI, round out responsible AI development. At the heart of the legislation is risk management.

Four Levels of Risk

As shown in Figure 1, the new regulatory framework defines four levels of risk for AI systems: unacceptable risk, high risk, limited risk, and minimal risk.[1]

Figure 1: The four levels of risk as defined by the European Commission. (Source: European Union, licensed under CC BY 4.0.)

At the top of the risk chain are unacceptable risks that violate fundamental rights. Such actions include social scoring, individual predictive policing, using subliminal technology, and untargeted scraping of the internet or CCTV for facial images to expand databases. Other actions, such as law enforcement's use of real-time remote biometric ID in publicly accessible spaces and biometric profiling, are mostly prohibited but granted narrow exceptions that are strictly defined and regulated.

The high-risk category includes critical infrastructures such as water, electricity, and road traffic; exam scoring; remote biometric identification; and CV-sorting software used for recruitment. This category also includes migration, asylum, and border control management and notes the potential for law enforcement to interfere with individual rights. These systems are subject to strict obligations before they can be released.

Limited risk refers to the dangers of a lack of transparency in AI usage. The AI Act includes transparency obligations to ensure humans are informed. This category covers chatbots and AI-generated text, audio, and video content that constitutes deepfakes; these must be labeled explicitly so users know they are dealing with AI. The act provides for free use of minimal-risk AI, including AI-enabled video games and spam filters.
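Taken together, the four tiers amount to a simple classification scheme. Below is a minimal sketch in Python of how that structure might be modeled; the tier names follow the framework, but the example use cases and the classify() helper are hypothetical illustrations, not definitions from the act.

```python
from enum import Enum

# Illustrative only: a minimal model of the AI Act's four-tier framework.
# The tiers follow the framework; the use-case mapping is hypothetical.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before release"
    LIMITED = "transparency obligations"
    MINIMAL = "free use"

# Hypothetical mapping of use cases described in this article to tiers
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "untargeted facial-image scraping": RiskTier.UNACCEPTABLE,
    "cv-sorting for recruitment": RiskTier.HIGH,
    "remote biometric identification": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "ai-generated media (deepfake)": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "ai-enabled video game": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; unknown systems need a real legal assessment."""
    # Defaulting to HIGH is a conservative choice for this sketch only.
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("chatbot", "social scoring", "spam filter"):
        tier = classify(case)
        print(f"{case}: {tier.name} ({tier.value})")
```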

Developers of free and open-source models are exempt from most of the obligations, except for providers of general-purpose AI models with systemic risk. Also exempt are research, development, and prototyping activities necessary to release technology to market and those exclusively for military, defense, and national security. Ultimately, the AI Act is in place to ban harmful practices and create trust in AI technology, which in turn fosters innovation.

Regulation and Innovation: Finding a Balance

The EU AI Act is designed to increase trust and provide legal certainty so that companies and governments can adopt AI more widely. To that end, the act sets consistent, enforceable rules that give AI providers access to bigger markets and help them create products that users and consumers want.

The AI Act also establishes regulatory sandboxes, enabling real-world testing of technologies and products that do not yet fully comply with the applicable legal and regulatory requirements. This allows companies to develop products and services under regulatory supervision. Medical devices and services are a major category for such testing.

Additionally, the strategy behind the AI Act is to make the EU a world-class hub for AI and to ensure that AI is human-centric and trustworthy. The European Commission has a comprehensive strategy to promote the development and adoption of AI in Europe, enabling conditions for growth and use while building strategic leadership in high-impact sectors. Toward this end, the EU AI Act provides comprehensive rules, penalties for infringement, and concepts for future-proofing AI.

Penalties for Infringement

The onus is on member states to establish effective and appropriate penalties, including fines, and to communicate each infringement back to the European Commission.[2] The fines can be steep, for example:

  • Up to €35 million or 7 percent of the total worldwide annual turnover of the preceding financial year for infringement of prohibited practices or non-compliance related to data requirements
  • Up to €15 million or 3 percent of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the regulation, including the rules on general-purpose AI models
  • Up to €7.5 million or 1.5 percent of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete, or misleading information in response to a request

For each infringement category, the threshold is the lower of the two amounts for small- and medium-sized enterprises and the higher for other companies. According to the act, EU institutions, agencies, or bodies are subject to rules and possible penalties, and the European Data Protection Supervisor can impose fines on them for infringement.
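To make the "lower of the two amounts" rule concrete, here is a short arithmetic sketch. The tier figures come from the list above; the annual_turnover_eur and is_sme inputs are hypothetical, and actual penalties are set by member states rather than by a formula like this.

```python
# Sketch of the fine caps described above. The figures come from the
# article; the inputs are hypothetical illustrations.
FINE_TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),   # €35M or 7% of turnover
    "other_obligations":      (15_000_000, 0.03),   # €15M or 3% of turnover
    "misleading_information": (7_500_000,  0.015),  # €7.5M or 1.5% of turnover
}

def max_fine_eur(category: str, annual_turnover_eur: float, is_sme: bool) -> float:
    fixed_cap, turnover_share = FINE_TIERS[category]
    percentage_cap = turnover_share * annual_turnover_eur
    # SMEs: the lower of the two amounts applies; other companies: the higher.
    pick = min if is_sme else max
    return pick(fixed_cap, percentage_cap)

# Example: a large company with €1B turnover engaging in a prohibited practice
print(max_fine_eur("prohibited_practices", 1_000_000_000, is_sme=False))  # 70000000.0
# The same turnover as an SME caps at €35M, the lower of the two amounts
print(max_fine_eur("prohibited_practices", 1_000_000_000, is_sme=True))   # 35000000.0
```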

A Work in Progress

The EU AI Act takes a future-proof approach to the evolution of AI. For example, general-purpose AI models trained using a cumulative amount of compute greater than 10²⁵ floating-point operations (FLOPs) are identified as having systemic risk.[3] The AI Office of the European Commission may eventually decide to update that threshold to meet technological advances. By Q2 2025, the AI Office and AI Board are obligated to facilitate the development of codes of practice covering general-purpose AI obligations and watermarking techniques that label content as artificially generated. These codes must have specific, measurable objectives, including key performance indicators that account for differences in the size and capacity of various providers. The AI Office will also create a forum for cooperation with the open-source community to develop best practices.
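For a sense of how a model might be measured against that threshold, the sketch below uses the common "6 × parameters × training tokens" scaling approximation to estimate training compute; that heuristic is an assumption of this example, not something the regulation prescribes.

```python
# Rough sketch of the systemic-risk compute threshold. The 1e25 figure is
# from the act; the 6 * N * D estimate is a common scaling heuristic for
# dense transformer training, not a method the regulation specifies.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Widely used approximation for total training compute
    return 6.0 * n_parameters * n_training_tokens

def has_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 1-trillion-parameter model trained on 10 trillion tokens
flops = estimated_training_flops(1e12, 10e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {has_systemic_risk(1e12, 10e12)}")
# 6.00e+25 FLOPs -> systemic risk: True
```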

By the end of 2030, the obligations stipulated in the AI Act will take effect for AI systems that are components of the large-scale IT systems established by EU law in the areas of freedom, security, and justice.

Expectations are that the work will be ongoing and variable for the long term. Given that AI is anything but static, risk regulation may struggle to stay even one step ahead.

   

Sources

[1] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[2] https://artificialintelligenceact.eu/article/99/

[3] https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683





Carolyn Mathas is a freelance writer/site editor for United Business Media’s EDN and EE Times, IHS 360, and AspenCore, as well as individual companies. Mathas was Director of Marketing for Securealink and Micrium, Inc., and provided public relations, marketing and writing services to Philips, Altera, Boulder Creek Engineering and Lucent Technologies. She holds an MBA from New York Institute of Technology and a BS in Marketing from University of Phoenix.

