Last week, we looked at how the AI Act (AIA) sets out rules for AI systems according to their classification. This article turns to general-purpose AI models (GPAI). The Regulation lays down distinct obligations for GPAI models and for those deemed to pose systemic risks, and these obligations apply even when the models are integrated into, or form part of, an AI system. It is important to note that the obligations on providers of GPAI models take effect once those models are placed on the market.

The obligations envisaged for models should not apply where a model is used purely for internal processes that are not essential for providing a product or service to third parties and where the rights of natural persons are not affected. At the same time, given their potential for significant adverse effects, general-purpose AI models posing systemic risks must always comply with the relevant obligations under this Regulation. GPAI systems are defined as systems based on a general-purpose AI model that have the capability to serve a variety of purposes, both for direct use and for integration into other AI systems (Article 3.44e). Where a general-purpose AI model is integrated into, or becomes a component of, an AI system, that system should be considered a general-purpose AI system if, as a result of this integration, it can fulfil a variety of functions. Large generative AI models are a typical example of general-purpose AI models.

The AIA distinguishes between GPAI models and GPAI models posing a systemic risk. A systemic risk includes, but is not limited to, potential negative effects such as major accidents, disruptions of critical sectors, threats to public health and safety, harm to democratic processes, and the dissemination of illegal or discriminatory content. A GPAI model is deemed to carry systemic risk if it either: (a) has high-impact capabilities, assessed on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or (b) is determined by the Commission, either ex officio or following a qualified alert from the scientific panel, to have capabilities or an impact equivalent to those in point (a).

Chapter 2 sets out the obligations that GPAI models must meet. GPAI models posing systemic risks must comply with both the obligations in Chapter 2 and those in Chapter 3. In addition, GPAI models must generally also comply with the transparency requirements in Title IV, such as drawing up and making publicly available a sufficiently detailed summary of the content used to train the general-purpose model.

Unveiling the AI Act - written by Maria Mot