More on French and Belgian GDPR guidance on AI training
In addition to the Belgian Data Protection Authority decision I criticised earlier this week, it appears that the French DPA has issued similar guidance on the use of personal data to train AI models. My detailed analysis below shows that, in relation to purpose-specific AI systems, this guidance makes no sense: the training of such a system cannot be separated from the system's ultimate purpose. This has a major bearing on the issue of compatibility.
As a matter of principle and law, the creation and training of AI models/profiles for a specific purpose (be that direct marketing or health care) must be based on the legal basis relied on for that ultimate purpose.
The fact that the creation and training of the models/profiles is a “first phase” in a two-phase process (with the deployment of the models/profiles forming the “second phase”) does not alter that.
However, as an exception to this, the GDPR allows the processing to be authorised by law or by means of an authorisation issued by a DPA under the relevant law (as in France), provided the law or the DPA authorisation lays down appropriate safeguards. That is the only qualification to the above principle that I accept.
The creation and training of General-Purpose AI (GPAI) systems and models, which by definition are developed not for any pre-specified purpose but for use for a wide range of purposes, arguably breach the purpose-specification principle set out in Article 5(1)(b) GDPR. In my opinion, they are best addressed through (strict) regulation by law or, as in France, through DPA authorisations laying down appropriate safeguards.
And any deployment of a GPAI system for a specific purpose should still be subject to a data protection impact assessment (DPIA).