It is a timid attempt to attenuate, rather than fully reverse, the burden of proof resting on the plaintiff. Nor can one ignore the opposing interests at stake: on the one hand, consumer safety; on the other, the innovator, whose inventive drive would be paralyzed by a regime of indiscriminate strict liability.
Going forward, a compulsory insurance scheme for high-risk AI systems could be envisaged, accompanied by a risk fund for victims of AI: this would guarantee the consumer effective compensation, at the very least, without paralyzing inventive capacity.
The virtuous effects of sound adjudication
It is estimated that the two measures proposed by the directive could generate growth in the AI market of between 500 million and over one billion euros because, by guaranteeing access to an efficient judicial system, they would increase citizens' trust in these AI systems. The problem is that the time needed to legislate is too long.
At present we have only a proposal for a directive: once approved, it will take a further two years to transpose it. Such timescales are irreconcilable with the pace of innovation. It could instead be useful to resort, in parallel, to a soft-law approach: self-regulation codes, for example, which, maturing spontaneously within the innovation ecosystem (already cross-border by nature), could codify consistent and targeted best practices in real time, including the contractual allocation of liability among the various actors in the value chain.
Obviously the limit would lie in the contractual nature of such regulation, but this would also be its strength. Beyond its inherent leanness and concreteness, self-regulation has the unsurpassed advantage of emerging bottom-up, where the needs are actually felt, and without political mediation.