But instead of developing their systems so that the associated risks remain manageable, they stoke debate about potential dangers, as if pointing out undesirable side effects could absolve them of responsibility. How cheap that trick is becomes apparent when the very people issuing warnings refuse to offer their AI systems in the EU, claiming they are threatened by overly restrictive regulation. The fact is: the EU wants to establish legal liability for damage caused by the use of algorithms. That is precisely what AI developers like Altman seem to shy away from. They prefer to leave it at cheap warnings.