
Artificial intelligence, Immanence is born: the company that teaches algorithms ethics

by admin

In the coming years, artificial intelligence will spread to almost all production sectors, but its mistakes could prove costly in terms of investments, revenue and customers. This is why Immanence was founded in Milan, one of the first companies in Europe to set itself a specific task: helping companies train their algorithms so that they do not cause harm. Immanence, founded by the Italians Diletta Huyskes and Luna Bianchi, will offer ethical, social and legal consultancy to companies and public bodies, supporting them in assessing the risks and impacts of their software and artificial intelligence systems with respect to fundamental rights and the environment.

The need for more ethical AI is deeply felt. According to a survey sponsored by DataRobot, leaders of technology companies are increasingly concerned about the potential damage of poorly trained artificial intelligence. Among those who experienced negative impacts from AI, 62% lost revenue and 61% lost customers. The percentage of respondents (350 CIOs and other IT leaders) who are very or extremely concerned about AI bias rose to 54%, up from 42% in 2019.

But how can artificial intelligence harm the company that adopts it? Errors in, and lack of control over, the algorithms underlying AI can, for example, amplify social discrimination. When this happens and becomes public knowledge, the company involved loses reputation, turnover and customers. It will therefore become essential to have algorithms that are respectful of, and compliant with, the forthcoming European regulation on artificial intelligence, expected to come into force in the coming months.


But how do you train an AI?

Here, too, the question is not easy to resolve. «Ethics, unlike machines, is not binary, nor unique and global – says Diletta Huyskes – It must be negotiated, adapted to the context, and made to evolve together with the values of society». This is why Immanence adopts a context-driven approach, as opposed to the checklist approach used by the big names in consultancy.

Diletta Huyskes is a Ph.D. researcher and technology ethicist; Luna Bianchi is a jurist, manager and intellectual property expert. Immanence therefore draws on consultants in various fields to carry out individual assessments: human rights, algorithmic fairness and explainability, privacy by design, data science, administrative law and cybersecurity.
