Twitter founder Jack Dorsey announced the launch of the Responsible Machine Learning initiative, which aims to analyze how the algorithms used by the social network can facilitate online prejudice. The intent is to understand how Twitter's machine learning models can, even unintentionally, produce algorithmic bias with a negative impact on users. The ultimate goal is to improve the models and avoid racial, gender, or political bias.
One of the initiative's first steps is to understand how and when Twitter's algorithm favors tweets with photos of white-skinned people over others in users' timelines. This suspicion arose long ago, starting with the social network's automatically cropped photo previews, which tended to exclude Black people. Analysts will also examine how the feed is presented to people of different ethnicities, as well as how political content is recommended based on users' origin.
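The kind of audit described here, checking whether an algorithm selects people from one demographic group more often than another, can be sketched as a simple selection-rate comparison. Everything below is hypothetical and purely illustrative: the data, group labels, and function names are not from Twitter's actual analysis.

```python
# Minimal sketch of a demographic-parity check for an auto-cropping model.
# All data and names are hypothetical, for illustration only.
from collections import Counter

def selection_rates(records):
    """records: list of (group, kept_in_crop) pairs.
    Returns the per-group rate at which faces were kept in the auto-crop."""
    totals, kept = Counter(), Counter()
    for group, was_kept in records:
        totals[group] += 1
        kept[group] += int(was_kept)
    return {g: kept[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rate between any two groups.
    A gap near zero suggests the crop treats the groups similarly."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical audit sample: (demographic group, face kept in preview crop)
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
rates = selection_rates(data)
print(rates)             # {'A': 0.666..., 'B': 0.333...}
print(parity_gap(rates)) # 0.333...
```

A real audit would of course need far larger samples, careful group labeling, and statistical significance testing, but the underlying comparison is this simple.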
“Leading this work is the ML Ethics, Transparency and Accountability (META) team,” Twitter explained, “a dedicated group of engineers, researchers and data scientists who collaborate across the company to assess unintentional downstream or current harm in algorithms, so as to determine which priorities to address first.” Anticipating the initiative, in February Dorsey spoke about the Twitter of the future, in which users could choose for themselves which content to prioritize, managing their feed in a personalized way.