The United Nations’ human rights chief sounded the alarm on Saturday, warning that rapid advances in artificial intelligence pose a serious danger to human rights. He urged governments to put safeguards in place to prevent abuses.
More than 60 countries, including China and the United States, have called for the regulation of artificial intelligence in the military domain, with the shared goal of protecting global stability, security, and accountability against potential threats.
Rising concerns center on AI-controlled drones known as ‘slaughterbots,’ which operate without human intervention and are widely regarded as dangerous. The UN fears such systems give artificial intelligence the potential to escalate military conflicts.
Volker Turk, the UN High Commissioner for Human Rights, expressed deep concern about the military use of AI, warning that recent developments in artificial intelligence carry a high potential for harm.
Turk emphasized that human dignity, agency, and all human rights face significant risks, and made an urgent appeal to businesses and governments. He insisted that governments must swiftly establish effective safeguards to avert these risks.
Privacy Concerns and Biased Algorithms
Artificial intelligence has transformed our daily lives, revolutionizing internet searches and changing how we monitor our health. It has also given rise to innovations such as apps that can generate various types of written content quickly and easily.
At the same time, critics have raised concerns about privacy and biased algorithms. The tension between the benefits of AI and potential threats to human rights is most pronounced around privacy: vast amounts of personal data are collected, with or without our knowledge, and critics warn that we risk being profiled and having our behaviour predicted. We continuously feed machines data without knowing who will use it, for what purpose, or with what motive.
Similarly, evidence shows that people from vulnerable communities increasingly face discrimination driven by biased algorithms. Because machines depend on human input, conscious or subconscious biases can shape their outputs. AI systems should therefore be built with diversity and inclusivity in mind so that discrimination and prejudice can be detected and prevented.