Artificial intelligence (AI), including machine-learning technology, can help protect against privacy intrusions and other harms. Yet the same technological advances can also expand, intensify, and encourage interference with the right to privacy through increased collection and use of personal data.
A UN Human Rights Office report on AI's effect on the right to privacy and other rights shows how corporations have rushed to incorporate AI technology into our lives while failing to exercise due diligence. The report notes many cases of people being treated unlawfully because of AI misuse, such as flawed facial recognition software and other faulty AI tools. The data these systems rely on can be faulty, discriminatory, out of date, or irrelevant.
Further investigation into AI's involvement in health, education, housing, and financial services should be undertaken. Human rights guidance is also urgently needed in areas such as biometric technologies, and for international organizations and tech companies.
Facial Recognition Technology in Violation of Peoples’ Civil Rights
Facial recognition technology takes a photo of a person's face and compares it against a database containing pictures, names, and other records to see if there is a match. Biometric data, along with other information, is used to produce precise and detailed conclusions about a person and their behavior.
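The matching step described above can be sketched as a nearest-neighbor search over face embeddings (the numeric vectors such systems derive from photos). This is an illustrative simplification, not any vendor's actual pipeline; the database entries and the similarity threshold below are hypothetical:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(probe, database, threshold=0.8):
    """Return the name of the most similar enrolled record, or None
    when no similarity clears the threshold (no confident match)."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled records (real systems use vectors with
# hundreds of dimensions produced by a neural network).
enrolled = {
    "record_a": np.array([1.0, 0.0]),
    "record_b": np.array([0.0, 1.0]),
}
print(find_match(np.array([0.9, 0.1]), enrolled))  # close to record_a
```

Note that the threshold is exactly where misidentification risk enters: set it too low and the system "matches" people who are not in the database at all, which is how faulty identifications of the kind described above can occur.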
Face recognition gives governments, companies, and individuals the ability to spy on people wherever they go, tracking faces at protests, stores, places of worship, and more. The technology could worsen systemic racism by disproportionately impacting people of color, who already experience discrimination and violations of their human rights. The Black community is at the greatest risk of being misidentified by facial recognition systems.
Bias in Artificial Intelligence and Machine Learning Developed and Used by the Health Sector
Growing concerns about equity, fairness, and bias are emerging as machine-learning models are increasingly used to make clinical and business decisions in health care. Another concern with AI in the healthcare system is the risk of flawed and controversial decision-making caused by human bias built into AI.
These human biases occur when groups are discriminated against, whether through racism, sexism, assumptions, stereotypes, or misinformation about a particular section of the population. In one pain management study, Black patients were 40%, and Hispanic patients 25%, less likely to receive pain medication in an emergency room compared with white patients.
It's important for investors to address bias in AI in the healthcare system. Creating more equitable healthcare solutions requires better structures to balance the risks and benefits that come with AI technology, along with communication among data scientists, healthcare providers, consumers, and management to address these complicated issues.
Climate Lobbying by Big Tech
The top five U.S. tech companies—Apple, Microsoft, Facebook, Amazon and Alphabet (Google)—are helping push the market for renewable energy, clean vehicles, and other technologies that aim to cut their carbon footprint. These companies' influence on climate action could become even more powerful if their lobbying investments reflected their stated concern about global climate change.
Amazon aims to be net zero by 2040 and plans to power its operations with 100% renewable energy by 2025. Facebook plans to reach net-zero emissions across its entire supply chain by 2030. In 2020, Microsoft promised to become carbon negative by 2030 and, by 2050, to remove all the carbon the company has ever emitted. Apple has stated it will become carbon neutral across its entire supply chain by 2030. Lastly, Google promises that its operations will run on 100% carbon-free energy by 2030.
Child Sexual Exploitation in Technology
Information and communication technologies (ICTs), such as mobile phones and the internet, can enable access to children and facilitate sexual crimes against them, including the production and distribution of child sexual abuse material and the promotion of child prostitution. In 2018 alone, the National Center for Missing & Exploited Children received 18.4 million referrals of child sexual abuse material reported by US tech companies.
According to one research study, social media plays a major role in the exploitation of street children in East Asia through prostitution, showing how easily contact can be initiated with potential clients, mainly women. It is not the use of social media itself that puts children at risk, but the fact that it is used as a tool to exploit them.
The first step in addressing the issue is to collect more information on how ICTs are used in the sale and exploitation of children. The second is to understand the terminology used in this sale and exploitation. Lastly, legislation is needed to help end it.
How to Reach Us
SRIC is an exempt organization as described in Section 501(c)(3) of the Internal Revenue Code; our Tax ID is 74-2846727. All contributions are tax-deductible. For more information, please contact Anna Falkenberg, PhD, Executive Director:
285 Oblate Drive
San Antonio TX 78216