Trusting the Machines: Technology and the Quest for a Just World

AI Ethics, tech

Author: Ndze’dzenyuy Lemfon K.

Published: May 25, 2024


TL;DR

This essay explores the ethical and societal implications of technology, focusing on software and algorithms. It argues that blind reliance on technology leads to serious ethical and social harms, and it advocates a balanced approach to adoption: ethical education for developers and basic technological literacy for the public, to ensure transparency, fairness, and trust.


Author’s Note: This is an edited version of an article previously posted on a blog owned by the author in 2020.

“If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner.” ~ Omar Bradley (General, US Army)

Following the tragic death of George Floyd, the technological repercussions have been significant, yet often overshadowed by the political aftermath. Amazon and IBM, which had supplied police departments with facial recognition software, retracted their support, with IBM exiting the business entirely. This move sparked a discussion led by Google, which had previously distanced itself from such technology on ethical grounds.

Many studies have demonstrated that facial recognition software significantly underperforms on people of colour and ethnic minorities. In practice, this means that police officers relying on such software are far more likely to misidentify Black people as criminals. While our technology-is-superior complex would have us believe that adopting more technology – even in police departments – is better and fairer, there are raging debates about the downsides of deploying such systems when they stand in the way of justice and fairness.
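
To make “underperform” concrete: auditors typically compare error rates – for instance the false match rate – across demographic groups. Below is a minimal sketch of that comparison; the function and the per-group numbers are invented for illustration and stand in for the large labelled datasets real audits use.

```python
# A minimal sketch of how demographic disparity in a face-matching
# system can be quantified. The numbers are invented for illustration;
# real audits use large labelled datasets.

def false_match_rate(predictions, labels):
    """Fraction of non-matching pairs the system wrongly declares a match."""
    non_matches = [(p, l) for p, l in zip(predictions, labels) if l == 0]
    if not non_matches:
        return 0.0
    return sum(p for p, _ in non_matches) / len(non_matches)

# Hypothetical per-group results: prediction 1 = "system said match",
# label 1 = genuinely the same person.
results = {
    "group_a": ([0, 0, 1, 0, 1], [0, 0, 1, 0, 0]),  # 1 false match in 4 non-matches
    "group_b": ([1, 0, 1, 1, 1], [0, 0, 1, 0, 1]),  # 2 false matches in 3 non-matches
}

for group, (preds, labels) in results.items():
    print(group, false_match_rate(preds, labels))
# A large gap between groups is exactly the kind of disparity
# the studies cited above reported.
```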

These debates draw our attention to an often-ignored question: what exactly is our moral and ethical relationship with algorithms, software, and technology, and how will they shape, and be shaped by, the future?

From your Netflix account to your Facebook timeline, suggestion and search algorithms lord over what you see and interact with. To keep us coming back, most social media feeds are curated to highlight content we are likely to enjoy or agree with, strengthening our biases and reinforcing what is often a narrow and incomplete view of the world. Instead of becoming an avenue for cross-border and cross-cultural interaction, social media becomes an opportunity to gang up for war against people we disagree with.
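
The mechanism behind such curation is easy to sketch. Assuming a toy model in which a user and each post are reduced to “taste vectors”, the following hedged illustration ranks a feed by agreement with the user’s existing tastes; the vectors, post names, and scoring rule are all invented for the example.

```python
# A toy sketch of agreement-based feed ranking. Users and posts are
# reduced to "taste vectors"; the feed surfaces whatever is closest
# to what the user already likes. All names and numbers are invented.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

user_tastes = [0.9, 0.1, -0.8]   # the user's inferred leanings on three topics

posts = {
    "post_agreeing":    [0.8, 0.0, -0.9],
    "post_neutral":     [0.1, 0.2,  0.0],
    "post_challenging": [-0.7, 0.1, 0.9],
}

# Rank posts by similarity to the user's existing tastes: the resulting
# feed keeps showing the user more of what they already believe.
ranked = sorted(posts, key=lambda p: dot(user_tastes, posts[p]), reverse=True)
print(ranked)  # ['post_agreeing', 'post_neutral', 'post_challenging']
```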

On a broader scale, it is not only algorithms we trust blindly, but technology as a whole. It is well known that if you would win debates against your peers, you must be the more adept googler. We all have that friend who makes a ludicrous claim, googles a website run by I-do-not-know-who, and in a voice more poetic than Peter Drury reads a supporting article to us line by line. Even knowing this, we become more receptive to our friend’s claim because – as they say – “We found it on the net”, forgetting that we could find countless websites refuting any claim.

Our tendency to blindly trust technology can lead to what I term ‘extrapolation of trust.’ We assume that because technology powers reliable tools like calculators, it must be equally infallible elsewhere – a fallacy I call ‘Pro Hominem’ (the mistake of embracing an argument by virtue of its proponent’s character). In deferring, as we so often do, to technology, we commit the Pro Hominem fallacy more often than we realise.

This tendency toward universal, unquestioned acceptance of the moral superiority of technology – and, for the purposes of this essay, of algorithms – is one of the greatest dangers of our time. Left unattended, it could create a new stratification of society, with scientists and algorithmists as the bourgeoisie and the rest as the proletariat. What happens when mastery of the abstract processes that govern the societal order is left to a select group of people (the builders)? Even if one argued that the moral uprightness of such persons makes the worry irrelevant, we would be turning a blind eye to the immense political power that ends up in the hands of a few undemocratically chosen persons.

Today, our society can boast, perhaps more than many before it, of significantly high levels of political and civil participation: a sizeable fraction of the population understands and takes part in enforcing the laws and processes that govern it. There is a real risk that if we are not intentional about technology adoption, many of these very welcome transparencies will be shrouded in secrecy and delegated to an elite class of clairvoyants.

Consider, for example, that in most of the developed world, algorithms make the creditworthiness decisions that are key to accessing capital. “Sorry, the computer says no” is all a confident banker tells a bewildered client. The sad reality, however, is that a considerable number of such bankers do not understand how these algorithms work and have never cared to. Reliance on these systems robs the bankers of the intellectual freedom to criticise such decisions and of the agency to act against them. It is often sufficient that management ordered the use of the technology.
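
To see how “the computer says no” leaves the operator nothing to reason with, consider this sketch of an opaque scoring model that emits only a bare yes/no flag. The weights, threshold, and applicant fields are all hypothetical.

```python
# A sketch of an opaque credit decision: the operator only ever sees
# the final flag, never the weights or threshold that produced it.
# Weights, threshold, and applicant fields are hypothetical.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
THRESHOLD = 0.5

def computer_says(applicant: dict) -> str:
    score = sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return "yes" if score >= THRESHOLD else "no"   # no reasons attached

applicant = {"income": 0.9, "debt_ratio": 0.6, "years_employed": 0.5}
print(computer_says(applicant))  # "no" -- and that is all the banker can say
```

Nothing in such a deployment forces the system to surface its reasons; a design that returned the contributing factors alongside the flag would at least give the banker something to question.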

Now let us imagine a more personal scenario. Imagine that you were in good health and visited the hospital only for a dental procedure. A new law requires all patients to complete a computer-aided health scan before accessing any form of health care at hospitals. After completion, the operator announces that you have been diagnosed with cancer and should commence treatment immediately. Most people I have shared this thought experiment with agree that they would challenge the diagnosis and seek further checks.
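
That instinct is statistically sound. A quick calculation with invented numbers shows why: when a condition is rare, even a fairly accurate test returns mostly false alarms, so a surprise positive deserves scrutiny.

```python
# Why challenging a surprise diagnosis is rational: Bayes' rule with
# invented numbers. Assume the scan is 95% sensitive, 95% specific,
# and the cancer it screens for affects 1 in 1,000 walk-in patients.

prevalence = 0.001      # P(disease)
sensitivity = 0.95      # P(positive | disease)
specificity = 0.95      # P(negative | no disease)

p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# P(disease | positive) by Bayes' rule
posterior = sensitivity * prevalence / p_positive
print(f"{posterior:.1%}")  # about 1.9% -- the positive is most likely a false alarm
```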

I think that the response to the second scenario should be our general response to the integration of technology into our social fabric: a sort of cynical collaboration that appreciates the fallibility of computer agents while seeking to leverage their unique intelligence and accuracy to make life easier.

It is also important that, as computers and algorithms take over much of society’s crucial decision-making, we are proactive in making them fairer and in engendering the general public’s trust. Much as it is considered good civic practice to have a basic understanding of the law and ethics, it should be considered good civic practice for citizens to have a basic understanding of how algorithms work and of the complexities that surround their integration into the social fabric.

For the people who build these systems and algorithms in particular, a basic understanding of ethics and a willingness to contemplate implications are critical. Computer scientists and algorithm designers must understand how their algorithms – which may not be understood by most people – relate to the law and to societal procedures. They must understand the ethical implications of their work and be ready to take responsibility for them. There have been numerous movements of this kind in machine learning and artificial intelligence. One interesting example is OpenAI, which strongly believes that the “good guys” should build the systems on which our world relies before the “bad guys” do.

There is a broader need for such movements to transcend every field of computer science and algorithm design. The deterministic nature of most algorithms deployed in our society is perhaps the best argument that the only way out is more intelligent design of these algorithms, coupled with more intelligent use. Ensuring that builders are ethically educated could lead to more intelligent design, and ensuring a basic technological education for users could lead to more intelligent use. In the same manner, it would be beneficial for technology users to have a grasp of the law and ethics, as this would encourage a broader societal discussion around our use of technology and ensure a more democratic and fair approach to our acceptance of technology.

All this will be pointless if technology meant for the public interest is developed in dark rooms. We cannot claim, on sufficient grounds, that the scientists at IBM and Amazon who developed facial recognition software were unethical, or that there were no ethical citizens around to point out its ramifications. What we can say confidently, however, is that the likelihood of such faulty software being adopted by the police, in the public interest, would have been far lower had there been public and open debate around its development. As a society, we need more transparency in the process of delegating crucial decision-making to computers if we are to inspire collective trust.

While some argue that ethical considerations hinder innovation, it is vital to view technology as part of a broader system. Progress should not come at the expense of our social fabric. Intelligent design and adoption of technology must take in its ethical and social dimensions, so that we end up with algorithms we can trust. We can, and must, do that work.
