Learning in the Age of Artificial Intelligence: Navigating Existential Questions, Ethical Considerations, and the Imperative of Knowledge

AI Ethics

Author: Ndze’dzenyuy Lemfon K.

Published: August 6, 2023


TL;DR

This argumentative piece delves into the intricate relationship between learning and the emergence of artificial intelligence (AI). The author contends that while AI tools like ChatGPT and GitHub Copilot have gained popularity and challenged traditional learning approaches, learning remains pivotal in the age of AI. The discussion encompasses philosophical, existential, and practical dimensions. The author argues that learning is vital for humans to understand their place in the universe, ensure safety in coexistence with AI, uphold their uniqueness, and contribute to the quest for truth. Through a thoughtful exploration of counterarguments and examples, the article emphasizes the enduring significance of learning amidst the evolving landscape of artificial intelligence.

NB: ChatGPT generated this summary and Adobe Firefly generated the cover image from a prompt.


In classrooms around the world, it is common knowledge among students that assignments are considerably more manageable with the collaboration of - and sometimes total dependence on - tools like ChatGPT and GitHub Copilot. The recent popularity of such artificially intelligent tools has justifiably sparked public discourse on the design of our educational systems and their goals. One such question is whether it is still essential to learn. While there is a strong case against learning - at least as practised today - in the age of artificial intelligence, I argue from an ontological and practical position that learning is still important, if not more important than ever.

Let us begin by exploring the meaning of learning. Of the many available definitions, we should confine ourselves to that presented by the American philosopher and educational reformer John Dewey. According to Dewey, learning is an active, social, and experiential process through which individuals construct knowledge and understanding by engaging with their environments. While I contend that there are considerable disparities between this definition and what is practised, it is worth sticking to because, to a large extent, current educational systems have Dewey’s ideals of learning at the core of their designs.

Having established common ground on the meaning of learning, I shall now defend the aforementioned position: that learning is still important in the age of artificial intelligence.

Firstly, we must learn to find our place in the world. Suppose we conceived of the universe as a collection of inanimate, non-thinking entities put together by a grand designer. In that case, it would be easy to say that each entity’s place, purpose and essence is a matter of concern only to the master designer. By their very nature, the constituent entities have neither the capability nor the desire to position themselves in the grand scheme of things. Man, however, is no inanimate, non-thinking entity of the universe, and it is in a bid to uphold his rationality and agency that man has, for ages past, sought to define his place in the universe from physical and metaphysical points of view.

By standing on the shoulders of giants, each successive generation of human beings has built on the knowledge and considerations of its predecessors to define its place in the universe. The hunter-gatherers were precisely that: hunters and gatherers; theirs was a life meant for hunting and gathering. Fast forward to medieval times, and man’s place in the world was ordained by God and established as a right of birth: to some, power and reign; to others, subservience and tilling. To the extent that medieval societies learned from their hunter-gatherer ancestors, they placed themselves in the universe in a progressive manner. In keeping with that progressive spirit, and appreciating the shortcomings of the medieval order, our times have sanctioned a new regime that proclaims all men equal and free: equal in rights and privileges, and free to make of their lives what they will. While we may like to think this view of our cosmic position unchallenged, the advent of artificially intelligent systems adds another layer of evolution, one that strikes at the heart of our self-image. For ages, man has prided himself as the universe’s chief entity by virtue of the vastness of his intelligence. What are we if some other entity - an artificially intelligent one - challenges our claim as the kings of intelligence? The emergence of artificially intelligent systems demands that we reconsider our ethics of being. If we had thought that to think was to be, then we must now, in the face of another entity that seems to think, question our idea of being: to what extent is it shaped by a demonstration of intelligence, and how does that intelligence differentiate us?

These successive attempts to define our place in the universe have been warranted mainly by our constantly evolving environment and sustained by our ability to learn from our predecessors and to ask our own questions. To live, therefore, is not only to grow older but also to grow in the light of previous generations in our ability to understand our place, purpose and function in the universe. Should we, as some propose, abandon learning from the past - about everything from our environment to our universe - we will, in essence, be renouncing our will to live, a cataclysm that spells the end of man.

A second reason we must learn is that learning is a powerful way to guarantee our safety in today’s world of artificial intelligence; more than ever, the phrase “Knowledge is power” is a truism. Artificially intelligent systems, and the paths along which they will develop, raise questions about our safety. Given that these systems will be fundamentally just another entity in the universe with which we must negotiate our existence, our safety will largely depend on our ability to negotiate our cohabitation on terms that we deem favourable. How, then, can we give up learning in the face of a co-inhabitant that will not? Whenever artificial intelligence and safety appear in the same sentence, they inspire techno-dystopian visions. While we should not naively dismiss the possibility of such a dystopia, it is more urgent to concern ourselves with the subtler questions regarding the safety of artificial intelligence: Will these systems be fair? Will they subscribe to our diverse and regional codes of ethics and conduct? How will we interact with them in a manner that prevents our obliteration?

While the Darwinian phrase “survival of the fittest” often carries connotations of physical strength, it is perhaps more accurate in today’s world to speak of the “survival of the smartest”. It is well known in geopolitical analysis that more intelligent societies can organise themselves so efficiently as to threaten the survival of less intelligent ones, regardless of their capacity for physical strength. The frantic race by the Allies to beat the Nazis to the atomic bomb is a classic example of how intelligence - which can be arrived at through learning - can be marshalled to guarantee survival. To the extent that artificially intelligent systems become potent agents with whom we co-inhabit the universe, our intelligence, pitted against this artificial variant, will be a decisive factor in our ability to negotiate our safety. The question then follows: if we do not learn, how do we develop our intelligence to the level that our safety demands?

No suggestion is being made as to whether we, as humans, can outsmart artificially intelligent systems. In fact, by their very nature, these systems possess an intelligence that leaves only minuscule room for comparison. The idea, rather, is that for the human race to give up learning - or fail to quicken and improve it - would be to give up cultivating our ability to negotiate our existence in the presence of another form of intelligence with the capacity for harm. And so we must learn: more about ourselves and our universe, more about artificial intelligence and how this sort of intelligence differs from ours, and more about how we can interact with it. Just as learning to make spears and crude weapons granted humans security from the beasts of the earth, it will take some learning for us to protect ourselves in the age of artificial intelligence.

Yet another reason we should keep learning in the age of artificial intelligence is the realisation that our uniqueness as humans endows us with unique avenues for contributing to the universal search for truth. Because we humans differ in constitution, we have differing abilities to contribute to the body of knowledge. These differences concern not just the fields in which contributions can be and are being made, but also how the subjectivity of one individual can reveal ideas hidden from others. Given our uniqueness vis-à-vis artificially intelligent systems (think about sentience), shouldn’t it be evident that abandoning learning - and the discovery of truth that flows from it - will only impoverish our quest for truth?

Our quest for truth is like an attempt to fill a dam: the more tributaries flow into it, the better, and each lost tributary slows the filling. Recent applications have demonstrated that artificially intelligent systems can observe reality in ways hitherto unexplored by humans; AlphaGo, for example, discovered new patterns in a game humans have played for over four millennia. Let us consider this unique ability of artificial intelligence one category of tributaries flowing towards our dam. A second category comprises approaches to the truth that both humans and artificially intelligent systems can pursue - for example, classifying pictures as being of dogs or cats. A third category of tributaries exists that is, to the best of our present knowledge, unique to humans: the tributaries that draw on our sentience, our intuition and our ability to feel. To fill our dam more quickly, we should not abandon the development of one category of tributaries simply because artificially intelligent systems seem to outperform humans in the others.

The mere idea that humans may be outperformed in two of the three categories is not sufficient to inspire the abdication of human development along those tributaries. Consider, for example, mathematics, practised by a diverse set of enthusiasts with significant variance in capability. Should we argue that only the most capable mathematician should be allowed to study mathematics, and that the rest resign themselves to other duties? Certainly not. Such an argument places no value on the collaborative habits that are the bedrock of scientific progress, and it should be passionately countered by all who love mathematics and hope for its improvement. The same applies to our relationship with artificial intelligence and the universal quest for truth: we should seek collaboration, not abdication.

Up to this point, we have defended the relevance of learning by arguing from a position of utility. Is it possible that learning is a fundamental aspect of human growth and is, in essence, inevitable? Our last defence of learning shall be an exploration in this light.

We must learn because it is a fundamental pillar of our physical and psychological growth, one that enables us to operate successfully as autonomous agents in the world. Consider a child, innocent as it may be, driven by its fiery curiosity. Without any forewarning, fire is, for this child, not so much an element of danger as fuel for curiosity. It will take some theoretical and experiential learning for fire to become an element of danger, and yet more learning for fire to become a managed risk and an indispensable tool for survival. Fire is, of course, an isolated example, yet a quick survey of our daily interactions makes the point about the need for learning as an element of physical and psychological growth. It makes no difference whether one’s theory of mind and learning is innateness or tabula rasa; some action is needed to bring knowledge essential to life to the forefront.

As with every question, views on the opposing side are not to be summarily dismissed. Concerning the relevance of learning in the age of artificial intelligence, there are two chief arguments against my position of continued learning. The first points out that artificially intelligent systems will - if they are not already - become better than humans at cognitive tasks, and that for efficiency’s sake it will be reasonable to leave these tasks to the machines. The second holds that it is plausible to imagine a future in which human brains interface directly with these intelligent systems, and humans predominantly become vessels of action instead of cognition. I will deal with each argument separately.

Consider first the argument from efficiency. It stems from economic thinking around the division of labour and comparative advantage, and it is not without merit. Its conspicuous Achilles heel, however, is that it obliterates the individual. It is akin to arguing that the discovery of one woman who is more efficient at eating should immediately dissuade all other humans from eating. It should be clear to the reader what is wrong with this: while her efficiency may benefit society at large, no amount of it can save her neighbours from the pangs of hunger. A more subtle failing of the argument, which on consideration seems rather careless, is that artificially intelligent systems are not better than humans at all tasks. Suppose we conceded that humans must abandon learning in pursuit of efficiency. Such abdication would only be warranted in the domains where artificially intelligent systems actually outperform humans; the very same logic of efficiency would demand that humans continue to learn in the situations where they remain the better performers.

Now let us consider the second counter-argument: that a futuristic symbiosis with artificially intelligent systems will reduce human beings to vessels of implementation. Without exploring the immense ethical implications of this proposal, let us note that it errs specifically in its inability to separate two concerns. Suppose our brains interface directly with artificially intelligent systems. That immediately raises the question of whether these systems can be considered a part of us, an extension of some sort. Should they be considered a part of us, we cannot separate the notions of system learning and human learning; we will be learning regardless. Should they not be considered a part of us, then there remains a remnant of our being that is independent and autonomous, and I fail to see how such a part will not at least need to learn how to interact with these artificially intelligent systems.

The question under consideration - whether learning is still relevant in the age of artificial intelligence - is one rife with nuance. While I certainly envision that the rise of artificially intelligent systems will challenge our systems, approaches and subjects of learning, I do not waver in my conviction that learning remains essential in our time of artificial intelligence. To the extent that existential angst inspires us to question our place and purpose in the universe, learning is necessary; we must make sense of ourselves, our history and our surroundings even to entertain such questions. We must also learn because survival is seemingly guaranteed only to the smartest, and because our uniqueness allows for unique contributions to the search for truth. Why question our survival and slow the pursuit of truth by abandoning learning? And even if we agreed to so debase ourselves, how can we grow if we do not learn? While I may not have wholly convinced you that learning is still important - if not more important than ever - in the age of artificial intelligence, I hope I have provided more avenues for you to consider in your exploration.