Rethinking Intelligence: Human Cognition, AI, and the Future of Work.

AI Ethics
tech
Author

Ndze’dzenyuy Lemfon K.

Published

May 25, 2024


TL;DR

The article explores the relationship between human cognition and artificial intelligence (AI), discussing topics such as cognitive biases, the role of instinctive and logical brains, and the potential impact of AI on society. It suggests that while humans have biases and rely heavily on instinctive decision-making, AI presents opportunities for collaboration and innovation. The article provides recommendations for individuals to prepare for a future dominated by AI, emphasizing the importance of deep work, broad learning, and creativity. Overall, it offers a balanced perspective on the benefits and challenges of AI integration into human life.


Author’s Note: This is an edited version of an article that was previously posted on a blog owned by the author in 2020.

“The real problem is not whether machines think but whether men do.” ~ B. F. Skinner.

In 1985, Hal Arkes, a psychologist at Ohio University, carried out an exciting experiment on 61 college students. He asked them to imagine that they had mistakenly purchased tickets for both a $50 and a $100 ski trip on the same weekend, and that they could only go on one trip. He then asked them to imagine that they would have more fun on the $50 trip, and to pick either the $50 or the $100 trip. Most of the students reported that they would go on the $100 trip. The higher cost seemed to matter more than the amount of fun derivable from the trip. It is essential to realise that irrespective of the trip they chose, they had already spent a total of $150 on tickets. Many of us in a similar situation would pick the $100 trip – not because we gave it any serious thought, but because our brains are somehow wired to associate higher cost with superior value. And for some reason, our very logical brains – as we often imagine them to be – never set out to either validate or invalidate such claims.

Humanity and machines, what is going on?

As progress keeps being made in computer science and artificial intelligence, questions about how this will change our lives – from employment and education to our social lives and even our intellectual development – have dominated academic and philosophical circles. I find three schools in this raging debate quite interesting. First, we have those who herald the superiority of computers and preach the gospel of the inevitability of artificial intelligence in assuring that our species continues to live happily on the planet. This group argues that although artificial intelligence will undoubtedly lead to a significant loss of jobs, it will create just as many jobs – just different types of jobs. The second group argues that technology is permeating our social systems and that we ought to be wary of it. Lastly, we have those who preach that a great union between humanity and machine is the way forward. Although this last group tends to be the least philosophical, they seem to be winning the battle in industry and practice. Across the world, computer-aided systems, many of them artificially intelligent, are being employed to improve the way humans work on a host of tasks. In these debates, one question of fundamental importance has been left out. In the words of B. F. Skinner, the real problem is not whether machines think but whether men do. Are we humans as rational as we imagine? And how does our answer to that question inform our dilemma about how we should think of and receive artificial intelligence and the new economy it will establish?

One of the first lessons one learns in Philosophy is that there are rules on how to think. Yes! If you made an argument, I could convert your sentences into predicates and match them against a logical corpus to label your argument as either smart or dumb. Learned people, especially those in academic circles, are the high priests of this logical corpus. Of these laws, one of the simplest concerns the ad hominem fallacy. In its simplest terms, it dictates that we cannot reject an argument because it is made by someone of a particular class or classification – the argument must be considered as a separate entity from the person who makes it. But if this were the case, as a friend recently pointed out to me, would there be any need for reputations when making arguments? Do you remember that friend who wins all arguments by arguing that his source is the New York Times or a study by Cambridge University? Or that very esteemed personality whose word is gospel truth? If it is wrong to reject an argument because its proposer is known for faulty arguments, then it is just as wrong to accept an argument because its proposer has made many logically consistent arguments in the past – for all you know, this could be their first wrong argument. Even the laws that are meant to help us be more logically consistent can lead us into error. Just how logical are we?
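To make the idea of a "logical corpus" concrete, here is a minimal sketch – plain Python, with a representation invented purely for illustration – of a checker that accepts an argument only if it instantiates a valid form such as modus ponens, regardless of who is making it:

```python
# Premises are plain statements ("Socrates is a man") or implications written
# as ("if", antecedent, consequent). The checker looks only at the form of the
# argument, never at the reputation of the person making it.
def follows_by_modus_ponens(premises, conclusion):
    facts = {p for p in premises if isinstance(p, str)}
    rules = [p for p in premises if isinstance(p, tuple) and p[0] == "if"]
    # Modus ponens: from P and (if P then Q), conclude Q.
    return any(antecedent in facts and consequent == conclusion
               for _, antecedent, consequent in rules)

premises = ["Socrates is a man", ("if", "Socrates is a man", "Socrates is mortal")]
print(follows_by_modus_ponens(premises, "Socrates is mortal"))   # True
print(follows_by_modus_ponens(premises, "Socrates is a genius")) # False
```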

Scientists in many fields have now shown that our decision-making process is divided into two major systems. The first system, which we shall call the instinctive brain, is fast, unconscious, automatic, highly error-prone and in charge of most of our daily decisions. The second system, which we shall call the logical brain, is slow, conscious, effortful, less error-prone and handles complex decisions. When you meet someone who looks like that high school bully and your stomach immediately churns, it is your instinctive brain at work. On the other hand, the superhero that saves you during that math test is your logical brain. While the instinctive brain is built mostly of biases and heuristics, the logical brain is built of logical rules and works primarily through deliberate thinking. How then does this relate to our earlier discussion on artificial intelligence? The questions we shall be looking at are fourfold. Firstly, how much thinking do we do? Secondly, can humans survive without our instinctive brain? If we were to strip away all biases and abandon heuristics when faced with problems, would we survive? Would such an endeavour be an optimal strategy to guarantee the survival of our species? Thirdly, is it possible to model the entire world, all our learning systems and knowledge, into a corpus similar to the laws of thinking? And can machines fully replicate our instinctive brain, now or in the future? And lastly, how should this inform our perception and reception of artificial intelligence? What opportunity lies in this for us, and how must we prepare ourselves for the second machine age?

Part I: How much thinking do we do?

We humans like to think of ourselves as being in charge, as thinking everything through before taking action. But is that really who we are? Ask a writer how he or she writes well, and they will quickly outline a process and a set of rules they stick to when writing. Easy, you may think to yourself, until you try the exact recipe to no avail. Ask a public speaker, a programmer, a diplomat, and much seems to be the same. Although we need some rules and guidelines to perform most of our daily activities, we realise very often that rules are not enough. Malcolm Gladwell is famous for having made the 10,000-hour rule a goal for most people seeking to become experts in a given domain. But what exactly lies behind the 10,000-hour rule? In its most simplistic form, the rule encourages us to practise a set of rules or guidelines for 10,000 hours with the promise of becoming experts. Speed and accuracy are essential aspects of mastery, and they are precisely what 10,000 hours will earn you. When we carry out tasks for such an extended period, we build rule-based heuristics. We unconsciously move most of the thinking behind our craft from our logical brain to our instinctive brain. We therefore gain the speed that is characteristic of the instinctive brain, while taming to no small extent its propensity for error. It seems, then, that our lives are a continual process of transferring the activity of our logical brain to our instinctive brain. And with this, one cannot say that we carry out enormous amounts of thinking – or at least, as much thinking as we fancy ourselves to do.

In his book Thinking, Fast and Slow, Nobel laureate Daniel Kahneman shows just how prone we are to using biases and heuristics in our day-to-day activities. Let us carry out a popular psychological experiment. Imagine that I present you with a bat and a ball that together cost $1.10. The bat costs $1 more than the ball. So how much does the bat cost? If you are like most people, then your instinctive brain convinced you that the bat costs $1.00 and the ball $0.10. Now, look at that again! Do you realise that your answer implies that the bat costs only $0.90 more than the ball? That was your instinctive brain deceiving your logical brain. Notice how differently you rechecked the problem to see whether you were wrong. For most tasks, our instinctive brain only gives way to our logical brain if it finds that it cannot solve the problem immediately. There is a constant war for activity between the two sections of your brain. Sadly enough, the instinctive brain is the most likely winner – after all, everything that passes to the logical brain has first to have been turned down by the instinctive brain. Kahneman points out a number of heuristics that we use in decision-making.
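For the curious, here is the arithmetic the logical brain has to do – a minimal sketch in plain Python, purely illustrative – showing both the correct answer and why the intuitive one fails:

```python
# The two constraints of the puzzle:
#   bat + ball == 1.10   (they cost $1.10 together)
#   bat - ball == 1.00   (the bat costs $1 more than the ball)
# Subtracting the equations gives 2 * ball == 0.10, so ball = $0.05 and bat = $1.05.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05

# The intuitive answer violates the second constraint:
intuitive_bat, intuitive_ball = 1.00, 0.10
print(intuitive_bat - intuitive_ball)  # 0.90, not the required 1.00
```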

Among these, the most common is anchoring. Anchoring is a cognitive bias that leads us to fix our judgement of new information on a previously presented piece of information – an anchor. It is common knowledge that first impressions matter. When we are to meet someone for the first time, we try to be at our very best to give the person a suitable anchor on which to judge our behaviour. If I asked you to dip your hand in a bucket of hot water, and later in a bucket of cold water, you are more likely to report the water in the second bucket as being colder than someone who dipped their hand in it straight away. This bias is often well employed by salespeople to boost sales. Next time a salesman starts presenting goods from the most expensive to the least, realise that he is trying to give you a higher figure as an anchor, and keep your calm. To understand cognitive biases better and study more of their types, I recommend that you spend some time reading Kahneman’s work.

As human beings, we are more prone to using our instinctive brain and its numerous heuristics and biases because it gives us a simplified version of the world. It makes the world much friendlier and does away with complexity. It should come as no great disappointment to realise that while we do possess the capacity to think, our brains are much lazier than we imagine. There is indeed an advantage in embracing artificial intelligence as we approach an age that will be dominated by “intelligent” machines.

Part II: Can humans survive without our instinctive brain?

The history of evolution is as much a history of natural selection as it is of time. For billions of years, species have competed with one another for limited natural resources – a competition that has often dictated which species lives and which dies. One attribute that is important for survival is the ability to flee danger, be it danger from predators or from other natural sources. Many anthropologists believe that humans have survived this long because of their ability to pass down information, to communicate, and because of the superior power of their logical brain compared to other species. Can we then conclude that we could survive the next billion years solely with our logical brains? If we did away with the primal instinct to escape harm that we share with bonobos and baboons, would we be just fine? The answers to this question are of key importance for two reasons. Firstly, if we can survive without our instinctive brains, then a computerised world wholly based on logical rules would be sufficient for our survival. We could make life much easier by shifting our logical tasks to machines and centring our lives on reducing the world to logical rules – the construction of a computer-driven civilisation. However, if we cannot, then our focus has to be on optimising the use of our instinctive brains while shifting our logical tasks to computers – the construction of a computer-aided civilisation.

We showed in Part I that the instinctive brain supports faster decision-making and, with proper heuristics, can make the world a more comfortable place to live. If anything, our instinctive brain makes us quicker decision-makers and does away with complexity. If most of our survival skills were transferred to our logical brain, then it would take a long and hard process of learning for each of us to gain essential survival skills. Such an arrangement would most likely reinforce the societal inequalities that already exist, leaving those who cannot afford a long, expensive and demanding education in what it takes to survive with no hope of survival. However, if our survival skillset can be codified into a public ethic and passed down as biases from generation to generation, it empowers everyone to survive on a larger scale and, in turn, protects us all as a species. It therefore appears that our instinctive brain gives us a fairer chance of survival than our logical brain, and hence is indispensable in our quest for survival as a species.

Part III: Is it possible to model the entire world, all our learning systems and knowledge, into a corpus similar to the laws of thinking?

If any field in computer science is to be looked at for inspiration on whether our learning systems can be modelled into laws, none is more suited than Natural Language Processing. Early in its development, Noam Chomsky argued that language could be described by formal grammars – systems of rules over fixed alphabets – and gave rise to the theory of formal languages. In the most fundamental sense, such a grammar sets out to define a language by establishing alphabets and the rules by which symbols from those alphabets can be combined in ways that make sense for computing. On the other side, researchers such as Frederick Jelinek and his colleagues at IBM advocated for probabilistic and statistical methods that employed learning techniques for the understanding of natural languages. Today, it is no news that the probabilistic and statistical approaches won the debate. But how does this inform our quest to understand whether a generalised formal grammar can represent the world?
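To make the rule-based side of that debate concrete, here is a minimal sketch – plain Python, with a toy vocabulary and rules invented purely for illustration – of the kind of rewrite grammar the Chomskyan tradition works with: every sentence it can produce follows from a fixed alphabet and a fixed set of rules, and nothing outside those rules can be expressed.

```python
import random

# A toy rewrite grammar: each non-terminal maps to a list of possible expansions.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["machine"], ["sentence"], ["grammar"]],
    "V":   [["produces"], ["describes"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying one of its rewrite rules."""
    if symbol not in GRAMMAR:                 # terminal: a word of the alphabet
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the machine produces a sentence"
```

The statistical approach that eventually won does the opposite: instead of hand-writing such rules, it estimates them, and their probabilities, from large amounts of data.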

Firstly, we must realise that such a system of representation would be a backward-looking rather than a forward-looking system. It would only represent what we know at a given time, without leaving enough room to accommodate future learning and its incorporation into our grammar. One could argue that Mendeleev succeeded in creating a periodic table that allowed for the discovery and integration of new elements. And indeed, there are attempts to organise data structures in a similar manner – one that accurately describes what we have while leaving room for the discovery of new and more useful data structures. But such a system would have to go through a long and demanding process of development, and the cost of failure would be immense. A system of machine intelligence that learns is therefore most appropriate, as it pins down the corpus of today without giving up the possibility of evolution.

This leads us to the obvious question of whether machines will ever be able to represent our instinctive brain. The question of whether machines will ever represent our logical brain is not considered, as it is clear to all by now that this is a question of efficiency and computational power, not of possibility. As quantum computing and other more efficient paradigms of computing are developed, such a system can hardly be impossible. The real challenge is whether our instinctive intelligence will ever be machine-codified. Will machines need heuristics and biases to learn and think, and would that be more efficient than using a formal representation?

Many reckon that such a development would be akin to the development of artificial general intelligence and that it is nearly impossible. A good number also consider it possible. I possess no expertise on this subject and cannot make a judgement. What I am firmly convinced of, however, is that the failure of machines to represent our instinctive brains today, and their superior power at representing our logical brains, leaves us with enormous opportunity.

Part IV: How should this inform our perception and reception of artificial intelligence, and what opportunity lies in this for us?

Toby Ord strongly believes that our rather spectacular survival as a species is a result of our ability to communicate and hand down intelligence from one civilisation to the next. The reason you have a car today is that you did not have to invent every single part of it. The wheel you use was invented around 3500 BC in Mesopotamia, and the engine owes much to James Watt. If anything, computers and artificial intelligence provide us with a robust and reliable way of transferring knowledge from one civilisation to the next – and not just between civilisations, but also between parallel cultures. Such parallel transfer can spark innovation in previously unimagined ways.

If you want to design and build a vehicle today, you will have to spend enormous amounts of time learning what the people before you have done with cars. That is about to change. Artificial intelligence will provide us with the ability to abstract such learning so efficiently that anyone can build a car without having to learn as much, and with fewer mistakes. It will increase the number of people able to take part in creative processes – from making music to designing cars to creating drugs – and make life better for humanity. In his book Where Good Ideas Come From, Steven Johnson advances the powerful idea of the adjacent possible.

Simply put, this idea holds that innovation is only possible when the necessary tools are available. When we have a wheel, it dramatically increases our adjacent possible, empowering us to start thinking of things like cars or bicycles. Sometimes we have no idea what innovations will arise from a single discovery; still, the power of the adjacent possible cannot be denied. James Watt had no idea that his work on the steam engine would later inspire automobiles and aeroplanes. Should we ask the Mesopotamians whether they had prior knowledge of the numerous things that could be done with their wheel? Artificial intelligence is going to bring together learning from the various stages of our civilisation, and inspire such tinkering and experimentation among so many people – of varying intellectual capabilities – that we cannot estimate just how much better the world will be. If we need to pass on more information to guarantee our survival, then this is the holy grail.

I do realise that there are potential risks of artificial intelligence, that jobs will be lost and that our own survival may be jeopardised by autonomous war machines and the possibility of a machine-led dystopia. But I am also strongly convinced that this leaves us with an enormous opportunity for growth and exploration.

CONCLUSION

In the second machine age – as Erik Brynjolfsson and Andrew McAfee call it – the world will belong to the superstars; the opportunity will be widely distributed, but those at the cutting edge will be the winners. A millisecond difference between the running times of programs written by two people may make one a millionaire and keep the other on food stamps. Employment and the reward for work will change as the world becomes more reliant on artificial intelligence. The question is: how must we prepare ourselves to be relevant in such a world?

1. We must learn to carry out deep work.

Cal Newport defines deep work as “Professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit. These efforts create new value, improve your skill, and are hard to replicate.” At the moment, most of the work we do cannot fit this description. We spend our days doing work that does not push our cognitive abilities to the limit and is easily replicated. Such an approach can in no way make us winners in a new economy that depends on artificial intelligence. We must learn to push our cognitive capabilities to the limit and do work that cannot be easily replicated.

2. Learn widely

One thing that will define our creative abilities going forward is the ability to combine ideas and tasks from many fields. You could think of it as building a wide array of heuristics from various fields. The more unique and diverse your heuristic bank, the more unique and remarkable your ideas will be, and the more reward you are likely to get from them. However, the idea is to have a T-shaped brain and not just a horizontal one. Although it is useful to be widely read and educated on a wide range of things, one must be a master in at least one domain. Having pursued one thing to its limit, the added pursuit of other things makes it easier for us to identify unexplored territory and come up with ideas that are hard to replicate.

3. Be an artist

Whether we like it or not, the world of the future belongs to the artist. This is not to say that anyone who can’t draw or paint is doomed, but rather that the artistic process will become a more critical factor in determining who succeeds in the new economy. As artificial intelligence spreads, many tasks will be abstracted to levels that demand fewer rule-followers and more creative people. We must all become artists. Whether you are writing a book, working at the hospital or translating documents, you must become an artist, carving out your niche and creating value that surprises and amazes.

However we think about it, artificial intelligence is here to stay, and it will only get better. We can fight it and eventually lose, or we can position ourselves to benefit from it the most. It will take a deep understanding of how we think, and of how our interaction with computers may alter our thinking and intellectual processes, to come up with a strategy that leaves us as the superheroes.
