Philosophy

Paradoxes of the artificial: learning from the past to predict a future whilst misinterpreting the present

Hear my words, humanity is the awkward pubescent phase between animal and machine
– Netjester AI Bot (2011)

The University of Sydney recently posed the following essay question to its students: ‘Big data’ is not just more or different information but constitutes a revolution in the production of knowledge: algorithms and ‘machine learning’ are generating masses of information that lie beyond human grasp. In this context, who should be responsible for the ethical production, curation and use of this data?

Of course, my interpretation of the question included Artificial Intelligence (AI) as the next iteration of algorithms and self-learning machines. Below is my response to that essay question.

Introduction                              

If mathematics is the language of the Universe, then existentialism is the language of mankind. Our use of numbers in applied physics and engineering has produced a reality at the intersection between man and nature, in which we use material from the earth to create a technological world. Such technological advancements have in turn shaped and defined the knowledge we have of ourselves, each other, and our collective place in the cosmos. It seems that to be a human in the world naturally constitutes a hand-in-glove relationship with technology, digital and otherwise. Our cities can be viewed as extensions of our physical body, our electronic networks such as telephones, radio and television as extensions of our nervous system, and the computer can be seen as an extension of our cognitive functioning (McLuhan 1964). With big data and cloud computing, consciousness itself has become the latest symbolic gesture for how we use technology as an avenue for collective human self-inquiry.

The claim that the use of big data, at the hands of artificial machines, can constitute a revolution in knowledge assumes that we are separate from the technologies we both create and consume. The programming of artificial intelligence (AI), and the data it subsequently learns from, is made at the hands of people who are laden with subjective biases, worldview beliefs, and perceptions of right and wrong, and this in turn dictates how AI behaves (Copeland 2015). Our conceptualisation of AI would also have us believe that there is a dualism between ‘artificial’ knowledge and our ‘natural’ knowledge, as though there is a distinction between thinker and thought, or between man and machine. With technology, we are both the marble and the sculptor.

Machine learning is therefore not an exercise in knowledge generation, but rather an inquiry into what it means to be human. Machines have neither existence nor purpose without people creating them and giving them something to work with. We as individuals, whether via our own bodies or through our internet-enabled devices, including the data sensors we place on physical objects, produce ‘little’ data, which is then treated as ‘big’ data by AI on a collective scale (Bollier and Firestone 2010). The real issue is understanding our incessant need to record ourselves online and, in turn, why and how we want to predict, control and prophesise about such recordings. Such a desire is akin to creating ourselves as an all-knowing, all-seeing God. The consequence of such a predictive reality, however, if it were ever realised, would leave us feeling somewhat lost, as so eloquently put by Alan Watts (1973):

This is perhaps what Western man would himself like to be: a person in total control of himself, analysed to the ultimate depths of his own unconscious, understood and explained to the last atom of his brain, and to this extent completely mechanized. When every last element of inwardness has become an object of knowledge, the person is, however, reduced to a rattling shell.

In this essay, I will argue that the question of who should be responsible for the ethical use of big data is in itself a paradoxical conundrum, as no appreciation for its creation and usage can come without an understanding of the human-technology relationship. Machine learning does not generate knowledge as a separate, standalone entity. Instead, AI extends mankind’s progression of technology, which in turn gifts us with knowledge about ourselves. Morality then becomes the level of magnification at which we wish to view ourselves in relation to the world and our technological creations – in this context, big data and artificial intelligence.

Humans and Technology

As humans, we have always wanted to record our actions. From Egyptian hieroglyphs and cave paintings to modern-day business and the profession of accounting – aptly named to ensure we are all ‘accountable’ for our actions – it appears that the record of what we have done has been favoured over what it is we actually do. With big data, we now have a record of our most public, private and secret thoughts – a claim Google made back in 2010, since before its search engine we never asked the types of questions we now place into its online void – and we all carry computers in our back pockets in the form of mobile phones, which track where we go and who we talk to, either verbally or online (Hillis et al. 2012). At this exact moment in time, there has never been so much knowledge up for grabs. The history of our species invisibly floats around our heads, thanks to wireless internet, and taunts us with all the things we do not yet know. However, our need to record ourselves is just one of the many practices we have carried forth throughout the age of man.

Since the emergence of modern civilisation, we have protected and perpetuated certain human practices, such as war, religion, sport, politics and business (Rutherford 2016). Technology is present in each and every practice, as our cities, houses, ability to hunt and even our clothing are all forms of technology. Although technological advancements have enabled us to change the scenery of the world, we have seldom changed the situation associated with our practices. In this regard, technology often propagates a false sense of progression even though, thanks to advances in science, we now live longer and are supposedly in our most ‘intelligent age’ (Holler et al. 2014). We evolve in one regard but remain bound to the physical world and our social practices in another. This ‘being’ in the world has always been laden with human conflict and issues of morality; long before we programmed our beliefs into artificial machines (Moor 2006). As creatures of habit, we reinforce the presuppositions of our practices each time we enact them, and this is predominantly how big data and AI are used – to record, explain and predict the outcomes of our social practices, either to exploit us or to supposedly help us and environmental causes (Hofman et al. 2017). However, in order to critically question how artificial intelligence is being used, and by whom, we have to first understand what this technology is and what decisions these machines are making.

When big data first appeared on the Gartner Hype Cycle in 2010 – an annual graph that depicts the phases of a technology’s expectations versus its actual use in society – many believed that by having more data we would be able to solve a myriad of worldly problems (Frizzo-Barker et al. 2016). Computer databases, however, were not new concepts prior to the big data hype, and for any given situation that requires a decision to be made, data choices are always infinite. It was the astronomer Carl Sagan who famously said, “If you wish to make an apple pie from scratch, you must first invent the Universe” (Johnson 2016). The data for any given decision can go back to the origins of the Big Bang. In the context of ubiquitous computing, the volume of data grows so exponentially that our digital universe, and the data mining of it, has created an information paradox. There has never before been so much digital information to take into consideration for any one decision, which means that when a decision is made by a machine, that decision has to be based on certain criteria. The outcome of the decision, however, then elicits new data which goes back into the pool of available data (Abbasi et al. 2016). The biases of machines are thus self-perpetuating and reinforce themselves.
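To make this self-reinforcing loop concrete, here is a minimal sketch in Python, assuming two hypothetical groups with identical underlying behaviour; the rates and starting counts are invented purely for illustration. A machine that directs its attention in proportion to the data it has already collected will tend to widen an initially tiny gap, because new observations are only ever generated wherever the machine chooses to look.

import random

random.seed(0)

# Both groups behave identically; the event being predicted is equally likely in each.
TRUE_RATE = {"A": 0.1, "B": 0.1}

# The machine starts with a tiny, arbitrary imbalance in its records.
observed_events = {"A": 6, "B": 5}

for step in range(10000):
    total = observed_events["A"] + observed_events["B"]
    # The machine 'decides' where to look next, in proportion to past observations.
    group = "A" if random.random() < observed_events["A"] / total else "B"
    # The outcome of that decision flows straight back into the pool of available data.
    if random.random() < TRUE_RATE[group]:
        observed_events[group] += 1

print(observed_events)  # the gap between A and B tends to grow, despite identical behaviour

The rule is neutral on its face, yet the data the machine generates for itself entrenches whatever imbalance it began with.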

Unlike more primitive computer algorithms, which are based on ‘if-then’ rules, self-learning algorithms are trained on existing data in which they recognise patterns, and they then learn to apply those patterns to new data situations, so long as they are of a similar nature (Hutson 2017). There are no ‘rules’ to follow, as the algorithm instead learns to adjust itself in response to new data. This new data, however, becomes lifeless as it records what was rather than what is currently taking place; something that is arguably uncatchable, and also infinite, in line with the fleeting nature of each passing moment (Ara et al. 2016). Nevertheless, machine learning attempts to provide a basis for data mining based on our actions and behaviours. Automatic and semi-automatic machines mine such data and allow us to outsource all or parts of our decision making in various contexts (Witten et al. 2016).
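The contrast can be sketched in a few lines of Python; the ‘score’ feature, the labelled examples and the threshold search below are all invented for illustration. The first rule is fixed by its programmer in advance, while the second derives its rule from whatever data it happens to be trained on, and would change if that data changed.

# A hand-written 'if-then' rule: the programmer fixes the threshold up front.
def rule_based(score):
    return score > 0.5

# A trained rule: the threshold is chosen to best separate the labelled examples.
def learn_threshold(examples):
    best_t, best_correct = 0.0, -1
    for t, _ in sorted(examples):
        correct = sum((score > t) == label for score, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

training_data = [(0.2, False), (0.35, False), (0.4, True), (0.6, True), (0.8, True)]
learned_t = learn_threshold(training_data)

print("learned threshold:", learned_t)        # depends entirely on the training data
print(rule_based(0.45), 0.45 > learned_t)     # the two rules can disagree on new data

Retrain the same learner on a different sample and its threshold moves, which is precisely why the quality of both the algorithm and the data behind it matters.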

For example, in the past ten years, big data at the hands of AI has been explored in relation to healthcare, government and law, geospatial purposes and biotechnology (Lu and Liu 2016). Artificial machines have been used to predict the likelihood of someone injuring themselves, which can affect insurance premiums and claims (Tixier et al. 2016). Machines have been used to predict vegetation health in countries where environmental data is scarce (Burchfield et al. 2016), as well as to both hire and fire employees in workplace contexts (McClure 2017). Such decisions are either left completely to the machines, such as the AI that determines what we see on our social media feeds, or are used as a support mechanism to enable certain people to make ‘better informed’ decisions (Davenport and Kirby 2016). In either context, we need to question two things: the quality of both the algorithm and the data used, and the role of the people who stand behind both.


A Hidden World

Unlike an artificial machine, a person can love someone but hate what they have done. A machine cannot yet fathom paradoxical dilemmas, despite the fact that it produces them. In statistics, there is what is known as ‘Simpson’s paradox’, in which a trend that appears in separate groups of data can reverse when the data from those groups is combined (Neufeld 1995). Advanced algorithms, which engage in so-called deep learning for improved decision making, are capable of producing such a paradox (Fabris and Freitas 2000). However, the AI would not be aware that it had. As a result, algorithms and their decision-making outcomes can be seen as creating a form of social control, as they are secret, opaque, and use intangible data – somewhat lifeless data that is applied to real-life situations (Mazlish 2009). Similar to humans, AI uses the past as a way to understand current contexts, and makes either a decision or a prediction about such data. The lack of transparency, however, of not knowing how and why such a decision was made by a machine, and of not knowing whether something like Simpson’s paradox took place, is somewhat swept under the rug. We tend to abide by the mantra ‘in algorithm we trust’ and forget about the logic behind algorithms and the sources of data they use.
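A small worked example, with invented counts, makes the reversal visible: option A outperforms option B inside each group, yet underperforms it the moment the two groups are pooled.

# Simpson's paradox with invented numbers: (successes, trials) per option.
group_1 = {"A": (8, 10),   "B": (70, 100)}
group_2 = {"A": (20, 100), "B": (1, 10)}

def rate(successes, trials):
    return successes / trials

# Within each group, A has the higher success rate.
print({k: rate(*v) for k, v in group_1.items()})   # A: 0.80 vs B: 0.70
print({k: rate(*v) for k, v in group_2.items()})   # A: 0.20 vs B: 0.10

# Pooled across both groups, the trend reverses.
pooled = {k: rate(group_1[k][0] + group_2[k][0],
                  group_1[k][1] + group_2[k][1]) for k in ("A", "B")}
print(pooled)   # A: 28/110 ≈ 0.25 vs B: 71/110 ≈ 0.65

A system that reports only the pooled figures, or only the per-group figures, has already made a judgement about which of two contradictory truths we are shown.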

For example, in her book Weapons of Math Destruction, mathematician Cathy O’Neil alerts us to the perils of biased and unscrutinised algorithms. In some contexts, big data is eliciting greater levels of inequality for certain areas of society. Algorithms can be seen as racist by targeting certain populations for law enforcement based on crime prediction, or can be seen as sexist by reducing the number of job adverts presented to women online compared to men (O’Neil 2016). The programmers of such algorithms, however, would likely consider themselves good and decent human beings. The bias in the programming of machines, which then learn for themselves, propagates other biases, such as the machine’s selection and avoidance of certain data in its mining searches, and what the machine perceives as poor or good quality data according to its processing criteria (Witten et al. 2016). However, the data that is selected by the machine is also biased in nature.

We tend to think of big data as information that exists somewhere ‘out there’ in the world and is therefore objective. The opposite is true. Along with how we program our machines, the data we feed into them comes from subjective human experiences, which are biased by default (Paterson 2017). Such biases are nothing more than how we have each chosen to interpret reality. Our worldly activities and opinions become programmable and therefore collectable as data sources, which we then quantify through computational and statistical analyses (Michelucci 2013). We live in a world in which real-life people are viewed as patterns of lifeless binary ones and zeros. Our lack of awareness that we are part of the big data and AI phenomenon often means we fail to question how our data is collected and used. For example, Android users would likely be oblivious to the fact that, by default, their Google Maps application tracks their movement (Gisdakis et al. 2016). However, this could easily be argued as a fault of the user and a case of ‘buyer beware’, as most of our data is given away with our consent, such as when we agree to terms and conditions upon signing up to social media platforms. To debate who should be responsible for the ethical production, curation and use of big data is therefore erroneous. We all choose to participate and abide by the somewhat abstract rule that computerised technology should reign supreme in our lives. In the information age, ignorance has become a choice.

Although our participation in creating big data does blur the line between data collection, people monitoring and outright societal surveillance, we ourselves cannot draw a line in the sand to separate these concepts, even after our data consent is knowingly given or revoked. Alarmingly, very few people know how our digital breadcrumbs are being used or which artificial ‘black boxes’ our data is being fed into (Castelvecchi 2016). This year, MIT released a report noting that programmers do not fully understand how their artificial machines work, and we now have people attempting to monitor the machines that monitor the world (Knight 2017). In a way, we are monitoring our own shadows. This unknowing means that the decisions AI makes on our behalf are often inscrutable. It also gives us our main dilemma regarding the future of AI: whether we should stop now, before we lose control of our own artificial creations.

Issues of Morality and Control

The intelligence of AI has been labelled an “existential risk” for humanity (Bostrom 2002). Machine learning will keep advancing and one day become super-intelligent. In turn, machines will make decisions based on harm or good. Another way to look at this is by posing the following question: if a super-intelligent AI can think one million times faster than the minds that built it, will it make an ethical decision to keep both its maker and the planet alive, despite the fact that it could survive without either, powered merely by the sun (Bostrom 2014)? This touches on what is known as Moravec’s paradox: we have programmed our machines to be logic-based, so that they work in a manner different from our own biological brain processes (Thorpe 2016). Without sentient awareness, AI may become so advanced and integrated into our lives that it makes the logical conclusion that humans are a plague to the planet, and its decision will be to eradicate us all. A super-intelligent machine cannot fathom basic human rights, or gauge right and wrong based on intuitive judgment.

Although such a scenario is regarded as a ‘future problem’, it does bring to the fore a philosophical interpretation of our pursuit of AI. This notion aligns with my earlier claim that all technological pursuits end up revealing knowledge about ourselves, as opposed to AI producing knowledge of its own accord. For example, it would seem too obvious to discuss immoral uses of AI, such as the rise of AI robots that are replacing people through job automation (Burton et al. 2017), or the invasive, dehumanising procedures of AI- and microchip-enabled drug treatments and bodily recordings (Rode et al. 2017). Conversely, it would seem too easy to discuss the so-called positive uses of AI, such as cancer-spotting algorithms (Waal 2017), or our newfound ability to measure, record and predict the man-made effects of global warming (Olsen 2017). We cannot argue what is ‘good’ AI without having knowledge and awareness of what we consider ‘bad’ AI, as both exist in the same one world. There is always implicit unity in all perceived states of duality, which means there can always be good in the bad, and bad in the good. This is the exact ethical dilemma we find with AI and self-driving cars. In certain scenarios, the programmer of the car’s algorithm has to choose between saving the lives of those inside the car and saving the pedestrians on the street. In either case, lives will be lost regardless of the decision made (Bonnefon et al. 2016).

Instead of looking at right and wrong uses of AI, we should explore AI at a level of magnification. From the vantage point of the moon, the earth looks peaceful and harmonious, yet from the frontline of war, the world looks hateful and chaotic. Both perceptions can exist at the same one time. The level of magnification we should be taking with AI is fundamentally the scrutiny of ourselves as human beings. In all our uses of big data and AI, we have revealed that we value one primary thing in relation to its place in our lives, and that is control. Our use of AI is symbolic of our wanting either to control our position and status in the world, or to control and predict our survival to ensure our future. Such desire for control has us grabbing and grasping at life instead of actually experiencing it. Ironically, we want to control the world with AI, but fear that we cannot control the thing currently doing the controlling. We then act surprised that our machines might one day be able to control our fate, despite the fact that this is exactly what they were programmed to do in the first place.

Second to our value of control is the value we place on knowledge. Mastery of the world and knowledge about it have, arguably, been at the fore of Western civilisation. We are insatiable in our thirst for knowledge, but that thirst has only ever revealed knowledge about ourselves. We have chosen to value knowledge in relation to how we symbolise the world as opposed to how we experience the world. We live in an age where knowledge is cast through the lens of numbers, words, and social rules. Perhaps this is why we do not like the knowledge that comes from our pursuit of AI. We hate the realisation that the control of AI might be over our deaths, as opposed to our longevity. This also means that the prediction at the core of our AI usage is illusory, as it is based only on a temporal reality. The data that AI uses to make its decisions is captured in fragmented moments and therefore does not capture the infinite data that feeds into each unfolding moment. The world is based on subjective experience – experiences that only ever unfold, are only ever revealed, and can only ever be changed, by embracing life in the here and now.

Conclusion

With our pursuit of AI, it seems that the underlying, existential knowledge we are trying to reveal to ourselves is this: our societal preference for a certain, logical type of intelligence, the value we place on recordings of life, and our desire to control its future may end up being the very cause of our own demise. Instead of clinging to life and wanting to cement our place in the universe, we perhaps need to learn to enjoy life in the constantly unfolding present moment. Our true power and knowledge about ourselves, and the world, may in fact come from relinquishing control. The world is very much a paradoxical place on purpose. If we try to control life with our artificial machines, we may end up losing our perception of reality in the process, and of life altogether. After all, when we hold onto our breath, we lose it.

*** contact me for the reference list ***

 
