Emily Horgan: I Think, Therefore I Am.

The thermostat is a major source of contention in every Irish household. Even as I write this, I’m thinking to myself, “Oh shit, did I turn it down before I left this morning?”

But what if I were to tell you that you and your thermostat actually have a lot in common? Other than the obvious fact that both start the majority of your parents’ fights, there is a more subtle and interesting intersection: you are both agents.

It’s odd, I know. But you must not interpret this in a code-cracking, crime-solving, secret-agent way – I mean this in an Artificial Intelligence (AI) way. An agent in AI is considered to be anything that senses the world around it and acts on that world accordingly. So, a thermostat is an agent. And we, in essence, are agents.

It’s hard to get your head around the fact that humans could be spoken of in the same breath as a thermostat, but when it comes down to it, we both do the same things – observe, plan, react, observe, plan, react, observe, plan, react… infinitely, from our creation to our termination.

Unfortunately, we may consider ourselves above this level of detail, but in reality, we are not.

For example: the thermostat senses that the room is too hot, decides that reducing the heat would solve this, and then reduces the heat. Observe, plan, react. In the same vein, a human may see a car coming towards her. She recognises the danger in this scenario, plans her route out of the way, and reacts by jumping to safety. Observe, plan, react.
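To make that loop concrete, here is a minimal sketch of a purely reactive agent in Python. The thermostat, the Room class and the target temperature are all made up for illustration – no real device works exactly like this.

```python
# A minimal sketch of a purely reactive agent, using a made-up thermostat
# as the example. The names (Room, ReactiveThermostat, target) are
# illustrative only; they don't come from any real device's firmware.

class Room:
    def __init__(self, temperature):
        self.temperature = temperature


class ReactiveThermostat:
    def __init__(self, target):
        self.target = target  # the temperature we want the room to sit at

    def step(self, room):
        # Observe: sense the current temperature.
        observed = room.temperature
        # Plan: decide whether heating or cooling would fix things.
        if observed > self.target:
            action = "reduce heat"
        elif observed < self.target:
            action = "increase heat"
        else:
            action = "do nothing"
        # React: act on the world (here, nudge the temperature by one degree).
        if action == "reduce heat":
            room.temperature -= 1
        elif action == "increase heat":
            room.temperature += 1
        return action


room = Room(temperature=24)
agent = ReactiveThermostat(target=20)
while room.temperature != agent.target:
    print(agent.step(room), "->", room.temperature)
```

Notice there is no memory and no thinking ahead in there – it only ever reacts to whatever it senses right now, which is exactly the point.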

I can understand why you wouldn’t be convinced by these two examples. How did the human know the car was dangerous to her? If, say, a cat was coming towards her, she would react completely differently. The thermostat, on the other hand, has one job with no major consequence, no need to remember anything and no need to think ahead. It just acts. This is because a thermostat is purely reactive.

We, however, are both reactive AND deliberative agents. We think ahead (sometimes) and consider how different actions may have different outcomes, then we choose the most successful one. We do this by using things that we have learned, things we remember. We remember that a car is something that would hurt us, and we recognise the speed it is going at as something that could kill us. We consider what actions we could take, recognise the most successful outcome (not getting hit), and go with the corresponding action.
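If we wanted to sketch that extra deliberative step in code, it might look something like this toy Python example, which weighs up imagined outcomes before committing to an action. The actions and probabilities are invented for illustration, not real data.

```python
# A toy sketch of the deliberative step, assuming we have already "learned"
# roughly how safe each action tends to be. The actions and numbers below
# are made up purely for illustration.

learned_outcomes = {
    # action: rough chance of ending up safe when a car is speeding towards us
    "freeze": 0.05,
    "keep walking": 0.10,
    "jump left": 0.90,
    "jump right": 0.85,
}

def deliberate(outcomes):
    # Consider every action we could take, imagine its outcome,
    # and go with the one most likely to succeed.
    return max(outcomes, key=outcomes.get)

print(deliberate(learned_outcomes))  # -> jump left
```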

This marks us as a step above the thermostat, and we momentarily feel special again.

Another example: a few years ago, the slightly eerie Cleverbot became massively popular. Cleverbot is essentially a website where you can have a conversation with what seems to be a real person online. In reality, though, you are talking to an intelligent agent. It remembers the conversations it has with its users, and reuses the words, sentences and phrases they typed when responding to later users’ questions. It learns, it observes, it plans and it reacts.

Cleverbot ticks all the boxes that the unconvincing thermostat lacked: remembering, learning, considering and forming its reaction based on these abilities. What is even more convincing is the way in which agents like this learn in the first place. They are often seeded with a base set of conversations and then essentially teach themselves which responses work, based on how the users interacting with them react. Kind of like humans.
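Cleverbot’s actual internals have never been published, so the sketch below is only a guess at the general idea in toy Python form: remember what humans said in reply to each line, then reuse those replies on later users.

```python
# A toy sketch of a Cleverbot-style chatbot. This is NOT Cleverbot's real
# algorithm; it just stores what humans said in reply to each line and
# reuses those replies when a later user says the same thing.

from collections import defaultdict
import random


class EchoBot:
    def __init__(self):
        # message -> list of replies humans have given to that message
        self.replies = defaultdict(list)
        self.previous_line = None

    def observe(self, line):
        # Learn: remember this line as a reply to whatever came before it.
        if self.previous_line is not None:
            self.replies[self.previous_line].append(line)
        self.previous_line = line

    def respond(self, message):
        # React: reuse a reply some human once gave to this exact message.
        options = self.replies.get(message)
        return random.choice(options) if options else "Tell me more."


bot = EchoBot()
# "Train" it on a snippet of conversation, each line replying to the last.
for line in ["hello", "hi there, how are you?", "grand, thanks"]:
    bot.observe(line)

print(bot.respond("hello"))    # -> "hi there, how are you?" (learned from a human)
print(bot.respond("goodbye"))  # -> "Tell me more." (nothing learned for this yet)
```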

One factor that we all cling onto that convinces us of the sophistication of humanity is this: the majority of the time, we do the observing, planning and reacting all at once. We don’t systematically run through each stage step by step. “I would be out of the way of that vehicle within a split second,” we all think. But would we?

Not only do machines do all of their observing, planning and reacting simultaneously as well, many of them react much faster than we do; they are currently not clouded by any ‘human’ factors. They don’t have any emotions that leave them paralysed with fear. No lives flash before their eyes. They have no off days. They just step aside and get on with their mission to observe more, plan more and react more.

This raises a huge question in the worlds of (but by no means restricted to) artificial intelligence, psychology and philosophy. Although agents nowadays are so massively sophisticated and can do the same tasks as us to the same level – if not higher – should they not be allocated an awareness of how these events will affect them, as agents?

Should we be allowed to create intelligent agents that are every bit as responsive as us, if not more, minus any free will? Should a conscious agent own its body?

I’m sure none of us see our emotions or the presence of a subconscious as biological drawbacks, but when designing an agent for a specific task, they most certainly are. Many people think that AI developers are working hard to make robots that mirror humans, but this is not the case. In fact, it would largely be considered a huge waste of time. We already have an extremely efficient way of reproducing humans, so why bother?

Plus, we could do BETTER. We could do SMARTER. We could do MORE OBEDIENT.

Which is where it gets complicated.

When designing a robot to go into a coal mine to detect toxic gases, we don’t want it to turn around and refuse because it is scared. When designing a space agent, we don’t want it to decide against take-off because space seems like a lonely place. When designing a sex robot, we don’t want it to say no. In essence, they are designed to say yes to what most humans would refuse.

And the big question that has been raised is how ethical this really is. Ethical not only for the robot itself (machine ethics) but for the person creating it (roboethics). These terms may seem ridiculous, but I guarantee you they are being used on an almost daily basis in companies that are making waves in technology, artificial intelligence and robotics. If something goes wrong, who is to blame? If we give an agent a level of consciousness, will it have a right to itself or a right to say no?

Delving further into this, a robot’s entitlement to its own body is something that could become an issue. Surely if something can think for itself, it should be held responsible for its actions and the consequences that follow. The Turing Test, named after pioneering scientist and logician Alan Turing, was created decades ago, when computers were barely on the cusp of the level of intelligence we see now. Turing foresaw the issue we are facing – deciphering whether a machine is really thinking, or just pretending to think. He devised a way of testing whether a computer program’s conversation could be told apart from that of a human.

In June 2014, Eugene Goostman passed the Turing Test.

Eugene Goostman was a program, not a human. Eugene Goostman was a chatbot – not unlike Cleverbot. Eugene Goostman was perceived to be indistinguishable from a human.

Of course, there are sceptics who attempt to explain away Eugene’s successful passing grade. Some point out that the program based its persona on a teenage boy whose first language was not English; its broken English could be excused, which may have nudged the judges into believing it was thinking and not just calculating a response. But even if that was a factor, we must accept that this machine was intelligent enough to take on the character of a teenager.

Plus, what could be more human-like than copying someone else’s persona to confuse people into acceptance?

Emily Horgan is a final-year student of Computer Science at University College Cork. She has interned in companies like Image Publications and Tapadoo, an app development company based in Dublin. She tweets at @emileee_rose and shares pictures at @emleeh on Instagram.
