The Philosophy of Information – Københavns Universitet



27 November 2014

The Philosophy of Information

Interview

Visiting professor Jack Copeland from the University of Canterbury in New Zealand met Insight IVA for a talk about the very origins of computing, the future of domestic robots, and the perils of using deductive logic in connection with Big Data.

The father of the modern computer

One of my projects while I'm at IVA is to complete a book called The Turing Guide. Turing was the father of the modern computer, as well as the first pioneer of artificial intelligence—at a time when Big Data was around 64 K. The purpose of our collaboratively written guidebook is to explain Turing's work in a way that everyone can understand, and it consists of 40 or so highly accessible articles that take readers through his life and work. I'm hoping that school kids as well as mums and dads and university professors will find the book useful. It's an exciting and unique project and I'm enjoying watching it come to life. Oxford University Press will publish the guidebook next year.

Getting academic ideas across to the general public is a process that fascinates me. A couple of TV documentaries have been based on my books, both of which are currently showing all round the world. One is a movie about Turing commissioned by the Arte TV channel (made by Les Films D'Ici in Paris); the other is a BBC film titled 'Code-Breakers' and is about Colossus, the secret electronic computer used at Bletchley Park during the Second World War.

The life of Turing is the subject of a Hollywood movie, The Imitation Game, which goes on general release this month. It tells how Turing cracked the German Enigma code and helped win World War II. Colossus, the first large-scale electronic computer, was built for code breaking in 1943-1944, and the story of the birth of the modern computer is full of drama and secrecy. Colossus and the work done with it remained classified for many decades after the war.

Is the entire universe a computer?

The philosophy of information is at the core of what I do. Information is fascinating stuff and it raises all sorts of philosophical questions—conceptual questions concerning the very nature of information and its embodiment. Internationally, today's students seem extremely interested in the philosophy of information, I guess because information so underpins modern life. Examples of questions in the philosophy of information? Well, there are many—these are just a few of my personal favorites. "Is the human brain a computer—are we cyber beings?" "Is the entire universe one gigantic computer?" "Could you survive death by being uploaded to a computer? Could you take the form of a disembodied virtual being—a cloud of pure information—and live immortally on the Internet?"

The idea of uploading a person's mind was pure science fiction 20 years ago, but now people talk about it as if it could actually be done—not right now, of course, but it seems to be out there on the technological horizon. However, there are in fact conceptual problems with this idea, as well as the purely technological problems. What would happen if someone made a copy of the virtual you? Would there then be two yous—both the same person, since both are you, and yet doing different things at the same time, in different parts of the Internet? This seems contradictory, just like saying that you could be in Copenhagen and away from Copenhagen at one and the same time. What leads to a contradiction cannot be possible, so perhaps these simple philosophical considerations are enough to show that you cannot exist as a cloud of pure information—because if you could, then you could be copied, and then wham, we are straight into contradictions.

"I've just started teaching a course in the philosophy of information at IVA, and I'm looking forward to many interesting discussions with IVA students. Do please join us in the course next week if you are interested!"

Jack Copeland

Let me give some more examples of questions in the philosophy of information science. Is there really a distinction between digital information and analog information? Is counting on your fingers digital or analog? What counts as information processing, and why? Is a book or a wall or a carrot an information processor? What about a virus? How powerful can information-processors become as technology advances? In mathematics, there is something called the "Turing barrier", which is said to represent the theoretical limit of information processing. The Turing barrier is like the speed of light for information—it is the upper bound. That's the common view, anyway: you simply can't break the Turing barrier, just as you can't travel faster than the speed that light travels in a vacuum. But not everyone agrees with this traditional view. Why can't the Turing barrier be broken? Is it possible to build hypercomputers—information-processing machines that can compute more than the so-called maximal information processor, the universal Turing machine?
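For readers who have never seen one, a Turing machine—the device whose power defines the Turing barrier—is a surprisingly simple object. Here is a minimal sketch in Python (a toy interpreter, not Turing's own formulation; the `flip_bits` program and all names are illustrative assumptions):

```python
def run_turing_machine(program, tape, state="start", blank="_"):
    """Run a deterministic Turing machine until it reaches the 'halt' state.

    program maps (state, symbol) -> (new_state, symbol_to_write, head_move),
    where head_move is -1 (left), +1 (right), or 0 (stay).
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        # Read the symbol under the head; off the end of the tape is blank.
        symbol = tape[head] if 0 <= head < len(tape) else blank
        state, write, move = program[(state, symbol)]
        if head == len(tape):          # extend the tape on demand
            tape.append(blank)
        tape[head] = write             # write, then move the head
        head += move
    return "".join(tape).rstrip(blank)

# Toy program: scan right, flipping 0 <-> 1, and halt at the first blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_bits, "10110"))  # -> 01001
```

A universal Turing machine is just a program table like `flip_bits` that interprets other program tables written on its tape; the orthodox view is that nothing physically buildable can compute more than such a machine.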

Big Data also raises philosophical problems. Once you have a huge quantity of data, errors in the data will be practically inevitable. It is often not appreciated that it's a theorem of standard deductive logic that if you have a contradiction in your data, then anything and everything can be deduced. Typically today's computers are programmed in accordance with standard deductive logic. So suppose, for example, that a big European surveillance database receives a couple of contradictory weather reports from different sources, and then someone from the military queries the database: "Has Russia launched a nuclear missile?" The database would answer "Yes". It would answer "Yes" to any question, simply because it contains a contradiction. Just one little contradiction is enough. This phenomenon is called "explosion under contradiction". How can explosion under contradiction be avoided? That's one of the philosophical questions around Big Data. It isn't an easy question.
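The theorem behind this is ex falso quodlibet: from P and not-P, standard logic derives any Q (infer "P or Q" by disjunction introduction, then Q by disjunctive syllogism). A toy sketch of how a naively classical query engine "explodes"—not a real theorem prover, and all names here are illustrative assumptions:

```python
# Knowledge base containing the two contradictory reports.
knowledge_base = {"raining in Moscow", "not raining in Moscow"}

def derive(query, kb):
    """Classical-logic toy: is `query` derivable from `kb`?"""
    for fact in kb:
        if f"not {fact}" in kb:
            # Contradiction found. Classically: from `fact`, infer
            # (fact OR query); from (fact OR query) and (not fact),
            # infer query. So EVERY query is derivable.
            return True
    return query in kb

print(derive("Russia has launched a missile", knowledge_base))  # -> True
```

Paraconsistent logics are one proposed response: they weaken exactly the inference steps used above, so that a local contradiction no longer licenses arbitrary conclusions.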

The robots are coming

In the early days, artificial intelligence researchers dreamed of building disembodied human-level artificial intelligences—artificial thinkers that inhabited big mainframe computers. Today's researchers are more interested in building embodied artificial intelligences, such as intelligent robots to work in environments too harsh for humans, like the seabed, or inside a nuclear reactor, or in space. Domestic robots are definitely on the horizon too. Who knows, in a couple of decades people might be buying domestic helpers from Magasin on Kongens Nytorv. I already purchased a robot vacuum cleaner (not from Magasin) but it spends most of its time hiding under the sofa. There's certainly room for product improvement.

The prospect of owning a domestic robot raises philosophical questions. Would your domestic robot have free will—could it freely choose what it is and isn't going to do? For example, suppose you told your robot to poison your neighbor's noisy dog. Could it choose not to obey, refuse to kill the dog out of a sense of morality? This leads one on to the even deeper question of what free will is. I myself believe that a robot could have free will.

Another question is: how could you possibly tell whether your robot really understands what you say to it? Does it chat away mechanically, giving the illusion of understanding what you say to it, when in fact it is just algorithmically producing the right sounds without any actual understanding at all? Turing had some ideas about this, and he proposed what is now called the Turing test. If a robot passes the Turing test then it genuinely understands. That's Turing's argument anyway. There's a lot of hostility towards the Turing test nowadays; people believe it’s a misguided test. But I think Turing was absolutely right and the objections that people raise to his test are in fact based on misunderstandings and misconceptions.

Are you engaged in other research projects?

My research is mostly interdisciplinary—interdisciplinarity has always been a strong driving force for me. Over the next few weeks at IVA I will be completing four research projects. One concerns the history of computer-generated music, a research project that I have been involved in with a New Zealand composer of electronic music. Actually, it was my work on Turing that got me into this: Turing was the first to discover that a computer can play musical notes. Another project is the early history of computer graphics. I don't think anyone in the world apart from me and my collaborator in Los Angeles knows how or when or where the first graphics were produced.

I am also writing a paper about the history of computing in Germany and Switzerland. The Swiss had the world’s first commercial computing center, at ETH in Zürich (where I was a visiting professor in philosophy and computer science last winter). Not many people know this. The traditional story is that the first computing centers developed in the US around large mainframe machines marketed by companies like IBM. But in fact Europe was years ahead.

My fourth project is an article about some technical work in logico-mathematical semantics that was done by Arthur Prior. Prior, who died in the Sixties, was the greatest philosopher New Zealand has ever produced, and he was the foundation professor of philosophy at my university in Christchurch, New Zealand. There is a dynamic research cluster for Prior Studies in Denmark, as well as a virtual archive of Prior's papers. Earlier this year the Danish cluster, headed by Per Hasle, organized the Prior Centenary Conference at Balliol College, part of Oxford University. It's a strange irony that Prior is now better known and admired in Denmark than he is in New Zealand.

Read more about Jack Copeland's course here: http://kurser.ku.dk/course/hiva02017u/2014-2015