University of Helsinki AI course — 50,000 people are already learning it, so hurry up and join! Part 4

III. Philosophy of AI

The very nature of the term “artificial intelligence” brings up philosophical questions about whether intelligent behavior implies or requires the existence of a mind, and to what extent consciousness is replicable as computation.

The Turing test

Alan Turing (1912-1954) was an English mathematician and logician. He is rightfully considered to be the father of computer science. Turing was fascinated by intelligence and thinking, and the possibility of simulating them by machines. Turing’s most prominent contribution to AI is his imitation game, which later became known as the Turing test.

In the test, a human interrogator interacts with two players, A and B, by exchanging written messages (in a chat). If the interrogator cannot determine which player, A or B, is a computer and which is a human, the computer is said to pass the test. The argument is that if a computer is indistinguishable from a human in a general natural language conversation, then it must have reached human-level intelligence.

What Turing meant by the test is very much similar to the aphorism by Forrest Gump: “stupid is as stupid does”. Turing’s version would be “intelligent is as intelligent says”. In other words, an entity is intelligent if it cannot be distinguished from another intelligent entity by observing its behavior. Turing simply constrained the set of behaviors to written discussion so that the interrogator can’t base her or his decision on appearances.

One problem: does being human-like mean you are intelligent?

One criticism of the Turing test as a test for intelligence is that it may actually measure whether the computer behaves like a human more than whether it is intelligent. The test has indeed been “passed” by computer programs that keep changing the subject, make plenty of spelling errors, and sometimes refuse to respond at all. A famous example is Eugene Goostman, a 13-year-old Ukrainian boy who constantly tries to avoid answering questions by making jokes and changing the subject to his pet guinea pig. Here is a short segment of an example conversation with Eugene Goostman from the blog of Scott Aaronson:

Is Eugene a computer or a person?

Interrogator: How many legs does a camel have?

Eugene: Something between 2 and 4. Maybe, three? :-))) By the way, I still don’t know your specialty - or, possibly, I’ve missed it?

Interrogator: How many legs does a millipede have?

Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.

Interrogator: No, I need to know that you’re not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?

Eugene: Something between 2 and 4. Maybe, three? :-))) Oh, what a fruitful conversation;-)

Interrogator: Do you understand why I’m asking such basic questions? Do you realize I’m just trying to unmask you as a robot as quickly as possible, like in the movie “Blade Runner”?

Eugene: ...wait

Eugene Goostman, who in reality is a computer program, fooled 10 out of 30 judges into thinking he is a real person.
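The tactic on display in this exchange can be sketched as a tiny rule-based responder. The snippet below is a hypothetical illustration (not the actual Goostman program): it never engages with the question at all, yet its canned deflections resemble the transcript above.

```python
# A toy Goostman-style evader (illustrative only, not the real program).
# It ignores the question entirely and cycles through canned deflections,
# mimicking how joking and subject-changing can derail an interrogator
# without any understanding of the question.

DEFLECTIONS = [
    "Something between 2 and 4. Maybe, three? :-)))",
    "By the way, have I told you about my pet guinea pig?",
    "Oh, what a fruitful conversation ;-)",
]

def evasive_reply(question: str, turn: int) -> str:
    # The reply depends only on the turn number, never on the question.
    return DEFLECTIONS[turn % len(DEFLECTIONS)]

for turn, question in enumerate([
    "How many legs does a camel have?",
    "How many legs does an ant have?",
]):
    print(evasive_reply(question, turn))
```

A judge who counts such replies as human-like is measuring human-likeness (jokes, typos, evasion) rather than intelligence, which is exactly the criticism above.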

The Chinese room argument

The idea that intelligence is the same as intelligent behavior has been challenged by some. The best known counter-argument is John Searle’s Chinese Room thought experiment. Searle describes an experiment where a person who doesn't know Chinese is locked in a room. Outside the room is a person who can slip notes written in Chinese inside the room through a mail slot. The person inside the room is given a big manual where she can find detailed instructions for responding to the notes she receives from the outside.

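The manual in the thought experiment can be pictured as a symbol-matching lookup table. The sketch below is a hypothetical miniature (a real rule book would have to be unimaginably large): the rule follower produces plausible replies while treating the notes as meaningless shapes.

```python
# A toy "rule book" for the Chinese Room: incoming notes are matched
# purely as character strings; the person applying the rules never
# knows what any of the symbols mean.

RULE_BOOK = {
    "你好": "你好",            # "Hello" -> "Hello"
    "你會說中文嗎": "當然會",  # "Do you speak Chinese?" -> "Of course"
}

def person_in_room(note: str) -> str:
    # Pure lookup: no parsing, no meaning, just shapes on paper.
    return RULE_BOOK.get(note, "請再說一遍")  # "Please say that again"

print(person_in_room("你會說中文嗎"))
```

From outside the room the exchange looks fluent, yet nothing in the loop understands Chinese — this is the intuition Searle’s argument builds on.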
Searle argued that even if the person outside the room gets the impression that he is in a conversation with another Chinese-speaking person, the person inside the room does not understand Chinese. Likewise, his argument continues, even if a machine behaves in an intelligent manner, for example, by passing the Turing test, it doesn’t follow that it is intelligent or that it has a “mind” in the way that a human has. The word “intelligent” can also be replaced by the word “conscious” and a similar argument can be made.

Is a self-driving car intelligent?

The Chinese Room argument goes against the notion that intelligence can be broken down into small mechanical instructions that can be automated.

A self-driving car is an example of an element of intelligence (driving a car) that can be automated. The Chinese Room argument suggests that this, however, isn’t really intelligent thinking: it just looks like it. Going back to the above discussion on “suitcase words”, the AI system in the car doesn’t see or understand its environment, and it doesn’t know how to drive safely, in the way a human being sees, understands, and knows. According to Searle this means that the intelligent behavior of the system is fundamentally different from actually being intelligent.

How much does philosophy matter in practice?

The definition of intelligence, natural or artificial, and consciousness appears to be extremely evasive and leads to apparently never-ending discourse. In intellectual company, this discussion can be quite enjoyable (in the absence of suitable company, books such as The Mind’s I by Hofstadter and Dennett can offer stimulation).

However, as John McCarthy pointed out, the philosophy of AI is “unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science.” Thus, we’ll continue investigating systems that are helpful in solving practical problems without asking too much whether they are intelligent or just behave as if they were.

Key terminology

General vs narrow AI

When reading the news, you might see the terms “general” and “narrow” AI. So what do these mean? Narrow AI refers to AI that handles one task. General AI, or Artificial General Intelligence (AGI), refers to a machine that can handle any intellectual task. All the AI methods we use today fall under narrow AI, with general AI being in the realm of science fiction. In fact, the ideal of AGI has been all but abandoned by AI researchers because of the lack of progress towards it in more than 50 years, despite all the effort. In contrast, narrow AI progresses in leaps and bounds.

Strong vs weak AI

A related dichotomy is “strong” and “weak” AI. This boils down to the above philosophical distinction between being intelligent and acting intelligently, which was emphasized by Searle. Strong AI would amount to a “mind” that is genuinely intelligent and self-conscious. Weak AI is what we actually have, namely systems that exhibit intelligent behaviors despite being “mere” computers.

Exercise 4: Definitions, definitions

Which definition of AI do you like best? How would you define AI?

Let's first scrutinize the following definitions that have been proposed earlier:

  1. "cool things that computers can't do"
  2. machines imitating intelligent human behavior
  3. autonomous and adaptive systems

Your task:

  • Do you think these are good definitions? Consider each of them in turn and try to come up with things that they get wrong - either things that you think should be counted as AI but aren't according to the definition, or vice versa. Explain your answers by a few sentences per item (so just saying that all the definitions look good or bad isn't enough).
  • Also come up with your own, improved definition that solves some of the problems that you have identified with the above candidates. Explain with a few sentences how your definition may be better than the above ones.

Please read the above instructions carefully and answer both of the items above in the text box below. Your answer will be reviewed by other users and by the instructors. Please answer in English, and check your answer before clicking 'Submit' because once submitted, you can no longer edit your answer.

After completing Chapter 1 you should be able to:

Explain autonomy and adaptivity as key concepts for explaining AI

Distinguish between realistic and unrealistic AI (science fiction vs. real life)

Express the basic philosophical problems related to AI including the implications of the Turing test and Chinese room thought experiment

