雙語閱讀|谷歌在人工智慧方面遭遇更多抨擊

DISCOVERING and harnessing fire unlocked more nutrition from food, feeding the bigger brains and bodies that are the hallmarks of modern humans. Google’s chief executive, Sundar Pichai, thinks his company’s development of artificial intelligence trumps that. “AI is one of the most important things that humanity is working on,” he told an event in California earlier this year. “It’s more profound than, I don’t know, electricity or fire.”

火的發現和利用讓人們從食物中獲取更多營養,為更大的大腦和身軀提供養分,而這正是現代人的標誌。谷歌首席執行官桑達爾·皮查伊(Sundar Pichai)認為,谷歌在人工智能方面的發展超越了這一點。今年早些時候,他在加利福尼亞的一場活動上說:“人工智能是人類正在從事的最重要的事情之一,它或許比電和火更加意義深遠。”

Hyperbolic analogies aside, Google’s AI techniques are becoming more powerful and more important to its business. But its use of AI is also generating controversy, both among its employees and the wider AI community.

拋開誇張的類比不談,谷歌的人工智能技術正變得越來越強大,對其業務也越來越重要。但谷歌對人工智能的運用也在其員工和更廣泛的人工智能界中引發了爭議。

One recent clash has centred on Google’s work with America’s Department of Defence (DoD). Under a contract signed in 2017 with the DoD, Google offers AI services, namely computer vision, to analyse military images. This might well improve the accuracy of strikes by military drones. Over the past month or so thousands of Google employees, including Jeff Dean, the firm’s AI chief, have signed a petition protesting against the work; at least 12 have resigned. On June 1st the boss of its cloud business, Diane Greene, conceded to those employees that the firm would not renew the contract when it expires next year.

近期的一次衝突聚焦於谷歌與美國國防部(DoD)的合作。根據2017年與國防部簽訂的一份合同,谷歌向後者提供人工智能服務,即用於分析軍事圖像的計算機視覺技術。這項技術很可能會提升軍用無人機的打擊精度。在過去一個多月裡,包括谷歌人工智能負責人Jeff Dean在內的數千名員工簽署了反對這項工作的請願書,至少12人辭職。6月1日,谷歌雲業務負責人Diane Greene向這些員工讓步,承諾明年合同到期後不再續簽。

The tech giant also published a set of seven principles which it promises will guide its use of AI. These included statements that the technology should be “socially beneficial” and “built and tested for safety”. More interesting still was what Google said it would not do. It would “proceed only where we believe that the benefits substantially outweigh the risks,” it stated. It eschewed the future supply of AI services to power smart weapons or norm-violating surveillance techniques. It would, though, keep working with the armed forces in other capacities.

谷歌還發布了七條承諾用以指導其人工智能應用的原則,其中包括這項技術應當“有益於社會”,並“以安全為前提進行構建和測試”。更有趣的是谷歌承諾不做的事。谷歌表示,“只有在我們認為收益遠大於風險時才會推進”。它不會在未來為智能武器或違反準則的監控技術提供人工智能服務,不過將繼續在其他方面與軍隊合作。

Google’s retreat comes partly because its AI talent hails overwhelmingly from the computer-science departments of American universities, notes Jeremy Howard, founder of Fast.ai, an AI research institute. Many bring liberal, anti-war views from academia with them, which can put them in direct opposition to the firm in some areas. Since AI talent is scarce, the firm has to pay heed to the principles of its boffins, at least to some extent.

人工智能研究機構Fast.ai的創始人Jeremy Howard指出,谷歌讓步的部分原因在於其人工智能人才絕大多數來自美國高校的計算機科學系。許多人從學術界帶來了自由派、反戰的觀點,這在某些領域可能使他們與公司直接對立。由於人工智能人才稀缺,谷歌不得不在一定程度上順應這些研究人員的原則。

Military work is not the only sticking-point for Google’s use of AI. On June 7th a batch of patent applications made by DeepMind, a London-based sister company, was made public. The reaction was swift. Many warned that the patents would have a chilling effect on other innovators in the field. The patents have not yet been granted (indeed, they may not be), but the request flies in the face of the AI community’s accepted norms of openness and tech-sharing, says Miles Brundage, who studies AI policy at the University of Oxford. The standard defence offered on behalf of Google is that it has no history of patent abuse, and that its patent strategy is chiefly defensive, meant to protect itself from future patent trolls.

Whatever Google’s intent, there are signs that the homogeneity of the AI community may lessen in future. New paths are being created to join the AI elite, other than a PhD in computer science. Hopefuls can take vocational courses offered by firms such as Udacity, an online-education firm; the tech giants also offer residencies to teach AI techniques to workers from different backgrounds. That might just lead to a less liberal, less vocal AI community. If so, such courses might serve corporate interests in more ways than one.

無論谷歌的意圖如何,有跡象表明人工智能群體的同質性未來可能會降低。除了計算機科學博士學位之外,進入人工智能精英圈的新途徑正在出現:有志者可以修讀Udacity等在線教育公司提供的職業課程;科技巨頭也開設駐留項目,向不同背景的從業者教授人工智能技術。這或許會造就一個不那麼自由派、不那麼直言的人工智能群體。果真如此,這類課程將以不止一種方式服務於企業利益。

編譯:賈毅榮

