Bilingual Reading | Google Comes under More Fire over Artificial Intelligence

DISCOVERING and harnessing fire unlocked more nutrition from food, feeding the bigger brains and bodies that are the hallmarks of modern humans. Google’s chief executive, Sundar Pichai, thinks his company’s development of artificial intelligence trumps that. “AI is one of the most important things that humanity is working on,” he told an event in California earlier this year. “It’s more profound than, I don’t know, electricity or fire.”

Hyperbolic analogies aside, Google’s AI techniques are becoming more powerful and more important to its business. But its use of AI is also generating controversy, both among its employees and the wider AI community.

One recent clash has centred on Google’s work with America’s Department of Defence (DoD). Under a contract signed in 2017 with the DoD, Google offers AI services, namely computer vision, to analyse military images. This might well improve the accuracy of strikes by military drones. Over the past month or so thousands of Google employees, including Jeff Dean, the firm’s AI chief, have signed a petition protesting against the work; at least 12 have resigned. On June 1st the boss of its cloud business, Diane Greene, conceded to those employees that the firm would not renew the contract when it expires next year.

The tech giant also published a set of seven principles which it promises will guide its use of AI. These included statements that the technology should be “socially beneficial” and “built and tested for safety”. More interesting still was what Google said it would not do. It would “proceed only where we believe that the benefits substantially outweigh the risks,” it stated. It eschewed the future supply of AI services to power smart weapons or norm-violating surveillance techniques. It would, though, keep working with the armed forces in other capacities.

Google’s retreat comes partly because its AI talent hails overwhelmingly from the computer-science departments of American universities, notes Jeremy Howard, founder of Fast.ai, an AI research institute. Many bring liberal, anti-war views from academia with them, which can put them in direct opposition with the firm in some areas. Since AI talent is scarce, the firm has to pay heed to the principles of its boffins, at least to some extent.

Military work is not the only sticking-point for Google’s use of AI. On June 7th a batch of patent applications made by DeepMind, a London-based sister company, were made public. The reaction was swift. Many warned that the patents would have a chilling effect on other innovators in the field. The patents have not yet been granted—indeed, they may not be—but the request flies in the face of the AI community’s accepted norms of openness and tech-sharing, says Miles Brundage, who studies AI policy at the University of Oxford. The standard defence offered on behalf of Google is that it does not have a history of patent abuse, and that it files them defensively in order to protect itself from future patent trolls. DeepMind’s patent strategy is understood to be chiefly defensive in nature.

Whatever Google’s intent, there are signs that the homogeneity of the AI community may lessen in future. New paths into the AI elite are opening up beyond a PhD in computer science. Hopefuls can take vocational courses offered by firms such as Udacity, an online-education firm; the tech giants also offer residencies that teach AI techniques to workers from different backgrounds. That might just lead to a less liberal, less vocal AI community. If so, such courses might serve corporate interests in more ways than one.

Compiled and translated by: Jia Yirong

