“Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart... and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.

In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup—have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji.
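
As a concrete illustration of that last trick, here is a minimal sketch using OpenAI's Python SDK (v1 or later); the emoji prompt and the model name are illustrative assumptions, not a documented benchmark.

```python
# Minimal sketch: asking an LLM to identify a film from an emoji plot summary.
# Requires the `openai` package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any capable chat model would do
    messages=[
        {"role": "system", "content": "You identify films from emoji plot summaries."},
        {"role": "user", "content": "Which film is this? 🦈🏖️🚤😱"},
    ],
)
print(response.choices[0].message.content)  # e.g. "Jaws"
```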

These models stand to transform humans’ relationship with computers, knowledge and even with themselves. Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.

This bubbling mixture of excitement and fear makes it hard to weigh the opportunities and risks. But lessons can be learned from other industries, and from past technological shifts. So what has changed to make AI so much more capable? How scared should you be? And what should governments do?

In a special Science section, we explore the workings of LLMs and their future direction. The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data. Once exposed to a sufficient number of labelled examples, they could learn to do things like recognise images or transcribe speech. Today’s systems do not require pre-labelling, and as a result can be trained using much larger data sets taken from online sources. LLMs can, in effect, be trained on the entire internet—which explains their capabilities, good and bad.
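
To make the contrast concrete, here is a toy PyTorch sketch of the self-supervised objective behind LLMs: each position's training target is simply the next token of raw text, so no human labelling is required. The tiny model and shapes are placeholders, not a real architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 1000, 64

# Stand-in for a real transformer: embed each token, then predict the next one.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)

# A batch of "raw internet text", already tokenised into integer ids.
tokens = torch.randint(0, vocab_size, (8, 128))

# Self-supervision: inputs and targets are the same text, shifted by one.
inputs, targets = tokens[:, :-1], tokens[:, 1:]

logits = model(inputs)  # shape: (batch, seq_len - 1, vocab_size)
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()  # one gradient step of "training on the internet", in miniature
```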

Those capabilities became apparent to a wider public when ChatGPT was released in November. A million people had used it within a week; 100m within two months. It was soon being used to generate school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.

Some of these produced strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT has been accused of defamation by a law professor. LLMs produce answers that have the patina of truth, but often contain factual errors or outright fabrications. Even so, Microsoft, Google and other tech firms have begun to incorporate LLMs into their products, to help users create documents and perform other tasks.

The recent acceleration in both the power and visibility of AI systems, and growing awareness of their abilities and defects, have raised fears that the technology is now advancing so quickly that it cannot be safely controlled. Hence the call for a pause, and growing concern that AI could threaten not just jobs, factual accuracy and reputations, but the existence of humanity itself.

Extinction? Rebellion?

The fear that machines will steal jobs is centuries old. But so far new technology has created new jobs to replace the ones it has destroyed. Machines tend to be able to perform some tasks, not others, increasing demand for people who can do the jobs machines cannot. Could this time be different? A sudden dislocation in job markets cannot be ruled out, even if so far there is no sign of one. Previous technology has tended to replace unskilled tasks, but LLMs can perform some white-collar tasks, such as summarising documents and writing code.

機(jī)器搶走人類飯碗的擔(dān)憂存在幾個(gè)世紀(jì)了。但迄今為止,新技術(shù)創(chuàng)造出新工作取代了被其淘汰的舊工作。機(jī)器往往能完成某項(xiàng)工作,但也有無能為力的時(shí)候,所以那些能為機(jī)器所不能為的人變得炙手可熱。這一次會(huì)有所不同嗎?不排除就業(yè)市場突然陷入混亂的可能,不過到目前為止尚未出現(xiàn)這種情況。以前的技術(shù)往往取代非技術(shù)類工作,但大型語言模型(LLMS)可以完成某些白領(lǐng)工作,例如總結(jié)文檔和編寫代碼。

The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”. But 25% said the risk was 0%; the median researcher put the risk at 5%. The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.

Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today’s technology. And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future. Moreover, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.) Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.

Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual-property rights. As the technology advances, other problems could become apparent. The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.

So far governments are taking three different approaches. At one end of the spectrum is Britain, which has proposed a “light-touch” approach with no new rules or regulatory bodies, but applies existing regulations to AI systems. The aim is to boost investment and turn Britain into an “AI superpower”. America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.

The EU is taking a tougher line. Its proposed law categorises different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music-recommendation to self-driving cars. Some uses of AI are banned altogether, such as subliminal advertising and remote biometrics. Firms that break the rules will be fined. For some critics, these regulations are too stifling.
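
The tiered structure is easy to picture in code. The sketch below is a hypothetical Python rendering; the tier names and use-case assignments paraphrase the examples above rather than the draft act's actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g. music recommendation: little or no extra duty
    LIMITED = 2       # modest transparency obligations
    HIGH = 3          # e.g. self-driving cars: stringent monitoring and disclosure
    UNACCEPTABLE = 4  # e.g. subliminal advertising, remote biometrics: banned

# Illustrative assignments only, paraphrasing the examples in the text.
USE_CASES = {
    "music-recommendation": RiskTier.MINIMAL,
    "self-driving-car": RiskTier.HIGH,
    "subliminal-advertising": RiskTier.UNACCEPTABLE,
    "remote-biometrics": RiskTier.UNACCEPTABLE,
}

def review(use_case: str) -> str:
    """Return the (toy) regulatory consequence for a given use of AI."""
    tier = USE_CASES.get(use_case, RiskTier.LIMITED)
    if tier is RiskTier.UNACCEPTABLE:
        return f"{use_case}: banned outright; fines for firms that break the rules"
    return f"{use_case}: {tier.name.lower()}-risk; scrutiny rises with the tier"

print(review("self-driving-car"))
```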

But others say an even sterner approach is needed. Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release. China is doing some of this, requiring firms to register AI products and undergo a security review before release. But safety may be less of a motive than politics: a key requirement is that AIs’ output reflects the “core value of socialism”.

What to do? The light-touch approach is unlikely to be enough. If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new rules. Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible. Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.
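
One way to picture such disclosure is as a structured “model card”. The toy schema below is purely hypothetical: every field name is invented for illustration and comes from no actual rulebook.

```python
from dataclasses import dataclass

@dataclass
class ModelDisclosure:  # hypothetical schema, not any regulator's real template
    model_name: str
    training_data_sources: list[str]  # how the system was trained (Python 3.9+)
    operating_description: str        # how it operates
    monitoring_regime: str            # how it is monitored
    last_inspection: str              # e.g. an ISO-8601 date

card = ModelDisclosure(
    model_name="example-llm",
    training_data_sources=["public web crawl", "licensed corpora"],
    operating_description="next-token prediction with safety filters",
    monitoring_regime="quarterly third-party audit",
    last_inspection="2023-04-01",
)
print(card)
```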

This could allow for tighter regulation over time, if needed. A dedicated regulator may then seem appropriate; so too may intergovernmental treaties, similar to those that govern nuclear weapons, should plausible evidence emerge of existential risk. To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.

This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be added in future. But the time to start building those foundations is now.

這項(xiàng)強(qiáng)大的技術(shù)帶來了新的風(fēng)險(xiǎn),但也提供了非凡的機(jī)遇,在兩者之間取得平衡就得小心翼翼?,F(xiàn)在采取慎重的模式能為將來增添規(guī)則打下基礎(chǔ),但現(xiàn)在是時(shí)候開始打基礎(chǔ)了。