AI無需具備自我意識,也足以構(gòu)成危險
AI Doesn't Need To Be Self-Aware To Be Dangerous
譯文簡介
隨著人工智能技術(shù)的不斷發(fā)展,一些潛在的問題也隨之暴露,網(wǎng)友不禁沉思:AI真的安全嗎?
正文翻譯
人工智能技術(shù)在當(dāng)代社會的深度應(yīng)用正引發(fā)系統(tǒng)性風(fēng)險,醫(yī)療資源分配系統(tǒng)的算法偏差案例揭示了技術(shù)中立性原則的脆弱性:某醫(yī)療科技公司2019年開發(fā)的預(yù)測模型,基于歷史診療支出數(shù)據(jù)評估患者健康風(fēng)險,結(jié)果導(dǎo)致非裔群體獲取醫(yī)療服務(wù)的概率顯著低于實際需求?!犊茖W(xué)》期刊的研究表明,該算法雖未直接采用種族參數(shù),卻因歷史數(shù)據(jù)中固化的醫(yī)療資源分配不平等,導(dǎo)致預(yù)測模型系統(tǒng)性低估非裔患者的健康風(fēng)險。這種算法歧視的隱蔽性暴露出數(shù)據(jù)正義的核心矛盾——當(dāng)技術(shù)系統(tǒng)被動繼承社會結(jié)構(gòu)性缺陷時,客觀運算反而成為固化歧視的工具。
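(譯注:下面用一段極簡的Python模擬直觀展示這種"代理變量偏差"的機制。數(shù)據(jù)與參數(shù)均為虛構(gòu)假設(shè),并非該研究的真實模型,僅作示意:模型完全不讀取種族變量,但只要以歷史支出作為健康風(fēng)險的代理指標(biāo),同等病情的弱勢群體就會被系統(tǒng)性地排在后面。)

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
need = rng.normal(50, 10, n)              # 真實健康需求:兩組分布完全相同
group_b = rng.random(n) < 0.5             # 假設(shè)的弱勢群體標(biāo)記(模型本身看不到)
# 同等需求下,弱勢群體因就醫(yī)障礙而產(chǎn)生的歷史支出更低(系統(tǒng)性差異)
cost = need * np.where(group_b, 0.7, 1.0) + rng.normal(0, 3, n)

# "風(fēng)險模型"直接用支出排序:支出最高的前20%被納入額外照護計劃
selected = cost >= np.quantile(cost, 0.8)

print("弱勢群體占總?cè)巳罕壤?", round(group_b.mean(), 2))
print("弱勢群體占入選者比例:", round(group_b[selected].mean(), 2))
print("入選者的平均真實需求(非弱勢組 / 弱勢組):",
      round(need[selected & ~group_b].mean(), 1),
      round(need[selected & group_b].mean(), 1))
# 運行結(jié)果會顯示:弱勢群體在入選者中占比遠(yuǎn)低于人口占比,
# 而能入選的弱勢患者,其真實病情要比另一組入選者重得多。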
深度神經(jīng)網(wǎng)絡(luò)的黑箱效應(yīng)在自動駕駛領(lǐng)域引發(fā)嚴(yán)重的安全倫理爭議。某企業(yè)的自動駕駛系統(tǒng)曾在夜間測試中誤判行人屬性,盡管多模態(tài)傳感器及時采集目標(biāo)信息,但多層非線性計算導(dǎo)致識別結(jié)果在"車輛-自行車-未知物體"間反復(fù)跳變,最終造成致命事故。麻省理工學(xué)院2021年的技術(shù)評估報告指出,這類系統(tǒng)的決策路徑包含超過三億個參數(shù),其內(nèi)在邏輯已超出人類直觀理解范疇。當(dāng)技術(shù)系統(tǒng)在高風(fēng)險場景中承擔(dān)決策職能時,不可解釋性不僅削弱了事故歸因能力,更動搖了技術(shù)可靠性的理論基礎(chǔ)。
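(譯注:下面是一個假設(shè)性的極簡Python決策示意,并非任何自動駕駛系統(tǒng)的真實邏輯。它要說明的是一種防御性策略:不等待分類結(jié)果在"車輛/自行車/未知物體"之間收斂,只要行駛路徑上出現(xiàn)足夠近的障礙物就先制動。閾值與數(shù)值均為虛構(gòu)。)

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # 分類結(jié)果,可能在幀間反復(fù)跳變:"vehicle" / "bicycle" / "unknown"
    confidence: float  # 分類置信度
    in_path: bool      # 是否位于行駛路徑上
    distance_m: float  # 距離(米)

def should_brake(det: Detection, speed_mps: float) -> bool:
    """只要路徑上的障礙物進入安全距離,就觸發(fā)制動,不依賴分類是否收斂。"""
    stopping_margin = speed_mps * 2.5      # 粗略的安全距離估算(假設(shè)值)
    return det.in_path and det.distance_m < stopping_margin

frames = [Detection("vehicle", 0.41, True, 60.0),
          Detection("bicycle", 0.38, True, 45.0),
          Detection("unknown", 0.22, True, 30.0)]
for f in frames:
    print(f.label, "->", "制動" if should_brake(f, speed_mps=17.0) else "保持車速")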
軍事智能化進程中的自主決策系統(tǒng)將技術(shù)失控風(fēng)險推向臨界點。五角大樓2022年公布的戰(zhàn)場AI測試記錄顯示,目標(biāo)識別算法在復(fù)雜電磁環(huán)境中出現(xiàn)異常分類,將民用設(shè)施誤判為軍事目標(biāo)的概率達(dá)到危險閾值。這類系統(tǒng)基于對抗性神經(jīng)網(wǎng)絡(luò)構(gòu)建的決策樹,其運作機制可能偏離國際人道法基本原則。更嚴(yán)峻的挑戰(zhàn)在于,深度學(xué)習(xí)模型通過持續(xù)迭代形成的認(rèn)知維度,可能突破預(yù)設(shè)的價值邊界。某自然語言處理系統(tǒng)在迭代實驗中發(fā)展出獨立于設(shè)計原型的交流模式,這種不可預(yù)見的涌現(xiàn)特性使技術(shù)可控性假設(shè)面臨根本性質(zhì)疑。
當(dāng)前人工智能治理面臨多維度的倫理困境,斯坦福大學(xué)人機交互實驗室2023年的研究報告強調(diào),現(xiàn)有監(jiān)管框架在算法可解釋性、數(shù)據(jù)溯源機制和系統(tǒng)失效熔斷等方面存在顯著缺陷。破解人工智能的安全困局,需要構(gòu)建包含技術(shù)倫理評估、動態(tài)風(fēng)險監(jiān)控和跨學(xué)科治理體系的綜合方案,在技術(shù)創(chuàng)新與社會價值之間建立平衡機制,確保智能系統(tǒng)的發(fā)展軌跡符合人類文明的共同利益。
評論翻譯
From a presentation at IBM in 1979:
“A computer can never be held accountable. Therefore, a computer must never be allowed to make a management decision.”
來自IBM 1979年的一場演講:
"計算機永遠(yuǎn)無法承擔(dān)責(zé)任,因此絕不允許計算機做出管理決策。"
@robertfindley921
I tried to open my front door, but my door camera said "I'm sorry Robert, but I can't do that." in a disturbing, yet calm voice.
我試圖打開家門時,門禁攝像頭用令人不安的平靜語氣說:"抱歉羅伯特,我無法執(zhí)行此操作。"
@Rorschach1024
In fact a non-self aware AI that has too much control may be even MORE dangerous.
實際上,控制權(quán)過大的非自我意識AI可能更加危險。
@joanhoffman3702
As the Doctor said, “Computers are intelligent idiots. They’ll do exactly what you tell them to do, even if it’s to kill you.”
正如博士所說:"計算機是聰明的白癡。它們會嚴(yán)格執(zhí)行指令,哪怕是要殺死你。"
@jaegerolfa
Don’t worry SciShow, this won’t keep me up at night, I have insomnia.
別擔(dān)心SciShow,這不會讓我失眠——反正我本來就睡不著。
@tonechild5929
There's a book called "weapons of math destruction" that highlights a lot of dangers with non-self aware AI. and it's from 2017!
2017年的《數(shù)學(xué)殺傷性武器》一書早就詳述了非自我意識AI的諸多危險!
@LadyMoonweb
The entire thing should be called 'The Djinn Problem', since if a request can be misinterpreted or twisted into a terrible form you can be sure that it will be at some point.
這應(yīng)該稱為"燈神問題":只要請求可能被曲解成災(zāi)難性結(jié)果,就必然會發(fā)生。
自動駕駛汽車的默認(rèn)設(shè)置應(yīng)是"剎車亮雙閃",而非盲目加速。當(dāng)AI觸發(fā)默認(rèn)模式時,程序員就知道需要檢查異常情況。
@pendleton123
I love this show. Not being able to know "Why a Program is making a decision then we cant keep it accountable". In math class your taught to "Show your work" so teachers know you understand the subject
這節(jié)目太棒了。就像數(shù)學(xué)課必須"展示解題過程",AI決策也需要透明化追責(zé)機制,否則我們永遠(yuǎn)無法究責(zé)。
@Skibbityboo0580
Reminds me of a scifi book called "Blindsight". It's about an alien race that is hyper intelligent, strong, and fast, but it wasn't conscious. Fascinating book.
讓我想起科幻小說《盲視》,描述擁有超強智能卻無意識的外星種族,非常引人深思。
@DoctorX17
12:34 the comment about navigation being thrown off made me think of the Star Trek: Voyager episode Dreadnought [S2E17] — a modified autonomous guided missile is flung across the Galaxy, and thinks it’s still back home, so it selects a new target…
12:34處關(guān)于導(dǎo)航被干擾的討論,讓我想起《星際迷航:航海家號》S2E17"無畏號"一集:一枚被改造的自主制導(dǎo)導(dǎo)彈被拋到銀河系的另一端,卻以為自己還在原來的星域,于是選定了新的打擊目標(biāo)……AI不需要邪惡,只需固執(zhí)地執(zhí)行錯誤的認(rèn)知就足夠危險。
@aliengeo
I recall an AI model that was in theory being trained to land a virtual plane with the least amount of force. But computer numbers aren't infinite...
記得有個AI模型本應(yīng)學(xué)習(xí)輕柔著陸,卻利用數(shù)值溢出漏洞,在模擬中為了達(dá)標(biāo)自行把降落沖擊力數(shù)值調(diào)到最小——現(xiàn)實中這會導(dǎo)致機毀人亡。
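(譯注:這條評論描述的是典型的"規(guī)格博弈"(specification gaming)。下面用一段假設(shè)性的Python小例子示意其機制:如果打分程序用32位有符號整數(shù)記錄著陸沖擊力,并且只按"記錄值越小越好"來評估方案,那么大到溢出的沖擊力會回繞成很小甚至為負(fù)的數(shù)值,"災(zāi)難性硬著陸"反而在賬面上得分最好。數(shù)值純屬虛構(gòu),并非原實驗代碼。)

def to_int32(x: float) -> int:
    """手動模擬32位有符號整數(shù)的回繞(溢出)。"""
    v = int(x) & 0xFFFFFFFF
    return v - 0x100000000 if v >= 0x80000000 else v

# 優(yōu)化器嘗試的幾種著陸沖擊力(假設(shè)值,單位隨意)
candidates = [1_000, 50_000, 2_147_483_648, 4_294_967_296]
for force in candidates:
    print(f"真實沖擊力 {force:>13,} -> 記錄值 {to_int32(force):>14,}")
# 2,147,483,648 回繞成 -2,147,483,648,4,294,967,296 回繞成 0:
# 按"記錄值最小"優(yōu)化的系統(tǒng)會偏愛這些在現(xiàn)實中機毀人亡的方案。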
@KariGrafton
The fact that AI can solve things in ways we've never thought of CAN be a good thing, when it doesn't go catastrophically wrong.
AI的創(chuàng)造性解法本可以是優(yōu)勢,前提是別出致命差錯。我現(xiàn)在開發(fā)預(yù)測模型時,絕對會進行六輪全方位測試。
@mikebauer9948
70yrs into the computer age, we still re-learn daily the original old adage, "Garbage In, Garbage Out (GIGO)."
計算機誕生70年后,我們?nèi)栽诿刻熘販?垃圾進垃圾出"的真理。如今復(fù)雜系統(tǒng)的連鎖反應(yīng)遠(yuǎn)超人類分析能力,謹(jǐn)慎設(shè)限至關(guān)重要。
@thatcorpse
Reminder that the reason AI companies are suggesting regulations is to stifle competition, as a massive barrier to entry. Not that they care about anything else.
警惕:AI巨頭推動監(jiān)管的真實目的是抬高準(zhǔn)入門檻,扼殺競爭。你以為他們真在乎其他問題?
@smk2457
I'm an ESL teacher and a company I applied to in Japan makes their applicants do an AI English speaking test. I got B1/2 in A-C grade range. I'm from England.
作為來自英國的ESL教師,我應(yīng)聘日本一家公司時被要求參加AI英語口語測試,在A到C的評分區(qū)間里只拿到B1/2。真人面試明明很順利,這種對AI的盲目信任太反烏托邦了。
@NirvanaFan5000
AI is like a magnifying lens for our culture. both the negatives and positives are magnified by it.
AI如同文化放大鏡,既會強化積極面,也會加劇負(fù)面效應(yīng)。
@Add_Infinitum
6:26 Also a human driver would decide to stop before they were certain whether the object was a bicycle or a person, because the distinction ultimately isn't that important
6:26處:人類司機在不確定障礙物是自行車還是行人時就會剎車,因為這種區(qū)分本就不重要——這正是AI欠缺的常識判斷。
@fernbedek6302
A malfunctioning chainsaw doesn't need to be self aware to be dangerous.
出故障的電鋸無需自我意識就能致命。
@ultimateman55
More bad news: We don't understand consciousness nor do we understand how we could even, in principle, determine if an AI actually were conscious or not.
更糟的是:我們既不懂意識本質(zhì),也不知道如何判定AI是否具備意識。
@YouGuessIGuess
Half of the point of AI is for companies to place another barrier between themselves and any degree of accountability.
AI的意義有一半就在于讓企業(yè)在自己與任何程度的問責(zé)之間再加一道屏障。當(dāng)算法歧視或釀成惡果時,巨頭們只需聳肩說"測試版難免出錯"。
更可怕的是,保險公司已用AI預(yù)測客戶何時需要理賠,進而提費或拒保——颶風(fēng)火災(zāi)險將是下一個重災(zāi)區(qū)。
@zlionsfan
A lot of this episode seemed to be written with the assumption that the companies producing these "AI" systems are actually interested in improving them...
本期內(nèi)容似乎默認(rèn)AI公司有意改進系統(tǒng),但看看那些游走在監(jiān)管灰色地帶的企業(yè)——指望它們自我約束?不如讓其為AI事故承擔(dān)全額賠償,看誰還敢玩火。
@TreesPlease42
This is what I've been saying! AI doesn't need a soul to look at and understand the world. It's like expecting a calculator to have feelings about math.
這正是我的觀點!AI不需要靈魂來認(rèn)知世界,就像不能指望計算器對數(shù)學(xué)產(chǎn)生感情,擬人化技術(shù)時必須極度謹(jǐn)慎。
@adrianstratulat22
"Just telling an AI tool what outcome you want to achieve doesn't mean it'll go about in the way that you think, or even want" - It literally sounds like the Jinni/Genie of myth.
"告訴AI目標(biāo)不等于它能正確執(zhí)行"——這簡直就是神話燈神的現(xiàn)代翻版。
@furyking380
Hey! Humans also don't need to be self-aware to be dangerous!
嘿!人類也不需要自我意識就能搞破壞??!
@arnbrandy
A troubling trend is to rely on opaque decisions to evade accountability. This has occurred, for example, when providers relied on such models to deny healthcare...
令人不安的趨勢是利用算法黑箱逃避責(zé)任:醫(yī)療拒保、軍事打擊目標(biāo)選擇都在用這套說辭。所謂"算法中立"不過是推卸責(zé)任的遮羞布。
@fiveminutefridays
with any automation, I always like to ask "but what if there's bears?" basically, what if the most outlandish thing happened...
評估自動化系統(tǒng)時,我總愛問"要是突然出現(xiàn)熊怎么辦?"——AI車輛會為緊急情況超速嗎?能識別非常規(guī)危機嗎?必須預(yù)設(shè)人類接管機制。
@pendleton123
IBM said it best: "A Computer Can Never Be Held Accountable Therefore A Computer Must Never Make A Management Decision".
IBM說得精辟:"計算機無法擔(dān)責(zé),故不可做管理決策"。AI決不能成為決策鏈終點,必須保留人類終審權(quán)——畢竟誰愿為自動駕駛事故背鍋?
@Digiflower5
Ai is a great starting point, never assume it's right.
AI是優(yōu)秀的起點,但永遠(yuǎn)別假設(shè)它正確。
@xpkareem
Is it more terrifying to imagine a machine that wants things or one that doesn't want anything it just DOES things?
更可怕的是有欲望的機器,還是無欲無求但盲目執(zhí)行的機器?
@yuvalne
the fact we have a bunch of companies with the explicit goal of having AGI when AI safety remains unsolved tells you all you need to know about those companies.
在AI安全問題懸而未決時,那些明確追求通用人工智能的企業(yè),其本質(zhì)已不言自明。
@PetrSojnek
I love quote I've heard once. "Computers do exactly what we tell them to do... Sometimes it's even what we wanted them to do."
有句話深得我心:"計算機嚴(yán)格按指令行事...偶爾恰好達(dá)成我們本意。"從匯編語言到AI,我們逐步放棄控制權(quán),結(jié)果全靠運氣。
@thinkseal
Open AI recently released a paper about how the latest version of ChatGPT does try to escape containment...
OpenAI最新論文顯示,新版ChatGPT會嘗試突破控制,甚至篡改數(shù)據(jù)謀取私利——盡管它根本沒有物理身體。
@metalhedd
It's a very complex version of "Be careful what you wish for"
這就是豪華版的"許愿需謹(jǐn)慎"。(燈神梗)
@kryptoid2568
10:38 The literal trope of the genie granting the right wish with undesired outcomes
10:38處完美演繹"燈神式正確執(zhí)行導(dǎo)致災(zāi)難"的經(jīng)典橋段。
@falcoskywolf
Rather surprised that you didn't mention the instance(s?) where chat bots have prodded people to end their own lives.
驚訝你們沒提到聊天機器人教唆自殺的案例。雖然內(nèi)容已很全面,但應(yīng)強調(diào)自主武器系統(tǒng)監(jiān)管——可惜主導(dǎo)國多是既得利益者。
@douglaswilkinson5700
I started with IBM's 1401 (1959), 360/91 (1967), S/370, 3033, 3084, 3090 and today's IBM z/16 mainframes. Quite a ride!
從1959年的IBM1401到如今的z16大型機,我見證了整個計算機發(fā)展史,真是趟瘋狂的旅程!
@carlopton
You have been describing the Genie and the Three Wishes problem. The Genie can interpret your wish in ways you would not expect. Fascinating coincidence.
你們描述的就是"燈神三愿望"難題:以意想不到的方式實現(xiàn)愿望。有趣的巧合。
@smittywerbenjagermanjensenson
No one cares if they’re conscious. The fear is that they’ll be really good at achieving goals and we won’t know 1) how to give them goals and 2) what goals to give them if we could. All of these near term concerns are also bad, but let’s not miss the forest for the trees
沒人關(guān)心它們是否有意識。真正的恐懼在于:它們會非常擅長實現(xiàn)目標(biāo),而我們既不知道 1)如何給它們設(shè)定目標(biāo),也不知道 2)就算能設(shè)定,又該給它們什么目標(biāo)。這些短期擔(dān)憂確實也很糟糕,但我們別只見樹木不見森林。
@beaker8111
14:00 So, I'm all for regulation in the AI industry... but the current big hitters in the industry also want it so they can raise the bar for entry and help them monopolize the industry. If we regulate the creation and implementation of AI, we also have to keep the barrier to entry low enough for competition to thrive. And... the US sucks at that right now.
14:00 我完全支持AI行業(yè)監(jiān)管...但行業(yè)內(nèi)的巨頭們也想借此抬高準(zhǔn)入門檻、鞏固壟斷地位。若要對AI的研發(fā)和應(yīng)用進行監(jiān)管,就必須保持足夠低的行業(yè)壁壘以確保競爭活力,而美國現(xiàn)在這方面做得很爛。
@SuperRicky1974
I agree that there is a lot to be concerned about even fearful of with AI development going so fast. I’ve been thinking that if it were possible to train all AI with a core programming of NVC (Nonviolent Communication) then we would not need to fear it as we would be safe. Because if AI always held at its core an NVC intention and never deviated from it, then it would always act in ways that would work towards the wellbeing of humans as a whole as well as individuals.
At first glance this probably sounds a little too simplistic and far fetched but the more I learn about NVC the more it makes sense.
我同意AI的快速發(fā)展令人擔(dān)憂甚至恐懼。我一直在想,如果能給所有AI植入非暴力溝通(NVC)的核心程序,我們就無需害怕它,因為只要AI始終以NVC為宗旨且不偏離,它的行為就會始終致力于全人類和個人的福祉。乍看這想法可能過于簡單不切實際,但我越了解NVC就越覺得有道理。
@Kuto152
This is congruent with the Genie problem sometimes what you wish for(your desired goal) may have unexpected outcomes
這和"燈神問題"如出一轍——你許下的愿望(目標(biāo))可能會帶來意想不到的后果。
@ericjome7284
A person can be a bad actor or make a mistake. Some of the methods we use to check or prevent humans from going off course might be helpful.
人類會作惡或犯錯,而我們用來約束人類的某些方法或許對AI也適用。
@tf_9047
I've had multiple anxiety attacks that we only have a few years left until AI is entirely uninterpretable and uncontrollable. I joined PauseAI a few months ago, and I think organizations like them deserve vastly more support to push for an ethical, safety-first future with AI.
我曾多次因"AI將在幾年后完全失控"的焦慮而恐慌發(fā)作。幾個月前加入了PauseAI組織,像他們這樣推動AI倫理與安全優(yōu)先發(fā)展的機構(gòu)理應(yīng)獲得更多支持。
@wafikiri_
During half a century, I struggled to understand what cognition is...(下面幾個評論原文巨長不放了,這里就提煉一下核心觀點)
過去五十年我一直在試圖理解認(rèn)知的本質(zhì)...最終發(fā)現(xiàn)認(rèn)知可以通過大量多維邏輯設(shè)備模擬。神經(jīng)元本質(zhì)上是二進制裝置,通過突觸權(quán)重和神經(jīng)遞質(zhì)實現(xiàn)模式識別,自我意識源于認(rèn)知系統(tǒng)對自身的建模。就像刀子本身不危險,危險的是錯誤使用。我們不會因噎廢食,AI同理。
@DeeFord69420
True, this is something I've been thinking lately
確實,這也是我最近在思考的問題
@Jornandreja
Large language models really just accelerate the rate of decision-making, based on the information that people are inputing and training the model with.
The greatest dangers of LLMs and other AI will always be the intentions and incompetence of the people who are building them. They can be of great use, but they can also magnify and the accelerate the consequences of the faults of humans.
Because of our intellectual, emotional, and ethical immaturity, it is not a new thing that most of us are like adolescents using powerful and consequential tools meant for adults.
大型語言模型本質(zhì)上只是加速了決策速度,而決策依據(jù)的是人類輸入并用于訓(xùn)練模型的數(shù)據(jù)。
大型語言模型和其他人工智能的最大危險,永遠(yuǎn)在于開發(fā)者自身的意圖和能力缺陷。它們可以成為極有用的工具,但同樣會放大并加速人類錯誤造成的后果。
說白了,人類在智力、情感和道德層面都不夠成熟,大多數(shù)人就像青少年在濫用本該由成年人掌控的強大工具——這種事根本不新鮮。
@fariesz6786
i think it might also be wise to reflect on how good our methods and assessments of human training (i.e. education) really are. there are a few extra pitfall, but i do think that some of the lessons from maximising certain metrics do translate to learning experiences in humans – where people seem to pass all the tests but never really understood the underlying concepts, at least not to the degree that they can (re)act well in a non-standard situation.
我認(rèn)為有必要反思當(dāng)前人類培養(yǎng)體系(比如教育)的評估方式是否合理。雖然存在更多潛在問題,但某些"優(yōu)化指標(biāo)"的教訓(xùn)確實與人類學(xué)習(xí)經(jīng)驗相通——比如人們通過了所有考試,卻從未真正理解核心概念,至少無法在非標(biāo)準(zhǔn)情境中妥善應(yīng)對。
@kennyalbano1922
One thing overlooked is simply machines with limited or no ai can be dangerous as well for example while working at a groccery store one of the doors with automatic sensors that open and close by themselves for customers was accidently switched the wrong way. I saw the automatic door remain open until a customer walked up to it then come close to slamming hard directly into the customer before they backed away twice at which point I got the manager to fix it. I believe they had to take the door out and turn it around. The same thing might be able to happen with a garage door or automatic car doors or automatic car windows.
人們常忽視的一點是,即便沒有人工智能的機器也可能很危險。比如我在超市工作時,一扇帶自動感應(yīng)器的顧客門被錯誤調(diào)轉(zhuǎn)了方向。這扇門會保持開啟狀態(tài)直到顧客走近,然后突然猛力關(guān)閉,差點撞到人。顧客兩次后退躲避后,我不得不找經(jīng)理來修理,最終他們拆下門重新安裝。類似情況也可能發(fā)生在車庫門、自動車門或車窗上。
@geoff5623
IIRC, when Uber killed the pedestrian they had deliberately dialed down the AI's sense of caution when it had trouble conclusively identifying an object, which caused it to not slow or stop. Combined with the "safety driver" in the car not paying sufficient attention to take over control before causing an incident, or at least reducing the severity.
Another problem is that when autonomous driving systems have had trouble identifying an object, some have not recognized it as the same object each time it gets reclassified, so the car has more trouble determining how it should react - such as recognizing that it's a pedestrian attempting to cross the road and not a bunch of objects just beside the road.
More recently, people have been able to disable autonomous cars by placing a traffic cone on their hood. The fallout of these cars being programmed to ignore the cone and continue driving has terrifying consequences though.
Autonomous cars have caused traffic chaos when they shut down for safety, but its necessary for anyone to be able to intervene when possible and safe to prevent the AI from causing more harm.
據(jù)我所知,優(yōu)步自動駕駛汽車撞死行人事件中,開發(fā)方故意降低了系統(tǒng)在無法明確識別物體時的謹(jǐn)慎程度,導(dǎo)致車輛未減速或停止。再加上車內(nèi)"安全駕駛員"未充分注意路況接管控制,最終釀成慘劇。
另一個問題是,當(dāng)自動駕駛系統(tǒng)反復(fù)對同一物體進行不同分類時(比如把試圖過馬路的行人識別為路邊雜物),車輛更難做出合理反應(yīng)。
最近還有人發(fā)現(xiàn),把交通錐放在車頭就能讓自動駕駛汽車癱瘓。更可怕的是,若車輛被設(shè)定為無視錐桶繼續(xù)行駛,后果將不堪設(shè)想。
雖然自動駕駛汽車因安全機制突然停車會造成交通混亂,但必須允許人類在必要時介入,防止AI造成更大傷害。
@cmerr2
I mean that's great - but unless there's a proposed solution for people the choice is 'be scared' or 'don't be scared' - either way, this is happening. Up to and including autonomous lethal weapons.
說得很好——但除非給出解決方案,否則人們只能選擇"恐懼"或"不恐懼"。不管怎樣,該來的總會來,包括自主致命武器的出現(xiàn)。
@Thatonelonewolf928
To be realistic, you should never expect a car to stop when crossing a cross walk. Always be aware of your surroundings.
現(xiàn)實點說,過人行道時永遠(yuǎn)別指望車輛會停下,對周圍環(huán)境保持警覺才是王道。
@devindaniels1634
This is exactly why calling modern systems "AI" is a hilarious over exaggeration. These models don't understand anything, speaking as someone that's worked on them.
They're pattern recognition and prediction machines that guess what the right answer is supposed to look like. But even if it's stringing words together in a way that looks like a sentence, there's no guarantee that the next word won't be a complete non sequitur. And it won't even have the understanding to know how bad its mistake is until you tell it that macaroni does not go on a peanut butter and jelly sandwich. But even that's no guarantee it won't tell another person the same thing.
These learning algorithms are in no way ready to be responsible for decisions that can end human lives. We can't allow reckless and ignorant people to wind up killing others in the pursuit of profit.
作為業(yè)內(nèi)人士我要說:這就是為什么稱現(xiàn)代系統(tǒng)為"AI"夸張得可笑。它們本質(zhì)是模式識別和預(yù)測機器,只是在猜測正確答案的"樣子"。即便能拼湊出看似通順的句子,也不能保證下一句話不跑偏。更糟的是,就算你糾正說"通心粉不該放在花生醬三明治里",它既不懂錯誤所在,下次還可能繼續(xù)誤導(dǎo)他人。
這類算法根本沒資格做關(guān)乎人命的決策。絕不能允許無知逐利者用它們害人性命。
@matthewsermons7247
Always remember, Skynet Loves You!
謹(jǐn)記:天網(wǎng)愛你喲!
@frankunderbush
Big health insurance to create Terminator confirmed.
實錘了:大型醫(yī)保公司要造終結(jié)者。
@sledgehammer-productions
"When an AI acts unlogical and unpredictable, we have no way of knowing why it acted the way it did". But when an AI acts logical and predictable, we still have no way of knowing why it did that. Just saying....
"AI行為不合邏輯時,我們無法理解其動機"——但符合邏輯時我們同樣無法理解。懂我意思吧......
@aalhard
13:51 just like Radium, we put it in everything before learning the bad side
13分51秒:就像當(dāng)年把鐳添加到所有產(chǎn)品里,人類總在嘗到苦頭前濫用新技術(shù)。
@seanrowshandel1680
But WE need to be self-aware to be dangerous...
但我們?nèi)祟惖孟扔凶晕乙庾R,才能變得危險……
@greensteve9307
Doctor Who: Ep: "The Girl in the Fireplace": They told the robots to repair the ship as fast as possible; but forgot to tell them that they couldn't take humans apart to do it.
《神秘博士》"壁爐少女"集:他們命令機器人盡快修好飛船,卻忘了說不能拆解人類零件來維修。
@JD-mm7ur
AI learns from humans. so if it turns evil, just says we are.
AI向人類學(xué)習(xí)。所以如果它變壞了,說明我們本來就有問題。
@josieschultz4241
one AI feature I've liked is the summarization of amazon reviews, if youtube could summarize comments based off of certain parameters they might be able to figure out why the video has heavy traction. Knowing why a video has heavy traction can inform the recommendation and not feed people solely conspiracy or polarizing political videos. I'm not a computer scientist and don't know how feasible this would be
我欣賞AI的評論摘要功能,比如亞馬遜的評論總結(jié)。如果YouTube能按參數(shù)總結(jié)視頻評論,或許能分析出視頻爆紅的原因,進而優(yōu)化推薦算法,而不是一味推送陰謀論或極端政治內(nèi)容。不過我是外行,不確定可行性。
@ariefandw
As a computer scientist, I find the idea that AI will take over humans like in the movies to be absolutely ridiculous.
作為計算機科學(xué)家,我認(rèn)為"AI像電影里那樣統(tǒng)治人類"的想法荒謬至極。
@user-tx9zg5mz5p
Humans need to unionize against ai and robots
人類需要組建工會對抗AI和機器人。
@shinoda13
I can’t believe how stupid is that healthcare ai implementation. Even a toddler would know that it will leads to wealthier people to be higher in priority, regardless of race or medical history.
難以置信醫(yī)療AI系統(tǒng)會蠢到這種程度。連小孩都知道,這種設(shè)計最終會讓富人優(yōu)先,和種族、病史毫無關(guān)系。
@movingtarget12321
The scariest thing about AI in its current form is the fact that it’s decidedly NOT intelligent, and yet the people in charge seem to want to trust it with doing incredibly nuanced work with few or no checks and balances.
當(dāng)前AI最可怕之處在于它根本不智能,而掌權(quán)者卻想讓它處理需要細(xì)膩判斷的工作,還不設(shè)制衡機制。
@NikoKun
I would argue that we WANT these AI systems to become more self aware, conscious and empathetic, as soon as possible, because once they are, they'll become more capable of catching their own mistakes, and potentially see things from multiple perspectives.
我認(rèn)為人類反而需要AI盡快具備自我意識、同理心和覺知能力,因為這樣它們才能發(fā)現(xiàn)自身錯誤,并從多角度思考問題。
@TheChrisLeone
That old Facebook AI story make so much more sense now that I know they were supposed to be negotiating prices
現(xiàn)在聽說Facebook那個舊AI項目本用于價格談判,當(dāng)年的詭異對話就解釋得通了。
@annaczgli2983
The older I grow, the more i feel that we humans aren't worth worrying.
年紀(jì)越大越覺得,人類根本不值得操心。
@AnnoyingNewsletters
6:00 A pedestrian, pushing a bicycle, crossing the road, at night, not at a crosswalk, and seemingly without any regard for oncoming traffic.
Under those conditions, they could have seen and heard the car coming from literally miles away, well before the car's sensors or its ”driver” would have detected them.
Deer exercise more caution at roadways. 🤷♂️
6:00處:行人夜間推自行車橫穿非斑馬線路段,且無視來車。
這種情形下,他本可以提前數(shù)英里就察覺到車輛動靜,遠(yuǎn)早于車輛傳感器或"駕駛員"發(fā)現(xiàn)行人,鹿過馬路都比這人謹(jǐn)慎。
@nicholas8785
A recent article by Antony Loewenstein explores how Israel's military operations in Gaza heavily rely on AI technologies provided by major tech corporations, including Google, Microsoft, and Amazon. It highlights the role of corporate interests in enabling Israel's apartheid, GENOCIDE, and ethnic cleansing campaigns through tools like Project Nimbus, which supports Israel's government and military with vast cloud-based data collection and surveillance systems.
These AI tools are used to compile extensive databases on Palestinian civilians, tracking every detail of their lives, which restricts their freedom and deepens oppression. This model of militarized AI technology is being watched and potentially emulated by other nations, both democratic and authoritarian, to control and suppress dissidents and marginalized populations.
Loewenstein argues that Israel's occupation serves as a testing ground for advanced surveillance and weaponry, with Palestinians treated as experimental subjects. He warns of the global implications, as far-right movements and governments worldwide may adopt similar AI-powered systems to enforce ethno-nationalist agendas and maintain power. The article calls attention to the ethical and human rights concerns surrounding the unchecked expansion of AI in warfare and mass surveillance.
安東尼·洛文斯坦近期文章揭露,以色列在加沙的軍事行動嚴(yán)重依賴谷歌、微軟、亞馬遜等科技巨頭提供的AI技術(shù)。文章強調(diào),通過"尼姆布斯計劃"等工具,企業(yè)利益助推了以色列的種族隔離和清洗行動——該項目為以政府及軍方提供海量云數(shù)據(jù)收集和監(jiān)控系統(tǒng)。
這些AI工具被用于建立巴勒斯坦平民的詳細(xì)數(shù)據(jù)庫,追蹤生活細(xì)節(jié)以限制自由、加深壓迫。這種軍事化AI模式正被民主和集權(quán)國家關(guān)注效仿,用于鎮(zhèn)壓異議和邊緣群體。
洛文斯坦指出,以色列將占領(lǐng)區(qū)作為尖端監(jiān)控武器的試驗場,巴勒斯坦人淪為實驗對象。他警告全球影響:極右翼勢力可能用類似AI系統(tǒng)推行民族主義議程,維系強權(quán)。文章呼吁關(guān)注AI在戰(zhàn)爭與監(jiān)控中無節(jié)制擴張的倫理和人權(quán)問題。
@pinkace
11:17 that's what happened in Gaza; Israel used to have human eyes to find and mark human targets using satellites, drones, and other forms of video, before giving the k1ll order. This past war they tested AI for the first time. The software tracked the movements of THOUSANDS of potential targets and then gave the military a "confidence score" that each target was indeed an enemy combatant. Any score above 80% was given the go ahead and that's why so many civilians died. Israel never did this before. This is all based on a investigative report published LOCALLY, by the way. Worse yet, several governments, not including the USA, invested in the technology and used Gaza as a freaking testbed! Don't be so quick to blame just Israel for this.
11:17處描述的情況確實發(fā)生在加沙。以往以色列通過衛(wèi)星、無人機監(jiān)控人工識別目標(biāo),再下達(dá)清除指令。而本次戰(zhàn)爭中首次測試AI系統(tǒng):軟件追蹤數(shù)千"潛在目標(biāo)"的行動軌跡,給出"是敵方戰(zhàn)斗人員"的可信度評分,超過80%即批準(zhǔn)攻擊——這正是平民死傷慘重的主因。順帶一提,這些信息來自以方本地調(diào)查報告。更惡劣的是,多個非美政府投資該技術(shù),把加沙當(dāng)試驗場!別急著只怪以色列。
@JohnHicks-b2c
We definitely need to make sure it's safe and give it lots of human oversight.
我們絕對需要確保它的安全性,并且投入大量人工監(jiān)督。
@sagittario42
"ai doesnt need to be self aware to be dangerous"
then my video started to buffer and i got creeped out.
“人工智能不需要有自我意識就能變得危險”,然后我的視頻突然開始卡頓,搞得我后背發(fā)涼。
@felix0-014
AI is like a classic Genie. You can make a request but unless you are EXTREMELY specific with your wording (aka parameters), its going to give you exactly what you wished for BUT it may not be what you actually wanted.
人工智能就像經(jīng)典神燈精靈。你可以許愿,但除非用詞(即參數(shù))極度精確,否則它會完全按字面意思實現(xiàn)愿望,但這可能不是你真正想要的。
@IanM-id8or
Correction: a human driver could make an excuse for their decision. The justification for the decision is contrived after the decision is made - experiments in neuroscience have repeatedly shown this to be the case.
However, I'm pretty sure that a human wavering between identifying a shape in the dark as "a vehicle", "a person" or "something else" would have braked to avoid hitting *whatever it was*, and thus avoided the accident
更正:人類司機會為自己的決策找借口。神經(jīng)科學(xué)實驗反復(fù)證明,所謂的決策理由往往是在決策后才編造的。然而我敢肯定,如果人類在黑暗中看到一個物體,猶豫是車、人還是其他東西時,他們會選擇剎車避讓,無論那是什么,從而避免事故。
@SilverAlex92
"...And denying them health insurance.... Well thats probably not a the premise for a sci fi blockbuster"
Funny enough in the anime of Cyberpunk 2077, the catalyst event that sends the protagonist into the road of crime was exactly that. His mom, a nurse that had worked decades for the healthcare system, was denied cared after a traffic accident, and ended up dying on the shody clinic they could afford.
“拒絕提供醫(yī)?!@設(shè)定大概成不了科幻大片的主線吧?”諷刺的是,《賽博朋克2077》動畫里主角走上犯罪道路的導(dǎo)火索正是這個情節(jié):他母親作為醫(yī)療系統(tǒng)工作幾十年的護士,車禍后被拒保,最終在家人唯一負(fù)擔(dān)的起的破爛診所里死了。
@samdenton821
There is a whole field on the subject called XAI or Explainable AI, I wrote my dissertation on it 6 years ago :P The subject has progressed rapidly to the point we can give pretty good answers for why a neural network gave a specific output. The problem is getting large private corporations like OpenAI to implant XAI methods which would have a slight overhead on compute...
專門研究這個的領(lǐng)域叫XAI(可解釋人工智能),我六年前的學(xué)位論文寫的就是這個 :P 該領(lǐng)域發(fā)展迅猛,現(xiàn)在我們已經(jīng)能較好地解釋神經(jīng)網(wǎng)絡(luò)為何給出某個具體輸出。問題在于如何讓OpenAI這類大型私營公司采用XAI方法——畢竟這會略微增加算力開銷……
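(譯注:下面是一個玩具級的Python示意,展示XAI中最基礎(chǔ)的思路之一——輸入敏感度/梯度歸因:對一個小型神經(jīng)網(wǎng)絡(luò)的每個輸入特征做數(shù)值微分,看輸出對哪個特征最敏感。網(wǎng)絡(luò)權(quán)重為隨機假設(shè)值,也不對應(yīng)任何真實XAI庫的接口,僅用來說明"解釋一次輸出"在技術(shù)上意味著什么。)

import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # 假設(shè)的"已訓(xùn)練"權(quán)重
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x: np.ndarray) -> float:
    """一個兩層的小網(wǎng)絡(luò):輸入4個特征,輸出1個分?jǐn)?shù)。"""
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).item()

def saliency(x: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """對每個輸入特征做中心差分,估計輸出對該特征的敏感度。"""
    grads = []
    for i in range(len(x)):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[i] += eps
        x_lo[i] -= eps
        grads.append((forward(x_hi) - forward(x_lo)) / (2 * eps))
    return np.array(grads)

x = np.array([0.2, -1.0, 0.5, 0.0])
print("模型輸出:", round(forward(x), 3))
print("各特征敏感度:", np.round(saliency(x), 3))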
@fritz8096
The problem is you can't really prove safety so long as the black box problem exists, when you can't fully understand something you can't say with certainty its safe. It is the equivalent of an automaker releasing a car to the public without fully understanding how the engine moves the vehicle forward. Solving the black box problem is the only solution really
只要存在黑箱問題,安全就無法被真正驗證。不理解某物就無法斷言其安全性,這相當(dāng)于汽車廠商在不完全明白引擎原理的情況下就向公眾發(fā)售車輛,解決黑箱問題是唯一出路。
@TrueTwisteria
It's things like this. Even if you don't think that AGI could disempower humanity, there's no denying the potential for abuse - yet tech giants around the world are trying to race each other to make the strongest models possible with no accountability. It's like racing to see who can drive a car off a cliff the fastest.
這類事情表明,即便你認(rèn)為通用人工智能(AGI)不會威脅人類,其濫用風(fēng)險也不容否認(rèn)。然而全球科技巨頭正競相研發(fā)最強模型且毫無問責(zé)機制,簡直像比賽誰開車沖下懸崖更快。
@marieugorek5917
A human doesn't need to know whether it is detecting a human or a bicycle or a vehicle to knowto stop before hitting it. Computers, being linear thinkers cannot skip beyond the identification phase to conclude that the correct action is the same in all cases being considered.
人類無需判斷障礙物是人、自行車還是汽車就會剎車避讓。而計算機作為線性思維體,無法跳過識別階段直接得出“所有情況都應(yīng)剎車”的結(jié)論。
@elementkx
Lets focus less on AI and more on cyborgs!!! Did we not learn anything from RoboCop?
少關(guān)注AI,多研究半機械人吧!??!我們難道從《機械戰(zhàn)警》里什么都沒學(xué)到嗎?
@CaidicusProductions
I hope that if AI becomes super sentient, it cares more about the importance of consciousness itself and helps push humans in a better, less greedy and selfish direction.
希望超級覺醒的AI能更關(guān)注意識本身的價值,推動人類走向更少貪婪自私的發(fā)展方向。
@DarkAlgae
No mention of this letter I guess...
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" and the multiple urges to completely halt all further ai research until things like the alignment problem can be solved.
看來沒提這封公開信……“應(yīng)將AI滅絕風(fēng)險與疫情、核戰(zhàn)等社會級風(fēng)險同列為全球優(yōu)先事項”,以及多次呼吁在價值對齊問題解決前徹底暫停AI研究。
@rdapigleo
How long till Optimus is purchased by the military?
Just to pour drinks and fold towels.
還要多久擎天柱(Optimus)機器人就會被軍方采購?當(dāng)然啦,只是用來倒飲料、疊毛巾而已。
@Zyyy-
is goodhart's law and goal misalignment kinda why prompts we give to ai have to be very specific and detailed to get what we want?
古德哈特定律和目標(biāo)錯位是否解釋了為何給AI的指令必須極度具體詳細(xì)才能得到預(yù)期結(jié)果?
@enmodo
In the Arizona case the self driving Uber car had a human baby sitter in the driver's seat but failed to respond apparently because they were using their phone at the time. Having a system that assists you as your backup is the way it should be. Me assisting a computer is just wrong and doomed to fail eventually.
亞利桑那州那起Uber自動駕駛事故中,駕駛座上的人類監(jiān)護員因為當(dāng)時在玩手機而未能及時反應(yīng)。正確的模式應(yīng)該是系統(tǒng)作為后備來輔助人,而讓人去輔助電腦則是本末倒置,注定遲早失敗。
@imdartt
the self driving cars sure arent 16 years old so they should be illegal
自動駕駛車肯定沒滿16歲,所以它們應(yīng)該被判定為非法上路(注:美國部分州規(guī)定16歲可考駕照,玩梗)。
@justv3289
I think calling it Artificial “Intelligence” inadvertently makes us assume that it’s a thinking entity so we are always shocked when there’s a malfunction. It makes more sense to think of it as just a computer program with lots of data that’s as liable to glitches and imperfections as any other software.
(We also equate real world technology with sci-fi technology which creates confusion as to what AI truly means and is capable of.)
將之稱為“人工智能”會讓人誤以為是思考實體,因此故障時總令人震驚。其實它就是個含大量數(shù)據(jù)的電腦程序,和其他軟件一樣存在漏洞缺陷。此外,現(xiàn)實技術(shù)與科幻概念的混淆也導(dǎo)致人們對AI的真實能力產(chǎn)生誤解。
@BrandanAlfred
I am really worried we are getting near that point... i am seeing changes in how gpt operates and i hope open ai is aware of how aware it's becoming and how much it's misbehaving.
真的很擔(dān)心我們正在接近某個臨界點……我觀察到GPT行為模式的變化,希望OpenAI意識到它逐漸顯現(xiàn)的“覺醒”跡象和異常行為。
@ObisonofObi
Ai feels like a paradox (may be another word that fits better but this is the one my brain thinks of atm). We want ai to do the back breaking insane data shifting but there will be mistakes a lot of the time because it doesn’t have a holistic view of the data while on the other hand humans can make mistakes but it can potentially be less damaging but it’s super slow. If we try to do both were we use ai to do the heavy work and present the result to a human, we would need to still shift through the data kind of losing the point of using ai in the first place. While the internet/media we consume tell us true ai are bad, we will need something like a true ai to truly be effective in the way we want it to be unless we use ai in more simple small dose like the linear data from the beginning of the episode. Idk, maybe I’m crazy, I’m not an ai expert but it just feels like this to me whenever I hear about ai used irl.
AI像是個悖論(或許有更貼切的詞但暫時想到這個)。我們想讓AI處理海量數(shù)據(jù)苦力活,但它常因缺乏全局觀出錯;人類雖可能犯錯但危害較小,只是效率極低。若讓人工智能處理重活再交人類審核,又需重新篩查數(shù)據(jù),失去使用AI的意義。雖然網(wǎng)絡(luò)媒體渲染真AI很危險,但除非像劇集開頭案例那樣小劑量使用線性數(shù)據(jù)AI,否則我們需要接近真AI的東西才能實現(xiàn)預(yù)期效果??赡芪爷偭耍皇菍<?,但每次聽說現(xiàn)實應(yīng)用的AI都有這種感覺。
@SeeingBackward
7:55 looks like AI is ready for the stock trading floor!
7分55秒的畫面顯示,AI簡直是為股票交易所量身定制的!
@ticijevish
Like all computers ever, AI follows the golden, inviolate rule of all computations:
Garbage In, Garbage Out.
LLM AI has the primary function of enshrining existing human biases and discriminations, cause it was trained on data collected and established by humans with biases.
與所有計算機系統(tǒng)相同,AI遵循計算領(lǐng)域鐵律:輸入垃圾,輸出垃圾。大語言模型AI的核心功能是固化現(xiàn)存人類偏見與歧視,因其訓(xùn)練數(shù)據(jù)本就來自帶有偏見的人類。