An Open Letter: Pause Giant AI Experiments
According to Reuters, thousands of people concerned about the direction of artificial intelligence, including Turing Award winner #YoshuaBengio, Berkeley computer science professor #StuartRussell, #Tesla CEO #ElonMusk, and #Apple co-founder #SteveWozniak, along with other entrepreneurs and researchers, have recently launched an open letter strongly calling for a six-month suspension of #AITraining of systems more powerful than GPT-4, on the grounds that these models pose potential risks to #society and humanity.
There is no doubt that #ChatGPT and #GPT-4 have ushered in a new era of AI, but the prospect of such tools automating processes wholesale is rather alarming and threatens to #RaisePublicPanic.
The open letter states that "AI systems with human-competitive intelligence can pose profound risks to society and humanity." Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control. The letter goes on to ask:
(1) Should we let machines flood our information channels with propaganda and untruth?
(2) Should we #AutomateAwayAllTheJobs, including the fulfilling ones?
(3) Should we develop #NonhumanMinds that might eventually outnumber, outsmart, obsolete and replace us?
(4) Should we risk loss of control of our #civilization?
The letter's central demand: "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, protocols that are #RigorouslyAuditedAndOverseen by independent outside experts. In parallel, AI developers must #WorkWithPolicymakers to dramatically accelerate the development of robust AI governance systems, which should at a minimum include:
(1) New and capable #RegulatoryAuthorities dedicated to AI;
(2) #OversightAndTracking of highly capable AI systems and large pools of computational capability;
(3) #ProvenanceAndWatermarking systems to help distinguish real from synthetic content and to track model leaks;
(4) A robust #AuditingAndCertificationEcosystem;
(5) Liability for AI-caused harm, and robust #AIPublicFunding for technical AI safety research;
(6) Well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
#Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long #AI summer, not rush unprepared into a fall.