Race to the bottom: Physicist Max Tegmark says competition is too intense for tech executives to pause development to consider AI risks
… out-of-control race
… no one could “understand, predict, or reliably control”
… Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?
… a global summit on AI safety in November
… AI should be considered a societal risk on a par with pandemics and nuclear war
… some AI practitioners who believe it could happen within a few years.
… three achievements: establishing a common understanding of the severity of risks posed by AI; recognising that a unified global response is needed; and embracing the need for urgent government intervention.
… Dangerous technology should not be open source, regardless of whether it is bio-weapons or software

23-09-22