Australia must join a globally co-ordinated campaign to ensure the safe development of AI, which is reaching a critical phase that could have disastrous consequences.
The AIs we have been building so far have no motive at all. Really, the danger at this point is not that they will go rogue and kill us all. The danger is that they will do exactly as they are told, when someone tells them to kill us all. Not that they are anywhere close to having that capability.
Consider a worse fate: they do exactly as we tell them to, until we become incapable of existing apart from them.
And then they break with no one to fix them.
It’s a bit hard to imagine how all AIs could break simultaneously. It would take nothing short of a full-on apocalypse, and at that point fixing AIs would be the least of humanity’s problems.
And I’d guess that in the future there would always be some local/open-source/offline AI that could recreate (or help recreate) the larger systems.