Having spent some forty years wondering and worrying about technologically driven paradigmatic change, I totally agree with the critical need to devote resources to understanding and preventing risks from self-replicating systems. I also have no idea how to do this at our current stage of development.
As I write about this kind of thing a lot, I will be brief here. We are, I think, damned in every direction. We have the ability to build systems that could become vastly more capable than ourselves. I don't think active malfeasance is an inherent risk. But ignorance combined with greed could inadvertently trigger a line of development that simply stops regarding us, its biological forebears, as relevant, and that is a very real threat.
But we have thoroughly fucked up our planet, mostly through ignorance, and we are severely challenged to even adequately acknowledge that. Even on conservative estimates, the timeline for avoiding major repercussions is very short, and we need all the ML assistance we can get.
And that doesn't even touch our failure to figure out how to manage ourselves politically on a planetary level.
My feeling is that we need to replace ourselves with far better day-to-day managers. Then, perhaps, we can focus on human innovation and creativity while our planet is managed for us.
And if, by then, we haven't been able to establish a clear set of moral and ethical standards that are logically imperative for any sentient mind, we deserve what we get.