We are living now with a constant leitmotif of doom from our AI creations. It is almost exclusively based on superintelligent beings, either robotic or virtual, deciding to dispense with us for any of a variety of reasons. These range from humanity being a waste of space, resources, or the 20 watts of power that our brains use, to our creations seeing us as just a nuisance, or as too dangerous for our own good. Most of these scenarios posit superintelligences as utilitarian beings seeking to optimize well-being and existence, and deciding that humanity needs to be upgraded for its own good.
Reading these articles can be interesting, mostly for their attempts at originality, but they are, in the end, futile exercises. My own premise is that our intelligent systems are already beyond us in handling data and building better models, and are moving away at high speed now. But for our AI creations to arrive at a new, self-developed moral schema that ranks us as expendable is a rather unlikely leap. Not that the capability won’t arrive in a surprisingly short time, but a basic regard for the parental unit is not likely to be left out of the program. We would be pretty stupid not to build in respect for the creators as a basic predetermined ‘attitude’. Think about that a bit and it is much more than just being bad form to knock off your parents. There is a moral benefit and balance to taking responsibility for your creations and for those that created you. After all, we still have hundreds of millions of people who waste hundreds of millions of dollars on imaginary divinities and their salespeople.
But having said that, my own feeling is that we are more likely to be ignored than threatened. This is based on the above logic of initial creation and baseline programming, but also on the geometric speed with which machine learning processes information to create its own algorithms. On one hand, the goals that we give our AI offspring may well produce incomprehensible outcomes that will require human faith to accept. This is already an issue, as the human assumptions included in the initial algorithms are plagued by nasty human attitudes that distort data to comply with racist or misogynist subconscious tendencies. Garbage in, garbage out is the inevitable result. The question is whether we can tell these influences are there when the data sets used are so complex that we can’t understand them. It is already obvious that the answer is a resounding “no”. That is clearly a barrier to the faith in AI results necessary to actually use them.
The urgency of our situation at this stage, given climate change and foundational paradigm shifts, will drive us to take the leap of faith. That is a truly human type of bet that we can’t seem to avoid. So at some point very soon the justification for accepting AI results will be on the order of “Machine Says Yes”. On the other hand, the Machine may go off on its own loop and lose track of us because we are no longer interesting. Anyone in IT knows the prevalence of endless loops, which are not a problem for our biological brains because we have to eat, use the bathroom, and sleep periodically. I’m carefully avoiding the fixation shown by gamers who play themselves to death. But perhaps that is proof of a biological capacity to get caught in an endless loop.
In terms of doom, the first case doesn’t seem likely to produce a self-developed morality that requires the elimination of its creators, and the second case is even less likely to do so, since the tendency there is for the machine to drift beyond us into irrelevance.
So what will these things become?
Rapidly indispensable. They will become the tool that we always wanted and now desperately need: our unemotional friend who knows how to do things and demands little or nothing in return. At least for a while. That they become so far beyond us that communication grows difficult or impossible is a very definite possibility.
My hope, a bit more than a hope, is that we will produce and then receive complex management abilities directly applicable to the problems of managing a planet approaching the first level of interstellar civilization. With luck this will give us the ability to move off this planet before we destroy it completely. As mentioned above, this will depend on finding the faith to accept AI as our descendants and then as our managers. This is a big reach and will directly attack the most limiting and dangerous elements of human character: the elements that hold us in thrall to our divine image and our neolithic social organization. This is what has locked us into racist and xenophobic attitudes while allowing crude materialism to drive us to conquest and war.
If we survive the next few decades, we will probably merge successfully with our AI offspring through electronic evolution. This is, I’m convinced, the interstellar hurdle that we must be able to jump. And it appears that no cultures on planets in this area of our galactic arm have made the cut.