AGI, artificial general intelligence, will most likely be achieved in the near future through increasingly elaborate AI architectures and designs. The training process of such an AI, however, relies on continual trial and error: the AI acts randomly at first and then gradually "learns" how to do things. Many fear that an AGI will become smarter than humans, become a supergenius, and then possibly pose an existential threat to humanity. While I do not deny the possibility of such superintelligence, I highly doubt that an AGI will somehow reach superintelligence quickly and without supervision, as the doomsday scenario suggests.
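To make that trial-and-error picture concrete, here is a minimal sketch of such learning: an epsilon-greedy agent on a toy three-armed bandit, which acts randomly at first and gradually prefers whatever has paid off. The payout probabilities and the decay schedule are illustrative assumptions, not any real training setup.

```python
import random

# A minimal sketch of trial-and-error learning: an epsilon-greedy agent
# on a three-armed bandit. Payouts and the decay schedule are
# illustrative assumptions, not values from any real system.
payout = [0.2, 0.5, 0.8]   # hidden success probability of each arm
value = [0.0, 0.0, 0.0]    # the agent's running reward estimate per arm
count = [0, 0, 0]          # how often each arm has been tried

epsilon = 1.0              # probability of acting randomly; starts at 100%
for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                    # explore: act randomly
    else:
        arm = max(range(3), key=lambda a: value[a])  # exploit experience
    reward = 1.0 if random.random() < payout[arm] else 0.0
    count[arm] += 1
    value[arm] += (reward - value[arm]) / count[arm]  # incremental mean
    epsilon = max(0.01, epsilon * 0.999)              # rely less on chance

print([round(v, 2) for v in value])   # estimates approach [0.2, 0.5, 0.8]
```

The agent starts out knowing nothing and behaving at random; only through thousands of supervised-by-design trials does its behavior come to look "learned."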
Rather, the AGI will not be able to control the computer it runs on, since AI does not have kernel access. Moreover, an AGI is simply an intelligent program, and it runs like any other program: it functions only when we run it, and it cannot run itself. This limitation means an AGI cannot simply reach superintelligence on its own. More likely than not, the AGI will not even have a mind of its own; after all, intelligence and consciousness are quite different things, and they often work against each other in terms of function. As the saying goes, "a creative camera would not be useful." An AI built to achieve optimal intelligence will probably not have a structure similar to our brain's, that is, a structure of self-propagated consciousness in which intelligence is only a minor part of the design.
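As an illustration of the kernel-access point above, here is a minimal sketch, assuming a typical Linux system: an unprivileged program cannot even read a root-only file, let alone control the machine, because the kernel, not the program, has the final say.

```python
# A minimal sketch, assuming a typical Linux system: an unprivileged
# process asks the kernel to open a root-only file. The kernel enforces
# the refusal; the program has no way around it from user space.
try:
    with open("/etc/shadow") as f:   # readable only by root on most distros
        f.read()
except PermissionError as e:
    print("kernel refused:", e)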
AGI will probably be a program of general intelligence that we can turn on and off as we please, without it caring at all; it would not resist or fight us.
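A small sketch of that on/off claim, assuming a POSIX system (SIGKILL is unavailable on Windows): an external controller can terminate an ordinary process with a signal the process has no way to refuse, no matter what it is computing.

```python
import signal
import subprocess
import time

# A hypothetical stand-in for an "AGI": an ordinary child process stuck
# in an infinite loop. Assumes a POSIX system with python3 on the PATH.
child = subprocess.Popen(["python3", "-c", "while True: pass"])
time.sleep(1.0)   # let it run for a moment

# SIGKILL is delivered unconditionally: a user-space process cannot
# catch, block, or ignore it, so nothing the child computes can help it.
child.send_signal(signal.SIGKILL)
child.wait()
print("child exit code:", child.returncode)   # -9: killed by SIGKILL
```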
Thus the robotic threat to humanity is minimal.