this is a major warning sign; a first step on the explicit path to a superintelligence explosion, an event already considered relatively likely and one which, in the absence of sufficient AI alignment progress, is overwhelmingly likely to permanently end all life, at least in the observable universe.
the time scale probably lies somewhere between a few years and a few decades, but in any case it's starting to seem increasingly unlikely that the only organization trying to actually figure out AI alignment is gonna accomplish that in time.
if you're currently working in AI development in any way, please stop. whether anything on earth survives this century is gonna be a matter of whether AI alignment is figured out before AI development gets far enough; by helping the latter along, you're making it even more likely that it finishes before the former.
on a gloomier note, if you hold all the philosophical beliefs required to think it could work, you may want to start preparing to abandon this timeline if the singularity starts happening and looks like it's not gonna go well.
edit: see also: are we in an AI overhang?