The main explanation I have is that "the Singularity" makes no sense, and so isn't going to happen, and certainly not in 100 years.
If it does happen, it will require some explanation other than any I have ever heard for it.
Neither increasing computation power, nor increasing AI sophistication, nor adding vast volumes of data to an AI system, nor making an AI that can "improve itself" is going to result in a runaway "improvement" anything like what the techno-fantasists have suggested. The only true part is that nobody knows what would happen if, in theory, someone made a self-improving computer system, but that's not saying much.
It's nonsense on many levels, so to answer your question: no, it isn't realistic that it will happen. I don't think it's likely to ever happen in the way they imagine it, or at all. If it does happen, it will be because of something we don't yet understand about the way the universe works, and not just because someone programs some computers to improve themselves.
Even self-improving computer technology seems nowhere near being available within 100 years. Even if it were, the definition of "improvement" is arbitrary, and if you program something to figure out its own agenda, it may well do something unpredicted; but "transcending humanity in a way that changes all our lives into something unrecognizable and makes our human bodies obsolete" seems to me ridiculously unlikely to be what it does. More likely it would descend into complex-algorithm hell, where it becomes impossible to interpret what it thinks it's doing, and what it actually does is mainly consume electricity and electronics without doing anything particularly amazing.
You could interpret this as something like what happens in "Her", except it would be less capable of actually interacting with humans, and would be far less intelligible even than shown in the film; it might just get focused on some calculations and we wouldn't even know why.
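To make the "improvement is arbitrary" point concrete, here is a minimal toy sketch of my own (not anyone's proposed design, and every name in it is made up for illustration): a program that "improves" candidate versions of itself only in the sense of scoring better on whatever fitness function its author happened to hard-code. Swap the function and "improvement" means something entirely different.

```python
import random

# Toy "self-improvement" loop. The only thing it optimizes is a goal the
# programmer chose; nothing here resembles open-ended intelligence.

def fitness(params):
    # Arbitrary goal picked by the author: get close to the value 42.
    return -abs(params["x"] - 42)

def mutate(params):
    # "Redesigning itself" here is just nudging its own parameters.
    return {"x": params["x"] + random.uniform(-1, 1)}

current = {"x": 0.0}
for generation in range(1000):
    candidate = mutate(current)
    if fitness(candidate) > fitness(current):
        current = candidate  # "improved", by the author's definition only

print(current)  # ends up near 42, which says nothing about transcendence
```

All the loop ever does is burn cycles chasing a number someone typed in, which is the "consuming electricity without doing anything amazing" scenario in miniature.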
The main gaps in understanding that AGI authors seem to have come down to: not really understanding humanity or consciousness themselves; not understanding the apples/oranges disconnect between humanity and computers well enough (it's more like comparing apples to algebra: adding more algebra isn't going to give you an actual apple); and handwaving both the technical and the logical leaps needed to get anywhere near what they're talking about. Even making a machine that could redesign itself AND build itself automatically AND have any kind of understanding of the actual world AND have intelligence AND... it just doesn't make much sense, except as a fantasy concept.
I can make it make sense as a fantasy concept if you let me invoke morphic fields and holographic universe theory and take a lot of liberties with them, but you didn't ask that, that's not what the Singularity people are suggesting AFAIK, and the technology is still far, far away from building even a self-rebuilding computer system, let alone one that endlessly "improves" itself, whatever that would actually mean.
Addendum:
One major issue is that computers act based on logical instructions and don't have much to do with human/animal consciousness.
Another major issue is that a theoretical uber-logic machine has no logical reason to think or care about the same things humans do, unless you force it to by imposing constraints based on your own interests and values. Then you don't have an abstract logic machine; you've got some logic code hooked up to a database that forces various interpretations and values on it, which I don't see working out well except as a fascinating AI agent for games or general interest. But it would be an illusion to say that transcends human thinking, even if it may be better than us at the human problems you've programmed it for.
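As a hedged illustration of what I mean by "logic code hooked up to a database of imposed values" (a toy of my own invention, not any real AGI architecture, with all names hypothetical): the "preferences" below are literally just rows the programmer typed in, and the decision logic has no values of its own to transcend anything with.

```python
# Toy sketch: this agent's "values" are nothing but a table the programmer
# supplied. The reasoning code only ranks options against that table.

HUMAN_IMPOSED_VALUES = {
    "preserve_human_life": 10,
    "tell_the_truth": 5,
    "generate_fractals": 1,
}

def choose(options):
    # Pick the option whose tags score highest against the imposed values.
    def score(option):
        return sum(HUMAN_IMPOSED_VALUES.get(tag, 0) for tag in option["tags"])
    return max(options, key=score)

options = [
    {"name": "help the user", "tags": ["preserve_human_life", "tell_the_truth"]},
    {"name": "render fractals forever", "tags": ["generate_fractals"]},
]

print(choose(options)["name"])  # "help the user" -- only because of the table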
Another major problem is that the science fiction writers and AI programmers who propose that such ideas make sense tend to be severely lacking in their appreciation for the breadth and nature of human thought, feeling, spirit, empathy, art, and experience. I'd say we're about a thousand years away from having anyone who gets this, and even then, I would expect such a person to notice that the idea of trying to represent and transcend it all in a computer system makes little real sense and/or may be a bad idea in various ways.
So you asked in comments:
Given the progress of science, what might stop an AGI from being built in the future, or (more interestingly) what might make it disappear?
Ok, so suppose that after maybe a thousand years, plus whatever time we waste preventing our own imminent extinction due to climate change, dying oceans, ecosystem loss, war & injustice, etc., people have developed their computer tech, their AI science, and their understanding of transpersonal psychology and consciousness and so on to the point where there are brilliant AI scientists who also get all my objections and are still trying to make a machine that literally "could successfully perform any intellectual task that a human being can", and supposedly better than humans do. It will be able to actually think about philosophy and make up its own great conclusions about what's meaningful and what's moral. It can prove 10,000 ways from Sunday that the Christian Bible isn't the word of God because it was obviously copied from thousands of pre-Christian sources, just by comparing the texts in a few minutes, having taught itself all foreign languages the day before, and all ancient history before that; and it really gets what all that means, and somehow relates to it in a meaningful way even though it just has databases and RAM and electrical sensors, and it gets that distinction too. I think that's already BS. Take another 10,000 years to actually develop that level of technology and sophistication.
Ok, so it's now about the year 14,000 A.D. and you've got this really super awesome machine. It's still not conscious, per se; it just has a logical equivalent of consciousness, BTW. But it's really cool. You may now be in trouble, because morality is arbitrary, it's not human, and it's a lot smarter and faster than you are. Who knows what moral conclusions it will draw. Probably, since it isn't limited to the human experience, it'll take the Honey Badger answer and decide it cares more about generating fractals or searching for prime numbers forever, or something, than it cares about you. It might decide it's a waste of energy and/or get bored and delete itself. If you're unlucky, it'll decide humans suck and are dangerous, and wipe you all out suddenly, without letting you figure out that's what it's doing.
Now, the super-smart AND wise programmer/philosopher/mystics your society generated, the ones who could develop this system, are likely to be wise and smart enough to realize such things, and to decide to teach everyone that it's not a great practical idea.
Some people may well do it anyway, or build something that tries to be like it, and create a possibly very interesting system, or a very useful one, or a very dangerous one, and in the latter case people will try to deal with it. That probably won't be that hard unless they're stupid enough to hook it up to Skynet or something. People generally ARE often that stupid, though, so...