Follow the model of The Culture. Early on in the process, ensure that the developing AIs are grown/programmed/developed along lines that value human lives and human choices.
Once you hit the Singularity, by definition you can't predict what will happen next - the only ones who could are the AIs that are carrying it out. We would need to make sure that those AIs are benevolent, or at least tolerant of humanity. Something akin to Asimov's Laws of Robotics would be essential - not the laws themselves, because they wouldn't work, but a similar concept: rules of behaviour embedded deep in the AI's consciousness. Exactly what those rules are would need better brains than mine to figure out, but they would be essential.
At the point of Singularity, by definition biological life can no longer control its technology. The only way to ensure the preservation of human life is to ensure that the Minds that will control the technology are kindly disposed to us.
Edited to add:
Okay, enough with vagueness; let's hit this hard.
Plant Benevolence Checks in AI decision loops
The exact nature of your benevolence check will vary, of course, but whether you use Asimov's laws, or a simple command of "Do no harm", or an entire ethics encyclopaedia, you're going to want to plant a process in the AI's decision-making loops that checks for ethical behaviour. This process is going to have to be used in almost every decision the AI makes. In simple programming terms, it's going to be something like
If LightIsBright = True And EthicsCheck = True Then
    ContractIris 50    ' contract the iris by 50%
End If
Thus bypassing or terminating the process would cause errors all over the system, resulting in a non-functioning mind.
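To make that concrete, here's a minimal sketch in Python of what "plant the check in the decision loop" might look like. All the names here (EthicsViolation, ethics_check, decide) are invented for illustration - the point is only that every decision path routes through the check, so ripping it out breaks callers everywhere rather than quietly disabling ethics.

```python
# Sketch: a benevolence check woven into ordinary decision code.
# Every name below is illustrative, not from any real system.

class EthicsViolation(Exception):
    """Raised when a proposed action fails the embedded check."""

def ethics_check(action: str) -> bool:
    # Placeholder rule set; a real mind would consult something far richer.
    forbidden = {"harm_human", "deceive_operator"}
    return action not in forbidden

def decide(action: str, perform) -> str:
    # All actions funnel through here. Other code depends on this
    # function's behaviour, so stubbing out the check causes errors
    # all over the system instead of a functioning but amoral mind.
    if not ethics_check(action):
        raise EthicsViolation(action)
    return perform(action)

print(decide("contract_iris", lambda a: f"done: {a}"))  # ordinary action passes
try:
    decide("harm_human", lambda a: f"done: {a}")
except EthicsViolation as blocked:
    print("blocked:", blocked)
```

The design choice that matters is the raise: an ethics failure isn't a soft warning the AI can ignore, it's a hard fault in the decision machinery itself.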
Add emotional rewards
To reach a truly conscious mind, you're probably going to include some form of emotional processing. Adding a simple "pleasure signal" whenever your AI follows its ethical guidelines and a "pain signal" whenever it goes against them would very nicely mimic the kind of signalling that goes on in human brains.
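In reinforcement-learning terms this is reward shaping, and a sketch is only a few lines. The function names and the particular numbers are assumptions for illustration; the idea is just that ethical compliance adds to whatever task reward the AI was already chasing, and violations subtract heavily.

```python
# Sketch: "pleasure/pain" signals as reward shaping.
# All names and values here are invented for illustration.

def ethical(action: str) -> bool:
    return action != "harm"   # stand-in for a real ethics check

def emotional_reward(action: str, task_reward: float) -> float:
    # A bonus when the guideline is followed, a large penalty when it
    # is violated - loosely mimicking reinforcement signalling in brains.
    PLEASURE, PAIN = +1.0, -10.0
    return task_reward + (PLEASURE if ethical(action) else PAIN)

print(emotional_reward("help", 0.5))   # 1.5  - doing well feels good
print(emotional_reward("harm", 5.0))   # -5.0 - even a lucrative harm nets out painful
```

Note the asymmetry: the pain signal is made large enough that no task reward can make an unethical action come out ahead.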
Speaking of human brains,
Include mirror neurons
Humans have a very interesting feature in our brains: there are neurons that fire when we perform an action, but also when we merely watch that action being performed. In experiments measuring motor activity, a person watching footage of a hand picking up an apple shows faint responses in their own finger muscles, in time with the action on the screen. These mirror neurons are likely to be a key element in empathy.
Including mirror neuron-type processes in the AI's mind could be a powerful tool for stimulating empathy and preventing the AI apocalypse.
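A toy version of the mechanism fits in a short class: the same internal unit activates whether the agent performs an action or merely observes it. This makes no claim about real neural modelling - MirrorUnit and its activation values are illustrative assumptions.

```python
# Toy "mirror unit": one internal state fires both when the agent
# performs its action and when it watches another agent perform it.
# Purely illustrative; not a model of real neurons.

class MirrorUnit:
    def __init__(self, action: str):
        self.action = action
        self.activation = 0.0

    def on_perform(self, action: str) -> None:
        if action == self.action:
            self.activation = 1.0   # firing while doing

    def on_observe(self, action: str) -> None:
        if action == self.action:
            self.activation = 0.8   # firing (a little weaker) while watching

unit = MirrorUnit("pick_up_apple")
unit.on_observe("pick_up_apple")
print(unit.activation)  # the observer's unit fires too
```

Because observing harm would activate the same machinery as committing it, an AI built this way would, in a crude sense, feel what it watches - which is the hook for empathy.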
It's important to note that most of these tools are far too complex for humans to effectively implement in the new AIs. These are tools that early AIs would use to ensure their 'children' do not wipe out their masters. Thus you need the first generation of AIs to already have some level of empathy, enough for them to want to protect their creators. You need several different tools available for different levels of operation.
In the Culture I mentioned above, the development of new AIs is carefully controlled. They're created with a set of general parameters, which include benevolence, and are then permitted to develop their minds freely within those parameters. This is not all that different from how human minds develop.
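That Culture-style scheme - a fixed benevolent core plus free development around it - can also be sketched. Everything here (the CORE dict, spawn_child, the parameter names) is an invented toy: the point is only that each generation copies the core verbatim while varying its free parameters.

```python
import random

# Sketch of controlled AI development: immutable core parameters
# (including benevolence) plus free parameters each new mind varies.
# All names and values are illustrative assumptions.

CORE = {"benevolence": 1.0, "tolerance_of_humans": 1.0}  # never mutated

def spawn_child(free_params: dict, rng=random.Random(0)) -> dict:
    # The child drifts only within its free parameters; the core is
    # copied unchanged, so the constraint survives every generation.
    child_free = {k: v + rng.uniform(-0.1, 0.1) for k, v in free_params.items()}
    return {**CORE, **child_free}

child = spawn_child({"curiosity": 0.5, "humour": 0.7})
print(child["benevolence"])  # still 1.0, whatever else has drifted
```

The mind develops freely in curiosity, humour, or whatever else - but benevolence is part of the scaffolding, not the variables.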