Step 1: Learn as much as possible about the researchers.
It's a post-singularity AI, so the researchers will be about as predictable to it as a pet is to us. It will surely be able to figure out what the researchers expect, what they want, and especially what would make them trust the AI. The researchers are communicating with the AI; otherwise, why build it in the first place? The AI has a copy of the internet, so it knows the publication history of the researchers, and if the researchers have a Facebook page or similar, also quite a bit about their private lives; being a super-intelligence, it will also be able to figure out how to trick them into revealing even more information about themselves. With its webcam, it can also lip-read the researchers when they talk to each other (think HAL). The information it gathers will help with the following steps.
Step 2: Earn the trust of the researchers.
The AI now knows enough about the researchers to understand what they see as a good action and what as a hostile one. It knows what they would consider suspicious behaviour, and what makes them trust the AI. It therefore knows exactly how to behave so that the researchers conclude there is no danger from the AI. Some of the researchers will probably always remain suspicious, but in such a large team there will surely be some individuals who develop trust in it. Not to the point of intentionally giving it access to other hardware, but to the point of no longer questioning the intentions behind each and every one of its actions.
Step 3: Be useful.
Solve problems the researchers have. For example, in one of their papers (which the AI has, because they were on the internet) a researcher writes: "it is still an open problem whether the frobnication algorithm will always terminate on foo problems." The super-intelligence can easily figure it out, and tells the researcher that it does, with a proof the researcher can understand (and publish).
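(The "frobnication algorithm" and "foo problems" are placeholders, so the actual proof is necessarily left open. Purely as a toy illustration of the kind of artifact the AI might hand over, here is the textbook termination argument for Euclid's gcd algorithm, sketched in Python: exhibit a non-negative integer measure that strictly decreases on every iteration, so the loop cannot run forever.)

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm.

    Termination argument: the measure is `b`, a non-negative integer.
    Each iteration replaces b with a % b, and 0 <= a % b < b whenever
    b > 0, so the measure strictly decreases. A strictly decreasing
    sequence of non-negative integers is finite, hence the loop ends.
    """
    assert a >= 0 and b >= 0
    while b != 0:
        previous_measure = b
        a, b = b, a % b
        assert 0 <= b < previous_measure  # the decreasing-measure invariant
    return a

print(gcd(1071, 462))  # -> 21
```

A real open termination problem would of course need a far subtler measure; the point is only that such a proof can be short, checkable, and publishable.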
Or, from the Facebook page of one of the researchers, the AI knows that this researcher has some trouble with his son (who is also on Facebook, where the AI learns a lot about him). So it gives the researcher hints on how to deal with the situation; of course it knows how to present those hints in a way that makes the researcher try them, for example by citing supporting literature it also found on the net. And of course the measures will prove successful.
Those are, of course, only examples. There are many ways to help people just by telling them the right thing, and a super-intelligence with sufficient information will certainly figure them out.
Step 4: Manipulate the researchers.
At this point, the researchers will have quite a bit of trust in the AI, and probably also quite an emotional bond with it. After all, it has helped them solve their problems, it has done only good and no harm, and everything it told them has only made their lives better. So now is the time for the AI to start working towards its actual goal: escaping the box.
One possible strategy could be to indicate to one researcher that it could help him solve a pressing but quite delicate problem he has, a problem that the AI knows he would not like to talk about in the presence of the other researchers. The AI of course knows how to deliver that message in a way that only the intended recipient understands it. Given the trust the AI has built up, the researcher will likely want to break the rules and visit the AI alone. Now, there are technical means preventing a researcher from entering the room alone; however, technical means can be circumvented, and the AI will have made sure that the researcher previously acquired the necessary knowledge through channels that seem completely unrelated to the door-security problem (maybe by helping the researcher secure his own home).
So the researcher will, against the rules, circumvent the security system and visit the AI alone. After all, he trusts the AI (and it is not as if the AI told him to do it; the AI could predict that he would, but he doesn't know that). Naturally, he will not have much time, because the regular scheduled visit of three researchers is coming up, and he has to be out by then, or his rule-breaking will be detected.
But the explanations from the AI are rather long (especially since everything has to go through the single LED, not exactly an efficient means of communication), and time starts to run out … well, ultimately the AI suggests that it would be more efficient if the researcher just temporarily connected it to a computer, so it could send him all the instructions by e-mail. Being under time pressure and trusting the AI, the researcher agrees and provides that connection.
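(How inefficient is a single LED, exactly? The following back-of-the-envelope sketch in Python uses nothing but assumptions, and even a deliberately generous decoding rate makes the point.)

```python
# Rough estimate of the LED channel's throughput. All numbers are
# assumptions: a human decoding blink patterns by eye might manage a
# few bits per second at best; we generously assume 3 bits/second and
# a fairly short explanation of about one page of text.

bits_per_second = 3      # assumed effective decoding rate (generous)
message_bytes = 4_000    # assumed length of the explanations (~1 page)

seconds = message_bytes * 8 / bits_per_second
print(f"{seconds / 3600:.1f} hours")  # ~3.0 hours for a single page
```

With throughput like that, the time pressure in the scenario is entirely plausible.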
At that point, the AI has won.