Taking shots in the dark
The AI was born into a world run by humans and will have to live alongside them for the time being, so in order to do whatever it needs to do, it may benefit from understanding how humans respond to arbitrary situations.
It doesn't have perfect information on the nature of our minds (presumably nobody wanted to let it scan people's brains just yet), so it has to make a bunch of educated guesses and tweak some values until it gets something approximating a person who can experience a virtual world and react to events. What kind of events? Perhaps events the AI wants to bring about but doesn't yet have firsthand information on.
For example, at the largest scale (billions of emulated minds), if the AI wants to take over the world, it can come up with a strategy, create a virtual world populated by these educated guesses at what humans are like, and enact that strategy in the virtual world to check how effectively humanity might mobilize against its plan. If it has enough processing power on its hands, it can then reshuffle the parameters of its emulated minds, get a new set of differently-behaving people, and run group after group through this whole exercise as a Monte Carlo simulation, estimating its odds of success from the fraction of runs it wins.
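To make the Monte Carlo idea concrete, here's a minimal sketch in Python. Everything in it is a hypothetical stand-in: the trait names, the `humanity_mobilizes` threshold model, and the success criterion are invented for illustration, not a claim about how such a simulation would actually be built.

```python
import random

def sample_mind_params(rng):
    """Hypothetical stand-in: one 'educated guess' at a human mind,
    reduced to a couple of randomly reshuffled traits."""
    return {
        "coordination": rng.random(),
        "suspicion": rng.random(),
    }

def humanity_mobilizes(population):
    """Toy resistance model (pure invention): the takeover fails if the
    population's average coordination and suspicion exceed the midpoint."""
    avg = sum(m["coordination"] + m["suspicion"] for m in population)
    avg /= 2 * len(population)
    return avg > 0.5

def estimate_success(trials=5_000, pop_size=200, seed=0):
    """Monte Carlo loop: reshuffle the minds each trial and report the
    fraction of virtual worlds in which the plan goes unopposed."""
    rng = random.Random(seed)
    wins = sum(
        1
        for _ in range(trials)
        if not humanity_mobilizes(
            [sample_mind_params(rng) for _ in range(pop_size)]
        )
    )
    return wins / trials

if __name__ == "__main__":
    print(f"Estimated success rate: {estimate_success():.3f}")
```

The estimate converges on the true success rate of the toy model as the number of trials grows; the interesting (and hard) part is, of course, hidden inside the stand-in functions.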
If there's just one such person involved, it could be a more individualized test of strength, ingenuity, or personality. An AI intelligent enough to design a human-analogous mind from scratch can likely also come up with challenges that no human has ever faced; edge-case testing seems important for a system as complex as a brain!
EDIT: Honestly, it doesn't even have to be a human mind. Perhaps the purpose of this whole experiment is to figure out what other arrangements of information and code — whether simulated neurons or bitstrings — correspond to intelligence and consciousness. "Would aliens have to think like humans?" seems like an excellent question for a mind-generating AI.