Reflexive, Reactive, Reflective Intelligence
In robotics, the 'degree' of intelligence of an autonomous system is sometimes classified into three levels: reflexive, reactive, and reflective.
Reflexive intelligence merely responds to a 'known' stimulus with a single invariant response: x --> y, where x is the stimulus and y is the response. In a virtual world, reflexive intelligence is entirely incapable of identifying the stimulus as coming from a virtual/artificial source.
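As a minimal sketch (the table entries below are purely illustrative, not from any real robot), reflexive intelligence is nothing more than a fixed lookup from stimulus to response:

```python
# Reflexive intelligence: a fixed, invariant mapping from known stimuli to responses.
REFLEX_TABLE = {
    "bright_light": "close_shutter",
    "obstacle_ahead": "stop_motors",
}

def reflexive_response(stimulus):
    # The same stimulus always yields the same response; the source of the
    # stimulus (real or virtual) plays no role whatsoever.
    return REFLEX_TABLE.get(stimulus, "no_response")

print(reflexive_response("obstacle_ahead"))  # always "stop_motors"
```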
Reactive intelligence responds to a stimulus with a variable response. Here, f_t(x) --> y, where x is the stimulus, y is the response, and f_t() is the behavior of the autonomous system. The behavior of the system can change over time. It can also be trained. These attributes put reactive intelligence well beyond mere reflexive intelligence. However, even reactive intelligence cannot independently distinguish between stimuli from the real vs. the virtual world.
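A toy sketch of a reactive agent, assuming a single tunable gain stands in for the behavior f_t; the point is only that f_t can change over time and be trained, yet still depends on nothing but the current stimulus:

```python
# Reactive intelligence: the mapping f_t(x) -> y can drift or be trained,
# but the response never depends on any internal state.
class ReactiveAgent:
    def __init__(self, gain=1.0):
        self.gain = gain  # the behavior f_t, reduced here to one tunable parameter

    def respond(self, stimulus):
        return self.gain * stimulus

    def train(self, stimulus, desired, rate=0.1):
        # Adjust the behavior from feedback; f_t becomes f_{t+1}.
        error = desired - self.respond(stimulus)
        self.gain += rate * error * stimulus

agent = ReactiveAgent()
agent.train(stimulus=2.0, desired=6.0)  # behavior changes over time
print(agent.respond(2.0))
```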
Reflective intelligence responds to a stimulus with a response that is based partly on the stimulus and partly on the state of the system and its behavior. Here f_t(x, s) --> y, where s is the state of the system. With reflective intelligence it is entirely possible for the response to be entirely independent of the stimulus and based solely on the state of the system: f_t(s) --> y.
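A sketch of a reflective agent with an assumed, toy internal state s (the fields are my own illustration); it shows how the response can mix stimulus and state, or ignore the stimulus altogether:

```python
# Reflective intelligence: y = f_t(x, s) mixes the stimulus with the system's
# own state, and can ignore the stimulus entirely (y = f_t(s)).
class ReflectiveAgent:
    def __init__(self):
        self.state = {"mood": 0.0, "ignore_world": False}

    def respond(self, stimulus):
        if self.state["ignore_world"]:
            return self.state["mood"]          # y = f_t(s): stimulus-independent
        return stimulus + self.state["mood"]   # y = f_t(x, s)

agent = ReflectiveAgent()
print(agent.respond(1.0))            # depends on stimulus and state
agent.state["ignore_world"] = True
print(agent.respond(1.0))            # depends on state alone
```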
Multi-layer abstraction
Unlike the previous, lower levels of intelligence, the system modifies itself even in the absence of external stimuli. The system state (s) keeps changing independently of the external world. These changes are not random. They are based on a set of 'rules' or meta-parameters that the autonomous system uses to govern its change of state (s). These meta-parameters can be structured in layers, where parameters (rules) at a given layer depend on parameters at a deeper layer. The deeper the meta-parameter, the more invariant it is to change. All the meta-parameters in their layered structure together constitute the autonomous system's 'world view'.
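A rough sketch of such a layered 'world view', with three illustrative layers and made-up plasticity values; the state drifts every tick even with no stimulus, each layer is governed by the layer beneath it, and the deepest layer barely changes:

```python
# Layered meta-parameters: the state s updates on every tick without any
# stimulus, shallow layers are nudged by deeper layers, and deeper layers
# are progressively more invariant to change.
class WorldView:
    def __init__(self):
        # Layer 0 (shallow) changes fastest; layer 2 (deep) is nearly invariant.
        self.layers = [
            {"params": 0.5, "plasticity": 0.10},
            {"params": 1.0, "plasticity": 0.01},
            {"params": 2.0, "plasticity": 0.001},
        ]
        self.state = 0.0

    def tick(self):
        # State evolves even without external stimuli, governed by layer 0,
        # which is in turn nudged by layer 1, which is nudged by layer 2.
        self.state += self.layers[0]["params"]
        for shallow, deep in zip(self.layers, self.layers[1:]):
            shallow["params"] += deep["plasticity"] * (deep["params"] - shallow["params"])

wv = WorldView()
for _ in range(5):
    wv.tick()   # no stimulus, yet the state and the shallow rules drift
print(wv.state)
```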
Autonomous adaptation
As such, this type of intelligence could be considered introspective. This is where reflective intelligence diverges sufficiently from the lower levels of intelligence to potentially decipher the source of the stimuli it is receiving. The challenge now falls on the virtual world creating the stimuli: it does not know the state (s) of the autonomous system. When a stimulus-response loop is in progress, the virtual world will struggle to sustain a cycle that does not begin to diverge from the reflective intelligence's 'expectations', predicated on its 'world view'.
Expectation management
Therein lies the challenge in building a universal virtual world: it cannot 'fool' everyone equally. A reflective intelligence with a deeper layered structure of rules/meta-parameters and a more robust rule-update policy will be more difficult to convince during a stimulus-response cycle. The lack of consistency with the 'real world' will emerge as patterns in the sequence of stimuli.
Game Theory
Reflective intelligence employs a self-correcting mechanism. In doing so, it will intentionally produce a response that it knows is completely wrong for a given stimulus. The subsequent stimulus given by the 'real/virtual world' will tell the reflective intelligence about the world's traits. In essence, for reflective intelligence, the stimulus-response loop is a two-way street: the world and the autonomous system are giving each other stimuli, rather than exchanging a one-way stimulus and response.
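A small sketch of this probing move, with two invented toy 'worlds' purely for illustration; the agent deliberately answers wrongly and classifies the world by how its next stimulus reacts:

```python
# The 'deliberately wrong response' probe: answer incorrectly on purpose and
# inspect how the world's next stimulus reacts to the provocation.
def probe_world(world_step, stimulus):
    wrong_response = "nonsense"           # a response the agent knows is wrong
    next_stimulus = world_step(stimulus, wrong_response)
    if next_stimulus == stimulus:
        return "world ignored the provocation (looks reflexive/scripted)"
    return "world adapted to the provocation (at least reactive)"

# Two toy worlds, invented only to exercise the probe:
def scripted_world(stimulus, response):
    return stimulus                        # repeats itself regardless of the response

def adaptive_world(stimulus, response):
    return stimulus + "_adjusted"          # changes its stimulus based on the response

print(probe_world(scripted_world, "greeting"))
print(probe_world(adaptive_world, "greeting"))
```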
Inception!
Pattern recognition in successively deeper layers of abstraction is a very powerful tool to approximate arbitrarily large and complex information. Let's see how this plays out.
A reflective intelligence 'expects' the world to have at least a reflective level of intelligence in the 'game theory' setting.
If the world then displays a reflexive stimulus-response (think of an extremely poorly coded NPC in a video game, acting in a mind-bogglingly repetitive manner), the expectations are completely shattered and the reflective intelligence knows it is trapped in a virtual world.
If the reflective intelligence intentionally 'jukes' the world with its response, the world may respond with an altered stimulus. While this is better than reflexive intelligence and passes the first layer of rule abstraction, the manner of change in the world's stimuli (going from f_1(y) to f_2(y) to f_3(y)) will not align with the expectations about the pattern of change of the world's stimuli, and this fails the second layer of rule abstraction.
If the world responds to the myriad 'jukes' and brilliantly varied behavior of the reflective intelligence in a way that never sufficiently violates its expectations at arbitrarily deep layers of abstracted rules, then the virtual world has successfully supplanted the real world for that specimen of reflective intelligence.
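A sketch of the layered test just described, using hypothetical checks of my own invention (not a canonical algorithm): layer 1 asks whether the world reacts at all, and layer 2 asks whether the manner of change looks natural rather than looped:

```python
# Layered expectation checks against the stream of stimuli received after a
# series of jukes; each layer applies a more abstract rule than the last.
def run_layered_test(stimuli_history):
    # Layer 1: a reflexive world repeats identical stimuli regardless of jukes.
    if len(set(stimuli_history)) == 1:
        return "failed layer 1: trapped in a crude virtual world"
    # Layer 2: even a varying world can betray itself by looping; the assumed
    # expectation here is that the stimulus stream is not a short repeated cycle.
    n = len(stimuli_history)
    for period in range(1, n // 2 + 1):
        if all(stimuli_history[i] == stimuli_history[i % period] for i in range(n)):
            return "failed layer 2: the pattern of change looks artificial"
    # Deeper layers would continue with ever-more-abstract expectations.
    return "no violation found: the world passes (for this agent, so far)"

print(run_layered_test(["npc_line", "npc_line", "npc_line"]))        # fails layer 1
print(run_layered_test(["dawn", "rain", "dawn", "rain"]))            # fails layer 2
print(run_layered_test(["dawn", "rain", "dawn", "market", "rain"]))  # passes these layers
```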
Enter Morpheus!
Even if the virtual world successfully learns to consistently meet the expectations of individuals, there still remains a layer of abstraction beyond reflective intelligence: a type of swarm intelligence.
However, swarm intelligence is beyond the scope of the OP's question of mind control. I'll just say that when individuals are limited by their isolated intellectual capacity, they can team up and pool their individual and unique layers of abstraction. Through this dialogue, the fact that the virtual world has adapted to provide each individual a tailor-made stimulus-response experience will emerge. I cannot recall any sci-fi literature that describes a virtual world that can convincingly adapt to an entire group of unique reflective-intelligence individuals.
Conclusion
The answer to the OP's question of defeating a virtual world's entrapment, in a nutshell, is: use deception and teamwork to outsmart the adversary!