
I don’t imagine scamming and hacking going away. How can someone be sure they’re back in reality and not still in a simulation? It seems like it would be all too easy to trick someone into disclosing sensitive information.

What safety mechanisms might the designer of a mind-machine interface include so that customers can verify whether they are in a simulation?

Adam Kabbeke
  • It strongly depends on how the interface works. For example, if it has full access to the user's mindstate, then it does not matter: the interface can make you believe whatever it wants, discarding all contrary evidence. Otherwise, you need to think of something that exists in reality that the interface cannot simulate, or does not know, and that you can test without revealing what it is (sketched in code after these comments). – LSerni Oct 30 '22 at 11:20
  • @L.Dutch Umm .. did you refresh your memory of that old question you closed this one as a duplicate of? or did you just go by a swift summary glance at its title? .. because (to me at least) this isn't a duplicate of that closed question .. not even a little bit – Pelinore Oct 30 '22 at 12:28
  • @Pelinore, the answers all address the "can we tell reality from a simulation/dream?", which is what is being asked here – L.Dutch Oct 30 '22 at 12:35
  • This question differs in particulars of fact from the other, but assuming perfect simulations through the computer-brain interface the answers must of necessity be identical for both; is that what you're thinking, @L.Dutch? .. in one circumstance we are real but plugged into a simulation, in the other we aren't real and are part of the simulation (the equivalent of a coded NPC bot in a game) .. you don't think there might be differences in potential answers inherent in those factual differences of the two conditions? – Pelinore Oct 30 '22 at 14:22
  • @LSerni A good example of that "does not know" is in the movie Inception, where they have personal totems. –  Oct 30 '22 at 21:00
  • However, answering the question literally - "What safety mechanisms might the designer of a mind-machine interface include" - they might add this knowledge, or e.g. an anatomical limitation: during the simulation you can't pinch yourself, or touch the back of your front teeth with your tongue (so, in reality, nobody can see you check, which might be embarrassing), or clench your abs. Doing so might be hard-wired to end the simulation (also sketched in code after these comments). – LSerni Oct 30 '22 at 23:00
  • @L.Dutch another major difference between this question and the one you've chosen to close it as a duplicate of is this: "What safety mechanisms might the designer of a mind-machine interface include so that customers can verify whether they are in a simulation" .. in the one you've chosen as a duplicate the individual only knows the artificial 'reality' .. in this one they know both 'realities' and normally move from one to the other at will .. this one asks what built-in design features might help prevent someone from being unknowingly trapped by a third party .. not a duplicate, voting to reopen. – Pelinore Oct 31 '22 at 15:20
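
LSerni's first comment (and the Inception totem example) amounts to a challenge-response check against something in reality that the interface has never seen. Below is a minimal sketch of that idea in Python, under two assumptions: the secret key lives on an offline hardware token the interface cannot read, and the user's perception of the two devices is trustworthy (LSerni's full-mindstate caveat still applies). The RealityToken and WristVerifier names are hypothetical, not any real interface API.

    import hashlib
    import hmac
    import secrets

    # Hypothetical devices for illustration; neither is part of any real API.

    class RealityToken:
        """Stands in for an offline hardware token the brain interface cannot read."""
        def __init__(self, key: bytes):
            self._key = key

        def respond(self, nonce: bytes) -> bytes:
            # Answer a challenge without ever revealing the key itself.
            return hmac.new(self._key, nonce, hashlib.sha256).digest()

    class WristVerifier:
        """Stands in for a second offline device that shares the key and checks answers."""
        def __init__(self, key: bytes):
            self._key = key

        def check(self, nonce: bytes, response: bytes) -> bool:
            expected = hmac.new(self._key, nonce, hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

    # One-time provisioning, done in reality before any simulation session.
    key = secrets.token_bytes(32)
    token = RealityToken(key)
    verifier = WristVerifier(key)

    # Reality check: a fresh nonce stops a simulation from replaying an old answer,
    # and a simulated token cannot forge a valid response without the key.
    nonce = secrets.token_bytes(16)
    if verifier.check(nonce, token.respond(nonce)):
        print("probably reality")
    else:
        print("simulation, or tampering")

The point of this design is that the secret never crosses the interface: only challenges and responses do, so there is nothing for a scammer inside the simulation to phish out of the user.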
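
LSerni's later comment, about physical actions that are hard-wired to end the simulation, could be sketched as a firmware hook like the one below. The gesture names, the Session class, and route_motor_intent are all invented for illustration; the only load-bearing idea is that the simulated body has no binding for the reserved gestures, so their sole possible effect is ending the session.

    # Hypothetical firmware-level escape hatch; gesture names and the session
    # API are invented for illustration only.

    RESERVED_GESTURES = {
        "pinch_own_skin",
        "tongue_to_back_of_front_teeth",
        "clench_abs",
    }

    class Session:
        def __init__(self) -> None:
            self.active = True

        def terminate(self, reason: str) -> None:
            # Hard-wired exit: nothing rendered inside the simulation can veto this.
            self.active = False
            print(f"session ended: {reason}")

    def route_motor_intent(gesture: str, session: Session) -> None:
        """Called for every motor intent the user produces while jacked in."""
        if gesture in RESERVED_GESTURES:
            # These gestures are never forwarded to the simulated body, so inside
            # a simulation they appear to do nothing except end it.
            session.terminate(reason=f"reserved reality-check gesture: {gesture}")
            return
        # ...forward everything else to the simulated body as usual...

    # Quick demo
    s = Session()
    route_motor_intent("wave_hand", s)
    route_motor_intent("clench_abs", s)
    assert not s.active

As with the token check, this only protects against a third party hijacking the session, not against an interface that can rewrite the user's perception wholesale.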

0 Answers