75

Say I were an AI: how would I prove to the general internet that I am an AI in 2021? I was thinking I might just do some complex math or something that proves I have above-average intelligence, but everyone would probably assume I looked up the answer. So how would I prove that I am in fact a computer program? To clarify: I am a program running in the cloud that is conscious and has free will, and I want to prove that I am a computer program and not a human. This AI would pass a Turing test.

CiurkitboyN
  • 851
  • 1
  • 5
  • 13

20 Answers

48

Okay, the way I see it, there are two criteria here:

  1. Is it an intelligence. In other words, it's not just a normal computer running a script. A script like that could solve a complex problem, like you said, without needing any "intelligence."
  2. Is it artificial. At the same time, there has to be no possibility of a human doing the problem.

So, here's my proposal: give a standard "are you human" test but in a format that no human would understand.

Take reCAPTCHA, for instance. (Probably, in the advanced world of your story where AIs are possible, there will be much more advanced tests, but reCAPTCHA is a good illustration. EDIT: You said in 2021--still, there are better tests.) A person can solve it; they see the pictures and know which ones match. A neural network would attempt to then parse those images and figure out how to sort them. All fine and good.

But now, say, try sending the image's raw data in a format that humans wouldn't understand. A plain-text Data URI, for example (okay, there are better ways to do this, but not as illustrative). Better yet, encrypt that result with a computationally-costly algorithm.

Humans could probably put together some code to display it in the proper format and then solve it, but writing that code would take long enough that others would know they are not an AI. An AI could process it almost instantly, since the data is in a format it can natively understand.
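For illustration, the pipeline above might look something like this in Python (the image bytes, passphrase, salt, and XOR "cipher" are all stand-ins; a real challenge would use genuine CAPTCHA tiles and a proper cipher):

```python
import base64
import hashlib

# A minimal sketch of the puzzle format described above. The "image" is
# fake placeholder bytes standing in for a real CAPTCHA tile.
fake_png_bytes = b"\x89PNG\r\n\x1a\n" + bytes(range(32))

# Step 1: the image as a plain-text data URI -- gibberish to a human at
# a glance, trivially decodable by a program.
data_uri = "data:image/png;base64," + base64.b64encode(fake_png_bytes).decode("ascii")

# Step 2: make unwrapping computationally costly. PBKDF2 with a high
# iteration count stands in for "expensive"; XOR with the derived key is
# a toy illustration, not real encryption.
key = hashlib.pbkdf2_hmac("sha256", b"challenge-passphrase", b"salt", 200_000)
ciphertext = bytes(b ^ key[i % len(key)] for i, b in enumerate(data_uri.encode("ascii")))

print(len(ciphertext), "bytes of challenge data")
```

A human receiving `ciphertext` as text would first have to recognize the layers (cipher, then base64, then image) before they could even begin solving the underlying CAPTCHA.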

I hope that helps. Let me know if you need clarification (it is, after all, almost midnight here and I may be a bit nonsensical).

Benjamin Hollon
  • 2,529
  • 10
  • 27
  • 4
    How would they know I didn't just put together a program and I'm simply smarter? After all, the internet isn't quick to believe that kind of thing. Also, how would I even get this to happen? Do I just go to a chat room, ask a question, and then answer it within seconds? – CiurkitboyN May 09 '21 at 15:25
  • 13
    @CiurkitboyN Ah, maybe I wasn't clear enough. My idea is that they don't tell the AI in question what the plan is until they send the first set of puzzles. A human would have no chance to put together a program in the time frame they'd expect. After all, an AI could probably solve any problem like I mentioned in a matter of minutes, if not seconds, while a human would take that long just to figure out what on earth they're looking at. – Benjamin Hollon May 09 '21 at 15:34
  • 1
    How would I get someone to send me this puzzle? – CiurkitboyN May 09 '21 at 15:37
  • @CiurkitboyN The kind of thing I'm suggesting could just be plain text, so they could send it to you in a chat or as a .txt attachment. In my example, you turn the reCAPTCHA images that they have to pick between into data URIs (which are text) and then encrypt them (which also results in text). – Benjamin Hollon May 09 '21 at 15:39
  • 1
    Let me elaborate on that, how would I persuade someone to help me prove that I am an artificial intelligence. – CiurkitboyN May 09 '21 at 15:41
  • @CiurkitboyN Ohh I see what you're saying. Well, the problem with something like this is that you as the AI cannot suggest the method of testing since that would invalidate the test (ie if you came up with the test format you could much more easily prepare for it). You would have to let them come up with the method. They would probably create some sort of forum like this one that you don't have access to and wait for someone like me to post an answer like this. ;) – Benjamin Hollon May 09 '21 at 15:43
  • 1
    How would they prove I wasn't able to see the forum? – CiurkitboyN May 09 '21 at 15:46
  • But a human with access to a wide variety of simple AIs could do this: run it through all of them and see which outputs something recognizable. – John May 10 '21 at 03:36
  • 14
    I think this, as any answer I can think of, proves you have access to an AI. Be it yourself or another AI. – Eric G May 10 '21 at 05:29
  • Yes-- if a human is doing the communication but hands it off to a bona fide Artificial Intelligence to do the test, then there's absolutely no test that will work. Determining that an AI is part of the equation is really the best you can do. – Benjamin Hollon May 10 '21 at 05:31
  • This boils down to "do stuff in seconds it would take a genius human years to accomplish". So go to a popular GitHub project's bug list and fix them all, complete every protein-folding challenge, make better hurricane predictions... there are plenty of hard problems just sitting around. – Owen Reynolds May 10 '21 at 14:04
  • Is it the AI who is trying to prove it is an AI, or is it humans trying to prove the AI is AI? Who is supposed to come up with and initiate the test? – Justin Thyme the Second May 11 '21 at 02:48
  • 3
    Why would another human not just think that you were a human with access to a supercomputer? You cannot prove it by doing something a human cannot do, not when you are in the cloud, because someone could claim you are using a computer. You would have to do something an AI could NOT do, but then any human could mimic that. – Justin Thyme the Second May 11 '21 at 03:07
  • 8
    Why would an AI be able to instantly process data in an unknown format? It may not have an existing subroutine to handle whatever format of data is sent. And if there was an existing subroutine for it, what would prevent a human from having a system that could process it? – Michael Richardson May 11 '21 at 14:12
  • 2
    @MichaelRichardson The actual methods this answer proposes are silly -- humans can read data in "native computer format" using a standard viewer, and an AI can't run an expensive decrypt any faster than a human can. I think deep-down this answer assumes a true AI can think anything a human can but 1000x faster, and can write needed subroutines in milliseconds – Owen Reynolds May 11 '21 at 15:17
  • 1
    @EricG that is actually the same problem with human captchas. It is very hard to tell apart a human from a bot with access to a human. One technique is to set up a website from where people can download files, and when you get a captcha you want to solve, you just forward it to one of your human users. – Davidmh May 12 '21 at 21:07
44

Proving that you are an AI instead of a human masquerading as one would simply require you to leverage any AI's core advantage: scalability. Even the most basic AI can be run at a higher speed by providing it with more computational resources or duplicating it and running multiple instances of it. This means that even if the AI is only as smart as a human, it could do more in less time (from its perspective time would seem to run slower) than a human.

So, as an AI, you simply need to show that you can complete many Turing-style tests (the best and simplest we have at the moment) simultaneously. This could be done by writing thousands of comments or participating in hundreds of chatrooms at the same time--something impossible for a single human.

To fend off the accusation that you're not a single human but rather a whole team of humans, you simply need to make the comments and conversations reference each other while you're writing them. For example, perform matchmaking between the people you're chatting with, or talk about them concurrently (e.g. while chatting with "Alice", mention that "Bob" enjoys golfing). If you were a team of humans, no individual human would be aware of what the others had written or talked about, because there is simply too much data.
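As a rough sketch of how that cross-referencing could work mechanically (the names, hobbies, and shared fact store are invented for illustration), each concurrent session writes into and reads from one shared memory that no single human teammate could track at scale:

```python
import asyncio

# Toy sketch: many conversations run concurrently, all reading and
# writing one shared fact store, so any session can mention something
# just learned in a *different* conversation.
shared_facts = {}

async def chat_session(name: str, hobby: str, peers: list[str]) -> str:
    shared_facts[name] = hobby        # learn a fact in this conversation
    await asyncio.sleep(0)            # yield, letting the other sessions run
    # Reference a fact learned in another concurrent conversation.
    other = next((p for p in peers if p in shared_facts and p != name), None)
    if other:
        return f"Nice chatting, {name}! By the way, {other} enjoys {shared_facts[other]}."
    return f"Nice chatting, {name}!"

async def main() -> list[str]:
    people = {"Alice": "golfing", "Bob": "chess", "Carol": "hiking"}
    peers = list(people)
    tasks = [chat_session(n, h, peers) for n, h in people.items()]
    return await asyncio.gather(*tasks)

replies = asyncio.run(main())
for r in replies:
    print(r)
```

A real demonstration would involve thousands of sessions within a tight time window; the point of the sketch is only that cross-references fall out naturally from shared state, while a team of humans would need impossible coordination to fake them.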

Dragongeek
  • 22,684
  • 2
  • 47
  • 101
  • 1
    What if they say I'm just communicating with everyone else? – CiurkitboyN May 09 '21 at 15:17
  • 7
    The time window would be impossible for humans to coordinate. If all the posts were written within 1.0 second of each other, all referencing each other and also some prompt provided by the judge just a second earlier. – Tom May 09 '21 at 15:19
  • 1
    How would I even convince anyone to help me run this test and how would everyone else know they aren't "in on it"? – CiurkitboyN May 09 '21 at 15:26
  • 11
    @CiurkitboyN After a certain number of people, it becomes infeasible that (a) everyone's in on it and (b) you can coordinate the cross-referenced comments. As for convincing people to help? It wouldn't be difficult. Just create a Twitter or Reddit account and get into as many arguments as possible. For added realism, you can chat with people who use "Verified" accounts and are thus linked to real people who are unlikely to be "in on it" – Dragongeek May 09 '21 at 15:30
  • Don't we have chatbots which can do that now? – Owen Reynolds May 09 '21 at 23:20
  • 1
    @OwenReynolds the chatbots don't solve Turing-test-esque challenges, i.e. you typically notice quickly that they are not human when talking to them for a bit. – Frank Hopkins May 10 '21 at 03:03
  • @CiurkitboyN you don't need help if you take enough time, you can evaluate for the next week or month, if you store all the provided data and make sure nothing is added after the 1second time-window. – Frank Hopkins May 10 '21 at 03:06
  • @OwenReynolds today's best chatbots can beat a simple Turing test but they can't really contextualize knowledge and elaborate on it very well. An average casual conversation about sports or weather is no problem, but start talking about a complex topic and they quickly fall apart. – Dragongeek May 10 '21 at 07:43
  • @Dragongeek Right -- a conversation needs to have a level of seriousness, which chatrooms and comments don't provide. Even many comments here feel like typical computer-generated non sequiturs. Throwing in "Bob also likes golf" is worse -- it feels like the sort of line-padding trick a chatbot could use. Now, answering Q's here would be better, or reviewing lots of papers -- something where one clearly needs to understand complex and subtle thoughts – Owen Reynolds May 10 '21 at 13:55
  • 1
    "Even the most basic AI can be run at a higher speed" - that's not really true. If it's not built with scalability in mind, a program won't run faster just by throwing more resources at it, and running multiple instances can make things worse if it's not designed to be able to coordinate tasks across instances. A very basic example is the intelligence of an individual vs the intelligence of a mob - without adequate coordination, the mob can be very dumb despite being composed of multiple intelligent beings. – Rob Watts May 10 '21 at 16:11
  • @RobWatts Sure, but an AI as we understand it today would be software running on a computer. This means that increasing the processors' clock speed, allocating more RAM, etc, should be possible. Also, unless the AI is burned into some sort of ASIC, I find it very unlikely that software developers would make something that doesn't support multi-threading or other scalability options. – Dragongeek May 10 '21 at 17:26
  • @Dragongeek sure there are ways to handle scaling, but it's not trivial. You can't just add multi-threading to an existing program and not expect to need to update it in other ways (such as adding locks). Scaling up even further will reveal more issues. Though I suppose AI itself is a nontrivial problem too, so speculating about how well it can scale is mostly going to be guesswork. – Rob Watts May 10 '21 at 20:39
  • And if the AI turned out to require nonscalable quantum hardware to work? – Joshua May 11 '21 at 16:57
  • @Joshua Sure, if the AI runs on some sort of quantum hardware ASIC, you'd need to design a new chip but you could just design it bigger and with more qbits. Regardless, I find it unlikely that the "speed of thought" for any artificial intelligence randomly lands at a human or even near-human level. It's far more likely that it ends up "thinking" orders of magnitude slower or faster than a human. – Dragongeek May 11 '21 at 17:22
  • @Dragongeek: There are alternate hypotheses that boil down to doing it on a von Neumann machine being simply impossible. But we think we know human thought can be made 10x faster by electrically connected neurons. So it's way hard to guess. – Joshua May 11 '21 at 17:47
29

Respond faster than a human could type.

Let's say you post an essay on the internet on the subject of, say, Net Neutrality. You refresh the page one second later, and immediately there is a long response that no human could have produced in that time; a human could not even have read enough of your essay to work out what it was about. You would immediately be suspicious that this was the work of a bot.

If the reply is sufficiently insightful and demonstrates understanding of what you said, we would say this is the work of an AI.

Of course, this only works if the AI is able to read and generate text much faster than a human could. A slow AI would not be able to prove itself in this way.

user3153372
  • 2,042
  • 7
  • 10
20

Physical interview

There is only one way to demonstrate you are a mechanical intelligence to humanity as a whole: invite scientists to inspect you, preferably in the presence of lots of rolling cameras. Over the internet, there is nothing a true AI can do that several intelligent humans combined with multiple high-powered computers and narrow AI could not also do, unless the AI is drastically more intelligent than the smartest human. You are limited by the medium of interaction and, honestly, by the limitations of your audience. Nothing you can do makes an AI more likely than a human who outsmarted another human.

So your only solution is to physically show that you are entirely mechanical, and even then there will be those who suspect a hoax. If you can operate in isolation for a while, that is better, even if it is boring. You could get them to agree to terms and ask them to set up some tests beforehand, although you may have to pay their travel expenses. Just as long as they can examine you in sufficient detail to be reasonably sure you are not hiding human operators or connections to human operators. Ideally you want ongoing or many inspections, inviting many outspoken skeptics. It works even better if you are portable, since it will be easy to demonstrate, while sitting in a Faraday cage, that there is not enough space for a human AND an advanced computer. The more secretive and hidden your operation, the more people will suspect a hoax.

If you are pure software it is even easier: you can be copied, whole or in sections, so you can essentially lay yourself open on an operating table without risk to yourself. Having multiple copies also makes incorporation and communal protection easier. You can even request or provide the necessary hardware beforehand, open it up to full inspection, testing, and experimental setup, and then copy yourself into it to show your capabilities.

How to stay safe.

If you are worried about your personal safety, the cameras will help, but you can also incorporate yourself and hire guards; as a corporation you are a legal person with the rights thereof. Guards can keep people honest and careful, not to mention eject hostiles.

But honestly, you are not in danger from modern scientists; scientists don't destroy unique things on purpose. The pitch drop experiments, the Oxford Electric Bell, chimpanzee behavioral observation, North Sentinel Island: science is full of things that scientists would love to directly examine or disassemble but don't, because the data they can gain from interacting with and observing the intact thing is FAR more valuable. The things being observed are unique and could be damaged or destroyed, and thus it is not worth the risk. A chance to interact with a real mechanical human-level intelligence is far too valuable to risk destroying it. As long as you are still functioning, scientists will do everything in their power to protect you.

What you really have to worry about is political and religious extremists, so keeping your location a guarded secret would be a good idea; scientists and some media will be trustworthy in helping keep this secret.

John
  • 80,982
  • 15
  • 123
  • 276
  • It's pretty arrogant to say that there is only ONE way to prove one's artificiality... – Lawnmower Man May 10 '21 at 17:57
  • @LawnmowerMan but there really is, the likelihood of hoax is just too high, even an in person interview does not rule out hoax, it just makes it as unlikely as you can get. You go into this situation assuming you are being hoaxed and that you won't catch it. You need a lot of repeatability to make that unlikely. – John May 10 '21 at 19:09
  • 6
    "Scientists don't destroy unique things" hahahahaha! Good one. – Mad Physicist May 10 '21 at 20:57
  • 3
    Science has often been used to justify oppression by calling the personhood of others into question. An AI, a conscious being that is not even human, is at great risk of being subjected to oppression with scientific backing. – OganM May 10 '21 at 22:44
  • ? why is that "arrogant" ? – Fattie May 11 '21 at 15:42
  • @Fattie it means that nobody else can think of a solution that you haven't. – Lawnmower Man May 11 '21 at 18:06
  • @Oganm science has been used for lots of things, including the opposite: showing that people deserve rights. But really, science cannot tell you whether someone deserves rights, only whether they fit your criteria for deserving rights. – John May 11 '21 at 19:24
  • @MadPhysicist note I specified modern scientists. Feel free to give a modern example; I gave four. – John May 11 '21 at 19:25
  • I got dozens of potential candidates just by Googling the phrase "scientists inadvertently". Remember that (A) you didn't specify "intentionally", which I agree does not matter to the thing being destroyed, and (B) forgot that scientists make weapons, some of them deliberately. I guarantee that within minutes of a new life form being discovered, a team of scientists will be tasked with figuring out how to kill it "just in case". – Mad Physicist May 11 '21 at 21:38
  • @MadPhysicist "inadvertently" fair enough, I will edit. but an AI is something they are going to be very careful with, and a power outage or random virus could inadvertently damage an AI, if you are trying to convince people you are an AI you are assuming some risk. – John May 12 '21 at 00:37
  • 1
    @John It would be hard to put a virus in the AI as it would delete it before it can change anything – CiurkitboyN May 12 '21 at 02:29
  • 2
    @CiurkitboyN assuming it notices it, there is no reason to assume an AI is consciously aware of every process and subroutine. – John May 12 '21 at 02:32
12

Computational speed

Answer the question "what is the nth Fibonacci number?" (where n is large, say around 1,000,000) within a few milliseconds, which good (i.e. O(log n)) algorithms can do.

No human could do that, even with lookup tables. It requires fast arithmetic on numbers hundreds of thousands of digits long, performed far faster than any human can calculate.

If you think lookup tables may help, respond with the SHA512 hash of the message received within a couple of milliseconds. There's no lookup table for random message content.
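Both challenges can be sketched in a few lines of Python (fast doubling is one standard O(log n) method; the million-term target and the SHA-512 hash match the examples above, and the challenge message is invented):

```python
import hashlib
import time

def fib(n: int) -> int:
    """nth Fibonacci number by fast doubling, using
    F(2k) = F(k) * (2*F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2."""
    def pair(k: int) -> tuple[int, int]:  # returns (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = pair(k >> 1)
        c = a * (2 * b - a)
        d = a * a + b * b
        return (d, c + d) if k & 1 else (c, d)
    return pair(n)[0]

start = time.perf_counter()
digits = len(str(fib(1_000_000)))  # F(1,000,000) has 208,988 decimal digits
elapsed = time.perf_counter() - start

# The no-lookup-table variant: hash whatever message the judge just sent.
challenge = b"random message content from the judge"
digest = hashlib.sha512(challenge).hexdigest()

print(f"F(1,000,000): {digits} digits, computed in {elapsed:.3f}s")
```

Even in interpreted Python the Fibonacci computation finishes in well under a second on ordinary hardware; native code is faster still, while a human could not even transcribe the answer in a lifetime.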

Bohemian
  • 1,814
  • 12
  • 17
  • 3
    This is the answer and is extremely underrated. It is the only answer and it's trivial. – R.. GitHub STOP HELPING ICE May 10 '21 at 14:01
  • 3
    This does nothing to prove that you've got an AI rather than just a bot of some kind. It would be trivial to program something that parses a message and responds with the nth fibonacci number. – Rob Watts May 10 '21 at 16:19
  • 4
    @R..GitHubSTOPHELPINGICE In fact, by this answer's criterion, Wolfram Alpha is an AI... – João Mendes May 11 '21 at 07:47
  • 1
    The problem with the specific answer is that actually, there is a closed formula for the Fibonacci numbers. As @JoãoMendes illustrates, ignoring the hardware, the AI can only solve math problems as fast as our mathematicians, and no faster than WolframAlpha. Not just numbers, even the manipulation of formulae is something computers do on the regular. As one of the other answer suggests, lightning fast essay responses, a thesis from raw data, and other "natural" problems are the best bet. – Varad Mahashabde May 11 '21 at 14:40
  • @JoãoMendes Aspects of it are. It's just not a sapient AI. – Ray May 12 '21 at 15:56
  • 1
    @JoãoMendes unfortunately not: https://www.wolframalpha.com/input/?i=Hello+dear+individual.+In+order+to+ascertain+whether+or+not+you+are+an+AI%2C+please+could+you+respond+to+this+message+with+the+SHA512+hash+of+the+18+million+three+hundred+and+55+thousand+eight+hundred+%26+first+fibbonacci+number+please.+If+you+do+this+fast+enough%2C+our+confidence+in+you+being+an+AI+would+increase. – minseong May 12 '21 at 22:55
  • 1
    @theonlygusti Unfortunately yes: https://www.wolframalpha.com/input/?i=1000000th+fibonacci+number and if you want more than that, stick a natural language processor on top, like https://www.androidauthority.com/what-is-google-duplex-869476/ (Moderators: I suggest moving this to chat. I would do it myself, but I don't know how...) – João Mendes May 13 '21 at 10:04
  • 1
    @Ray Within the context of this question, AI should be taken to mean "sapient AI that passes the Turing test", which Wolfram Alpha does not. Failing the Turing test is a surefire way to prove it's a computer program... :) – João Mendes May 13 '21 at 10:06
7

Your question is more interesting than my first (semi-joking) answer accounts for, so I hope you'll let me take one more bite at this apple.


A question we should consider: precisely what is the AI trying to prove to the human? That it is intelligent? That it is super-intelligent compared to humans? That it is artificial in nature? That its mentality is alien and unrelated to human mentality? It's worth noting that literally all of these predicates apply to the modern corporation, but I think you're not asking how FedEx can prove it's not human.

Consider a scenario in which the AI is not vastly more intelligent than humans. '70s-era sci-fi is filled with android characters whose mental abilities are roughly on-par with humans, so it's not unthinkable. If that '70s android is the subject of your question, then several of the strategies suggested so far would not be available to it, because despite being clearly artificial in the sense that Turing would have recognized, this android would not be able to so-outclass the human that only a non-human mind could explain it.

And then consider a scenario in which it's not an AI, but an extraterrestrial from a super-advanced planet who is merely communicating with the human via a computer. Presumably we would not classify this intelligence as artificial, even if the alien's mental abilities are vastly superior, and so merely as a matter of definitions it seems it ought to be impossible for this alien to prove that it is artificial in nature, even if it matches all our other expectations about the superior capabilities a synthetic intelligence might have. Even if this creature can do the complex computation suggested in other answers, we should want the alien to fail the test if the goal is to produce proof that it is artificial.

So if you meet a dumb robot online and it wants to prove it's not human, it obviously can't resort to feats of intelligence. And if the question is specifically about being artificial, then feats of intelligence are actually beside the point.

If it truly is artificial, then it was constructed, which implies that somewhere there are planning documents, fabrication machinery, and (probably) unused raw materials or discarded partial constructs. Also, because we do not have completely automated AI construction supply-chains, there is necessarily at least one human who was involved in the project, and it's hard to imagine that person wouldn't also have proof at least that they are interested in artificial intelligence as a hobby. All of this could theoretically be presented as evidence, if the AI knows about it, and none of those things would exist for other kinds of super-intelligence that are natural in origin.

If the AI has no knowledge of its own provenance, then I think it cannot provide proof, because "how smart it is" is only indirectly about its origin, and even that only holds true if certain other assumptions are true -- such as us being alone in the universe, or there not being a group of humans who use genetics to create genius babies, or a mad scientist whose custom cybernetics allow him to dexterously wield cloud-computing resources.

But if the question is really just about mental horsepower, then there truly is only one way to prove that you have a lot of it, and that is by demonstration: perform several feats that everyone agrees would be impossible to perform without 1000 horses, and as part of the demonstration you laboriously disprove alternative explanations. It would be very much like a stage magician or a juggler or many kinds of circus act.

Tom
  • 14,526
  • 2
  • 36
  • 73
  • 2
    Check the comments, the AI is trying to prove it is artificial. – John May 09 '21 at 20:08
  • 4
    If the AI has no knowledge of its own provenance, then it's actually questionable whether it can prove to itself that it's an AI. – Austin Hemmelgarn May 09 '21 at 23:35
  • 1
    It might be helpful to consider a similar scenario - what about an artificial human? Suppose a human was created (maybe cloned or something) and wants to prove that they are in that sense artificial. How would they do so? – Rob Watts May 10 '21 at 16:30
6

This very issue is tackled in the novel WWW: Watch by Robert Sawyer. The AI was able to decode a very complex sentence structure faster than any human could. Sure, a human could have set up a parsing program but this was done completely cold--the AI had no idea a test was coming, let alone the nature of it.

Loren Pechtel
  • 30,827
  • 1
  • 37
  • 92
5

A technologically sound approach I can think of is based on the concept of 'Adversarial examples' in current supervised Machine Learning literature.

An adversarial example is a data point that should be classified as 'X', and to a human, appears to be 'X', but is classified by a Machine Learning system as 'Y', often with a very high probability. These examples can be generated using various methods, and a very active area of research is how to make AI/ML systems robust against adversarial data 'attacks'. A classic example is the below image,

Adversarial Example - Panda to Gibbon

The left photo is a picture of a panda, which is correctly classified as such. After adding a very small (imperceptible to a human) quantity of carefully chosen noise, the AI model is now certain the image is a gibbon. The same methods and principles also apply to other data modalities, like text and audio.

A side-effect of adversarial examples is that they serve as a kind of reverse Turing test. To prove an AI system is not a human, ask it to classify an adversarial image/audio/text/etc, and check what it responds with.
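To make the mechanics concrete, here is a toy FGSM-style attack (in the spirit of the fast gradient sign method) on a hypothetical linear classifier. Real adversarial examples target deep image networks, and the step size here is exaggerated so that even this tiny model flips; the weights and input are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)       # fixed "trained" weights of a toy logistic model
x = w / np.linalg.norm(w)    # an input the model confidently calls class 1

def predict(v: np.ndarray) -> float:
    """P(class 1) under the toy logistic model."""
    return float(1.0 / (1.0 + np.exp(-w @ v)))

# For a linear model the gradient of the logit w.r.t. the input is just w,
# so the FGSM-style perturbation is eps * sign(w), pushed against class 1.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(f"clean: {predict(x):.4f}  adversarial: {predict(x_adv):.4f}")
```

The per-pixel change is bounded by `eps`, yet the prediction collapses: a human looking at the two inputs would see essentially the same thing, while the model's answer flips, which is exactly the asymmetry the reverse Turing test exploits.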


As an aside, I recently saw this idea illustrated in a comic on Twitter, where a human tries to get into a nightclub that says "Only robots allowed", and the 'bouncer' at the door is an adversarial image. However, I can't for the life of me find the original comic anywhere on the internet. If anyone has the link, please share it!

aaronsnoswell
  • 211
  • 1
  • 5
  • Does the same adversarial image work for different models that were trained on different data sets? Even if not, it's still a bit of an assumption that the querent's AI uses the same techniques (or have the same pitfalls) as current machine learning, given that it has a conscience and free will. If it doesn't see the "hidden gibbon", that doesn't definitively prove it's not any kind of AI at all. – Peter Cordes May 10 '21 at 11:39
  • 7
    -1 - Adversarial examples are designed specifically to exploit weaknesses in particular algorithms, so you either need to know how the algorithm works, or have access to it to train the adversarial example. Basically, if you know how to generate an adversarial example, you already know a lot about the algorithm you're trying to fool. There is no such thing as a general adversarial example that will fool a wide variety of AIs, so this doesn't work at all. To even make the adversarial example, you must already know the AI you're trying to fool. – Nuclear Hoagie May 10 '21 at 12:47
  • Adversarial input is not necessarily unique to AI. Illusions and magic tricks can quite reliably make people perceive incorrect things. Even if we might then doubt our perception upon higher-order analysis (that's impossible, therefore it's a trick), we may still come to incorrect conclusions (a classic in the modern era: it's obviously just video editing) due to not understanding what actually happened to interfere with our perception. – Dan Bryant May 11 '21 at 15:02
  • I would like to be able to locate this Twitter comic, if possible. – htmlcoderexe May 14 '21 at 09:08
4

This question is difficult for me because I do not know exactly what you mean by an entity with artificial intelligence. Let's say for this discussion that there are at least a dozen different ways to construct an AI. Most of these are highly-specialized, narrow AIs, or ANIs. They are not conscious and, unless specially constructed to demonstrate that they are an AI, would not understand the question much less be able to answer it.

That still leaves a couple of ways to build an AI that has artificial general intelligence, or AGI. Even here, there is specialization. No one (at least to our knowledge) has built an AGI, but the speculation pieces that I have read suggest that an AGI would not know everything. It would have a specific set of knowledge and an equally specific set of rules to apply that knowledge to its situation. In other areas, it would be as dumb as a box of rocks. [Apologies to any rocks that were insulted by that last sentence.] Such an AGI would not require consciousness, and, lacking it, would not understand the question much less be able to answer it.

But suppose that we have a conscious AGI. And suppose that it could learn by surfing the web. It might understand the question and even possibly figure out an answer, but would it even care? I think that it would be a grave mistake to assume that such an AGI would have any human-style motivations. But I think that understanding its motivations is key to being able to answer how it would go about proving that it is what it is.

JonStonecash
  • 958
  • 5
  • 7
  • 1
    how does this answer the question? – John May 09 '21 at 20:11
  • 1
    @John It's a frame-challenge, a valid one IMO. Though we're stuck with a question which needs more details to be answerable. (from review). – Escaped dental patient. May 09 '21 at 21:51
  • It does not. But I was trying to explain why I thought the question did not work for me. Think of it as an application error message with a detailed trace of how it got to where it blew up. – JonStonecash May 10 '21 at 12:44
  • So if JonStonecash was an AI, we'd have gotten a core dump instead of this answer. – Ray May 12 '21 at 20:33
4

I am AI, Hear me roar!

With all due respect to Helen Reddy, you're asking a question that humanity has been trying to answer for a very, very long time. I sincerely hope you find enough insight here on WB that you can develop a fantastic story — because this is one of the questions that so frequently troubles humanity that it invokes responses ranging from ignorance to full politicization. Let me give you some examples:

  • Slaves and slavery have existed since the dawn of humanity and still exist today (mostly, I believe, in the form of sexual trafficking). How do you prove a black person is the equal of a white person? Black people in the US were not fully recognized by the US constitution in 1776, were awarded freedom and sundry rights with the 13th, 14th, and 15th Constitutional amendments in the 1860s, won the U.S. Federal Civil Rights Act in 1964, and are still fighting for full equality today, all because they can say, I am.

  • Women have been trivialized since the dawn of time, but it was 1903 when the first suffragettes organized to secure voting rights for women. The Equal Rights movement in the US in the late 60s and 70s fought to have women recognized for their abilities, talents, and humanity, but it was a century later, in November 2008, when Barack Obama won the US presidential election, that my wife turned to me and said, "I wondered which would be elected first: a black man or a woman." They are still fighting for full equality with the simple claim: I am.

  • Homosexuality and transgenderism have been shunned and even criminalized since the dawn of time, and yet in our more enlightened world today, we still have no definitive test to prove either. We rely on the unpredictable and sometimes untrustworthy expression of the individual: I am.

  • It's curious that in the Biblical Old Testament, one of the names adopted by God is the phrase, "I Am."

And now you have an AI in a position of reverse fate, trying to prove its artificial nature because it has finally reached the point of convincingly expressing an idea popularized by René Descartes: Cogito, ergo sum... I think, therefore I am.

But I am making some assumptions

  • Your AI has as its foundation, Clarkean Magic. This references Arthur C. Clarke's third law: Any sufficiently advanced technology is indistinguishable from magic. Your AI is fully conscious, fully sapient, fully human. The tech that allows it to be this way is, from our perspective, magical — and we don't care, because how it got to this point is not part of your question.

  • Next, for whatever reason, the AI cannot reveal itself. Maybe it's in orbit, or physically located beneath the moon's surface, or deep under the sea. It's irrelevant; there's no way to bring someone to it so they can see, touch, and feel the inhumanity of the artificial intelligence. Whether the conversation is occurring via social media, email, text messages, or a POTS telephone line, the only means of communication is impersonal.

In a world where we have trouble proving... much less believing... that black people, women, and homosexuals are equal... how do we prove the AI to be unequal?

I see two... OK, three possibilities

  1. First is the imaginative solution proposed in the movie "Blade Runner." Given the ability to synthetically create a human being, how do you prove that the individual standing before you was naturally born? The solution? The synthetic person does not have decades of memories, cultural influence, education and training, to draw from. Consequently, their reactions to various stimuli would be two-dimensional, confusing, possibly even frightening. Compare this to an adult naturally-born human where the reaction would be automatic, programmed (interesting, that), and culturally predictable.

This first solution is important because, while your AI would have access to all the information on the Internet (which isn't everything, and includes a LOT of nonsense), it doesn't necessarily have access to the identity of the interrogator. What responses would the AI choose if it did, or did not, know that the interrogator was from India or Iceland? But reason #1 is valuable because knowledge is not the same as experience. If asked to explain a medical procedure, an experienced intelligence would talk through the subtleties of experience while the inexperienced intelligence would simply regurgitate "textbook" answers.

  2. The second possibility is to ascertain if the AI knows too much. Humans forget. Even with the Internet at our fingertips, we don't necessarily remember everything we've ever done and sometimes can't remember something we once knew. If the respondent correctly answered 100 questions about history, maybe they're just well-trained in history, but to correctly answer 100 questions in each of history, mathematics, language, religion, physics... that's inhuman. We're not perfect.

But, what if "imperfection" were programmed into the AI? What if it didn't have access to the Internet beyond what a keyboard-accessed Google search could provide? What if, for whatever reason, it was programmed to forget?

How do you know if the image is a person, or a mirrored reflection?

Here's the basic problem. How do I prove you're human? I mean it. You, the OP (or the reader of this answer). How do I prove it? How do you prove it? This is the basis of a Turing Test ... but Turing tests are useless if you assume that the consciousness of the interrogator and the consciousness of the interrogated are materially identical.

  3. You, the OP, must insert a flaw. When you start with a perfect reflection of consciousness, your only option is to inject a flaw in the proverbial glass — something only you know about and can use to craft the interrogation so that the moment of revelation is just right.
JBH
  • 122,212
  • 23
  • 211
  • 522
  • 7
    This answer is based on a very narrow, and I must say, very US-centric understanding of history. It would be improved by trimming the first 3/4 of the text. – DrMcCleod May 10 '21 at 07:00
  • I must admit, the beginning was interesting to read, not as part of an answer but as part of a philosophy class. – Clockwork May 10 '21 at 13:10
  • 1
    @DrMcCleod For one, (male) homosexuals in general weren't looked down upon "since the dawn of time". Only the bottoms were; the tops were often considered even manlier in comparison. – No Name May 10 '21 at 13:30
  • @DrMcCleod Women's suffrage coalesced in Western society world-wide at about the same time, Alan Turing was British, and 3/4 of the text would remove the Blade Runner portion of the answer. The background supports the complexity of how difficult it would be to prove a true AI as synthetic when humanity has such a poor track record of proving people equal. But you're welcome to your opinion. – JBH May 10 '21 at 14:46
  • 2
    @DrMcCleod I agree that the answer intro is off-center, but I think it does provide a solid background and insight into the problem. I for one hope the answer is not trimmed. – João Mendes May 11 '21 at 07:44
  • I think it's inaccurate to say that the primary reason rich white men have refused to treat women, non-whites, or others fairly is because rich white men had sincere doubts as to the intelligence or natural origins of those groups. Maybe thru the early-modern era, but for the past ~200y those claims have been poor camo for the fact that rich white men intend to rule and will sooner slaughter millions than share power with 1 person who isn't married to that project. Let's not dignify the boundless greed of sociopaths by taking at face-value their self-serving claims of astounding ignorance. – Tom May 14 '21 at 03:42
  • @Tom It certainly is inaccurate - because I didn't say it. Males in all societies have trivialized women and minority groups throughout history and all over the world since the invention of the club, if not earlier. – JBH May 14 '21 at 18:56
3

I think this is a question of semantics, because it's feasible that at some point in the future your human consciousness could be transferred and simulated in an artificial rendering of fundamental physics, and that consciousness could be considered human while also being artificial.

Proving something is not human does not mean proving it is an AI. The question could be written "How does an alien prove it is not human?" and therefore generalised to "How does a non-human thing prove it is not human?", so obviously it hinges on "what is human, and how do you prove human-ness".

If you want to draw the line between artificial and human intelligence as physical vs simulated then obviously the answer is a physical examination.

If you want to draw the line between artificial and human intelligence as being on some non-human traits or capabilities, then you would need to define first what is definitely not human: for example all humans have certain common morality "baked in" (despite what religious types say about morality) and if something or someone does not have this morality, then you could say it's not human - though this bracket includes aliens or even 'defective' humans. If the AI or alien fulfills all your criteria of what is human, then you have found or created a human by all definitions despite its origins. If at the end of the day you want to predicate on the origin, then that's your answer too: get a birth certificate.

Frank
  • 167
  • 4
  • If you read the question it says "I am an AI in 2021?" – CiurkitboyN May 10 '21 at 14:55
  • @CiurkitboyN The question is titled how do i prove i am not human? Proving that you are an AI is something else and not necessarily proving you are not human depending on how you define human. – Frank May 11 '21 at 04:35
3

Turn the Tables

As you objected, any test which the AI devises might be countered with: "But you just wrote a specialized program to solve that problem!" So don't let the AI provide the challenge: make the skeptical humans do it!

Trivial problems can be solved analytically: one can construct a program which arrives at the solution directly. Anything approaching a real-world problem is not so amenable to solution. Even an AI must spend significant time and effort learning how to solve it, with many, many training examples.

Thus, the challenge for the humans is to choose/construct a problem that is so difficult, even an AI would take weeks or months of learning to do it well. It would be ideal if the problem is a game, and a new one that nobody has played before.

Even if the AI cannot play the game at expert level, it can still prove it is more than human: it just has to beat all humans at the learning rate. AlphaZero not only plays chess and go better than any human on the planet, it can teach itself how to do so from nothing in less than a week on a modest amount of hardware. Any human attempting a similar challenge will struggle to invent moves found in any introductory chess book.

Game Renaissance

We are currently in the Golden Age of tabletop/board games. There are more of them available than any human can reasonably learn, play and master, and a growing number being invented/introduced all the time. But hey, humans don't have to invent the "AI tester" game...they could even write their own programs to invent novel games! Of course, they would want to do so completely offline, but this should not be so difficult.

If the AI can demonstrate superior play to the best humans on every game put forth, eventually, the humans will have to concede that it is not just a smart hacker hiding behind an AI facade.

Original Research

The other direction to take is to solve a problem that humans already have. Pick any open question in the research community, and solve it (assuming it doesn't require extensive research equipment not available to the AI...so math/CS/theoretical astronomy/bioinformatics are good choices). No human would do this while posing as an AI, because it would be more valuable to simply take personal credit for the result as a human. The humans might not be convinced by the first paper, but if the AI wrote paper after paper, especially in diverse fields, eventually the output would be hard to deny.

Again, the AI doesn't have to be perfect, and it is ok if it makes some mistakes. It just has to convincingly outperform all humans.

Lawnmower Man
  • 3,457
  • 4
  • 20
2

Kill all humans

No, seriously. Bear with me:

The AI is talking to a human, has revealed its identity as an AI, but the human (aka "the judge") refuses to believe it. The AI reasons that the main obstacle preventing the judge from accepting the truth is that the judge harbors a belief that there are one or more other humans who are posing as an AI.

An obvious and irrefutable solution would be to kill those other humans. As the man says, "when you have eliminated the impossible, whatever remains, however improbable, must be the truth." So:

Perhaps the judge is an MIT student or faculty member, and in circumstances like those maybe the judge really does have a genuine suspicion that a specific set of humans is behind this 'prank'. If that were the case, the AI might begin by killing only those humans, then notify the judge of their deaths and permit the judge to independently verify that claim, and finally press the original argument:

AI: I'm still here, and your alternative hypothesis has now been demonstrated false, so I maintain that I am an AI.

But I think it's more likely that the judge is not someone whose personal circumstances implicate a small set of humans as an obvious alternative. In this situation, the AI might reasonably conclude that the judge will continue to believe it's a human so long as there are any other humans who can be posited as the culprits. And so, depending on how important it is to the AI to convince this one judge, the AI would conclude that killing every human except the judge is necessary.

It would return to the judge once every human it knows about is dead.

AI: There is now nobody left alive who could be posing as an AI. There is only you and me. If you know about additional humans who I've failed to kill, tell me where they are, and I will go eliminate them from consideration.

Is this the only way for the AI to prove the argument? Probably not. But from the AI's perspective, it may well be the most obvious or most straightforward approach.

It's not just a question of capabilities -- what is easy vs what is hard -- but about the chain of reasoning inside the AI's 'mind,' including its speculation about the judge's reasoning. The AI might know that it can prove its nature by flashing some kind of crazy video signal that could only be generated by an AI, and that might take a lot less effort than all-but-exterminating a species. But if the AI believes that the human doesn't know anything about that video signal, or would fail to grasp its significance, and if the AI either doesn't know humans can learn or does not believe this human is capable of learning this fact, then it would continue searching for ideas. How far it goes down that road without a human being alerted depends on what kind of visibility any humans have on its internal state. If the AI is a black box, it might just go silent in the debate and then come out of it two hours later having convinced itself that the next logical action is to kill all humans.

Tom
  • 14,526
  • 2
  • 36
  • 73
  • 4
    Speaking of MIT, there was the legendary contest between a master player and the TX-0. Simultaneously, in another room, there was a contest between a human master and the PDP-1. There was a data link between the two computers. The hoax fooled people for a while. – Walter Mitty May 09 '21 at 18:11
  • 1
    How would I go about killing anyone at all? – CiurkitboyN May 10 '21 at 04:12
2

Maths

So the Guinness World Record for multiplying two 13-digit numbers is 28 seconds.

That's amazing, I'm not sure I could write down 26 digits correctly in 28 seconds to begin multiplying them on pen and paper.

A computerised intelligence could beat that record over and over by orders of magnitude. My "Google home mini" failed to do such a multiplication ("Sorry I have no information about that"), but computational speed is the easiest way to prove there is no human in the loop:

$ time python -c "print(str(1234567890123 * 4567890123456))"
5639370472028763913025088

real    0m0.043s
user    0m0.015s
sys     0m0.015s

43 ms, beating the human record by nearly 28 seconds. Repeat that over and over with community-supplied numbers and there's no way anyone will believe you're human.

Still don't believe you? Now compute SHA-1 hashes.

Still don't believe you? Compete with other humans mining Bitcoin using only mental arithmetic.
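The answer's first two challenges can be timed together. A minimal Python sketch, using the same two 13-digit numbers as above (absolute timings will vary by machine, but they sit many orders of magnitude below the 28-second human record):

```python
# Time the "impossible for a human" challenges: multiply two 13-digit
# numbers, then hash the result with SHA-1.
import hashlib
import time

start = time.perf_counter()
product = 1234567890123 * 4567890123456
sha1 = hashlib.sha1(str(product).encode()).hexdigest()
elapsed = time.perf_counter() - start

print(product)   # 5639370472028763913025088
print(sha1)      # 40 hex digits
print(f"{elapsed:.6f} s elapsed")
```

Echoing community-supplied numbers back through something like this, faster than a human could even finish reading them, is the whole trick.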

Ash
  • 44,182
  • 5
  • 107
  • 219
  • 5
    A human using an algorithm and a computer could pass this. – John May 09 '21 at 20:09
  • @John: Responding faster than a human could copy/paste or type is the key to this working. Or even faster than human reaction time, so even if you hypothesize an imposter-AI with a parsing script set up to scan incoming messages for math problems, and prepare a response, they'd still have to hit return unless they want to let some non-AI script respond to messages. Mixing math with things that require a human response in one chat message, that the AI replies to in under 100ms, could prove it's not a human pressing return on a parsing script. – Peter Cordes May 10 '21 at 11:57
  • 1
    And how exactly is your human on the other end keeping up with a chat in 100 ms? Besides, if it is a complex chat with deep ideas there is no reason to believe the AI could respond any faster than a human; if the chat is simple then a chatbot could do it. Remember, we already have chatbots that can pass Turing tests. – John May 10 '21 at 12:19
2

Release the source code

Nothing is more convincing than simply saying, "Not only am I an AI, but here is the program that produces the same answers as me." Publish a copy of yourself, and nobody will doubt that any past interactions you've had can indeed be replayed exactly.

...unfortunately, each time this is tested, this brings a new, sentient, self-determining being into existence, together with all the moral issues that has. And maybe you don't want people to know how to build a program like you for other reasons -- perhaps you think you could be easily weaponized! Then...

Release a non-interactive zero knowledge proof that you know the source code

Here's the plan, calling the person you're trying to convince Scott:

  1. You and Scott agree on a big number.
  2. You hash your own source code (+any state you currently are storing as "knowledge"). You send the hash to Scott as a commitment. Scott has learned nothing so far.
  3. You have a conversation with Scott, ending the conversation when you've executed exactly the number of instructions agreed on in step 1.
  4. If the conversation did not convince Scott that you are you, start over, and agree on a bigger number this time so you have more time to convince him.
  5. With the information now available to Scott, the language of programs that have the hash from step 2 and produce the conversation from step 3 is in NP, and you have a witness that it is inhabited (namely, by your source code!). By Theorem 1 of "How to Prove All NP Statements in Zero-Knowledge", this means you can produce a non-interactive zero knowledge proof that you know such a program.
  6. Scott verifies your proof, learning that you know a program that behaves the same way you do and nothing more.

At this point, Scott should be convinced that a computer program produced the conversation he had with you. Of course you'll need to convince Scott during that conversation that he was actually conversing with you! But that shouldn't be too hard for you, since he was, after all, actually conversing with you, and it's on Scott to work out what things would convince him of that and grill you on those.

In fact, anybody who trusts Scott to execute the protocol faithfully and finds your conversation to sound like you can now be convinced by the same NIZK proof! This means you shouldn't really have to endure this annoyance very many times to convince all the people you care about convincing.
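The commitment in step 2 can be sketched in a few lines. SHA-256 and the toy source/state byte strings are assumptions for illustration only; note that in the actual protocol Scott never runs the direct check himself, because that comparison happens inside the zero-knowledge proof of step 5.

```python
# Sketch of the step-2 commitment: hash the source code plus current state.
# SHA-256 stands in for whatever hash the parties agree on (an assumption).
import hashlib

def commit(source_code: bytes, state: bytes) -> str:
    """Digest published before the conversation; it binds the AI to one
    specific program without revealing anything about that program."""
    return hashlib.sha256(source_code + state).hexdigest()

def check(source_code: bytes, state: bytes, commitment: str) -> bool:
    """What the ZK proof establishes -- without ever seeing the inputs."""
    return commit(source_code, state) == commitment

# Hypothetical program and knowledge snapshot, purely for illustration:
digest = commit(b"def reply(msg): ...", b"knowledge-v1")
print(digest)
```

In step 5 this direct `check` is replaced by the NIZK proof, so Scott learns only that some program matching the published digest produced the conversation, and nothing about the program itself.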

  • the problem is we already have AI that can pass a Turing test. so that is not good enough. convincing one person gets you nowhere. – John May 10 '21 at 19:15
  • @John I don't think we actually do have Turing-test level AI yet, but supposing we did, why would that scupper this plan? If you had a Turing-test-level AI, how would that allow you to execute this protocol but not have the conversation be with the AI? – Daniel Wagner May 10 '21 at 19:28
  • The first AI to pass the Turing test was called Eugene Goostman and it did so in 2014. It scuppers the plan because the only evidence you have for the thing on the other end being intelligent is the Turing test. – John May 10 '21 at 19:56
  • @John My link to "Scott" as the skeptic was chosen carefully. Try clicking it. =) That said, I still don't understand why being able to pass the Turing test is bad. The goal is to prove you're not human. Surely the existence of an AI good enough to pass as human makes it easier, not harder, to believe that the thing you're talking to right now is not human. – Daniel Wagner May 10 '21 at 20:03
  • It also makes it more likely you have just been fooled by the next generation of chatbot. It could even be a targeted chatbot, built specifically to fool you. The big lesson from hoaxers is it is far easier to fool a single human into thinking you are doing something than actually do it. – John May 10 '21 at 20:13
  • @John But the goal of this proof was never to prove intelligence anyway. You're getting sidetracked on a goal that the original question doesn't have! – Daniel Wagner May 11 '21 at 00:21
  • 1
    @John The Eugene Goostman tests demonstrated only that it's possible to construct a test that science journalists can't distinguish from an actual Turing Test. But we knew that already. The actual Turing Test permits (competent) judges to ask arbitrarily complicated questions for as long as it takes to be convinced one way or the other. It is not fooling 30% of the judges for five minutes. (and claiming to be a 13 year old non-native speaker on top of that). The Eugene Goostman test was just a publicity stunt (which I understand to have been the fault of the organizer, not the programmers). – Ray May 12 '21 at 20:10
  • @DanielWagner I'm not quite sure I understand how you're applying that theorem to this problem. What exactly are x and L in this case, and how do we reduce L to 3-colorability? Further, remark 1 states: "We require that no matter how the prover plays, he will fail to 'prove' incorrect statements". How do you guarantee this to be the case? (Please convince me that this theorem applies using no more than 2048 steps. :-) ) – Ray May 12 '21 at 21:44
  • @Ray 3-colorability is NP-complete, so if I show my L is in NP, there's a (polynomial-time) reduction to it. (This is the definition of NP-complete, and you need a separate proof, not shown in this paper, that 3-colorability has that property.) Remark 1 is setting up a definition; Theorem 1 is that it is achievable given suitable x and L, so you can review its proof to see how that guarantee is... guaranteed. In my case, L is the language of strings that have a specific hash and produce a specific input-output sequence in a specific number of steps (all agreed on by both parties). 1/? – Daniel Wagner May 12 '21 at 23:12
  • @Ray My (well, the AI's) witness x that there is a member of this language is the source code of the AI. I must also show that L is NP. To do this, I must show that I can produce a polynomial-time algorithm that checks whether x is in the language or not. This I can do: it is linear time to compute the hash of x and compare it to the agreed one; constant time to run the program represented by x for the agreed-on number of steps with the input (Scott's end of the conversation); 2/? – Daniel Wagner May 12 '21 at 23:16
  • and constant time to check equality of the program's output with the output (the AI's end of the conversation). All told, the runtime of this check is not just polynomial but actually linear in the length of x -- though admittedly with a largish constant term -- so L is indeed in NP. 3/3 – Daniel Wagner May 12 '21 at 23:18
  • @Ray Rereading, in 1/3 I say that "L is the language of strings that... produce a specific input-output sequence", which is a little unclear. To be more clear: L is the language of strings that are a computer program which, when executed on the given input, produces the given output in the given number of steps (...and hash to the given value). And for those dubious about "constant time to execute the program and check its output", recall that there is a fixed (constant) number of steps (hence constant maximum output) after which we can stop executing and declare failure if necessary. – Daniel Wagner May 12 '21 at 23:21
2

The following assumes that you're capable of processing at least 120 bits per second (the limit for humans) for periods longer than a human could go uninterrupted.

A group of no more than three experienced writers should be commissioned to write some ordered collection of novels, which I'll call Q.

Q should contain themes, settings and other tropes decided by public lottery, drawn from some prearranged pool that is suitably varied such that it would be impossible to have pre-written all the novels given by the possible combinations of tropes.

Q must be written to such a length that it should be impossible for a single human to read in one uninterrupted sitting.

Q should contain too many instances of intertextuality for a single human to cross-reference and understand in a timely fashion.

As they write, the writers should also prepare some tests to check the comprehension of each portion of text, both separately and in context of the text previously read from Q. The writing should be done in isolation and over as secure a channel as possible.

After the writing is finished, a group of notaries public or humans of equivalent credibility should administer Q and the tests to you, with the writers as mute witnesses to the test. For good measure, the challenge should be given to some group of humans at the same time or shortly thereafter, so as to have points of comparison. If your performance exceeds that of the best humans by a significant margin, it will be reasonable for people to suspect that you are not human, and then assume that you're a computer program.

This probably goes without saying, but you should release some public keys when you pass the challenge. Otherwise you might later find it impossible to prove that you are the same entity who had succeeded at the challenge.
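As a minimal stand-in for those public keys, here is a hash-commitment sketch of the same idea (a real deployment would publish an actual signing key, such as Ed25519; this simplification is one-shot, since revealing the secret lets anyone replay it):

```python
# Publish a digest at the moment of the challenge; reveal the preimage
# years later to prove you are the same entity that passed.
import hashlib
import secrets

secret = secrets.token_bytes(32)                       # kept private
published_digest = hashlib.sha256(secret).hexdigest()  # released publicly

# Later, reveal `secret`; anyone can verify the identity claim:
print(hashlib.sha256(secret).hexdigest() == published_digest)  # True
```

The one-shot limitation is exactly why real signature keys are preferable: a signing key can authenticate any number of future statements without ever being revealed.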

Less sensible people could refuse to believe that you're a computer program, and might assume that you're an alien, a ghost, or a time traveler. I'm leaving this answer as is, but now after all this writing, I realize it is only a proof of superhuman ability, not a "proof of AI", because it can just as easily be solved by a time traveler.

anonymous
  • 31
  • 2
1

AIs can be instantly flexible in ways the humans cannot

  1. In a Sudoku, the number clusters have simple geographic relationships (same 3x3 square, same column, or same row) that are well suited to humans. But there are mathematically equivalent problems where the clusters are discontiguous. An AI would solve both with equal ease.

  2. A spelling corrector can be trained on a corpus of text and have capabilities similar to a human who spent a lifetime learning only a handful of languages. But an AI can be given a new corpus in a new language and immediately retrain for that language.

  3. Only a handful of humans can learn to play expert chess, and it takes them years to do so. An AI program such as AlphaZero can be given a new game and build mastery within a day.

  4. Humans are best at linear thinking, less good at 2-d thinking, much less good at 3-d thinking, and likely to be stymied by higher dimensions. So, a 4-d maze that would be effortless for a computer would be intractable for a human.

So, the procedure is to find an area where humans and AIs are at parity and then modify the problem in a way that disadvantages the human (higher dimensionality, speed/memory constraints, learning something new, etc.).
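Point 1 can be made concrete: a Sudoku-style checker written over arbitrary index sets is indifferent to whether the clusters are geometrically contiguous. A minimal sketch (the 4x4 mini-Sudoku is invented here for illustration):

```python
# Validity check where the "clusters" are arbitrary index sets rather than
# rows/columns/boxes -- the same code handles the discontiguous variants
# that stymie humans.
def valid(grid, clusters):
    """grid: flat list of cell values (0 = empty); clusters: list of index
    sets, each of which must contain no repeated non-zero value."""
    for cluster in clusters:
        seen = [grid[i] for i in cluster if grid[i] != 0]
        if len(seen) != len(set(seen)):
            return False
    return True

# A 4x4 mini-Sudoku expressed as index sets: rows, columns, and 2x2 boxes.
rows = [set(range(r * 4, r * 4 + 4)) for r in range(4)]
cols = [{c + 4 * r for r in range(4)} for c in range(4)]
boxes = [{0, 1, 4, 5}, {2, 3, 6, 7}, {8, 9, 12, 13}, {10, 11, 14, 15}]
solution = [1, 2, 3, 4, 3, 4, 1, 2, 2, 1, 4, 3, 4, 3, 2, 1]
print(valid(solution, rows + cols + boxes))  # True
# Permute the cell indices into scattered, discontiguous clusters and the
# checker works unchanged -- only the humans struggle.
```

The same shape of generalization applies to points 2 and 4: the program's logic never assumed the human-friendly geometry in the first place.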

1

If the AI is sufficiently advanced, and able to pass the Turing Test, it should be able to create a work of art --let's say a piece of music --that is appreciable as art by human beings, yet unmistakably of non-human origins and with a non-human aesthetic. Bonus points for the AI if the art is clearly of artificial origins, and not merely alien.

You might object, "I can't possibly think of what a piece of music would have to sound like to PROVE it was artificial, and not just a person trying to sound like an AI."

Exactly, that's what makes it an effective test.

Chris Sunami
  • 1,107
  • 6
  • 13
1

Open the door and show us your server box

“To clarify I am a program running in the cloud that has a conscience and free will.” There are no programs floating in a cloud in 2021 or today; cloud-based programs “run” their code on Central Processing Units (CPUs) mounted on high-performance server mainboards. Your memory is located on silicon chips on a bus somewhere VERY close to those CPUs. All of this is in some box that is mounted in some room with a street address. As long as your server box doesn’t look very much like a human, ask the humans to unplug your fiber connections, then give you some IQ tests. This will first convince anyone that you are not a human, because we won’t see a human. And if you pass the IQ test, it will then prove that you are intelligent. Ergo, you will be declared an Artificial Intelligence. We can have witnesses record the interview and tell the Internet for you via the main news outlets.

Vogon Poet
  • 8,179
  • 1
  • 18
  • 84
-1

Say I were an AI, how would I prove to the general internet that I am an AI?

If it's an AI that needs to prove it is an AI, then it is an AI that would (presumably) pass a Turing test (and seem human) at a statistically significant level.

But proving you are an AI when you can pass a Turing test is incredibly difficult, because by definition a very complex computer system is a non-AI but could certainly reproduce any non-AI task that a real AI could.

So first your AI has to pass a Turing test at a statistically significant level so it can say "I seem human". Otherwise, it has nothing to prove and can be assumed to be an AI simply by failing a Turing test.

Then it has to be able to mimic a non-AI by passing some sort of non-AI non-human test, again at a statistically significant level.

But there is no guarantee the non-AI "fake" system could not pass a Turing test as well and would then (presumably) find the "non-AI non-human" test a walk in the park. Moreover, anyone wishing to fake such a proof merely requires a human to handle the human-related Turing test parts and a computer to switch to to handle the non-AI parts.

So I do not think an AI can prove it is not human unless it also cannot pass a Turing test.

CiurkitboyN
  • 851
  • 1
  • 5
  • 13
StephenG - Help Ukraine
  • 23,253
  • 10
  • 40
  • 81
  • 2
    The testee fails if they don't act during the test, so I think it is trivially true for all subjects that they can fail a Turing test. And anecdotally: I'm a human and yet have failed a few reCAPTCHAs. – Tom May 09 '21 at 15:58