
In my world, a "perfect" robotic eyeball has been created. This eyeball is electronic, but it emulates a human eye perfectly.

This does not necessarily mean that it has millions of synthetic cones and rods and is physically identical to a normal eye; it just means that whatever it sees, a human with average eyesight would see as well.

A common problem I have seen with cameras is that they never seem to capture the world exactly the way people see it; there's always something wrong, whether the color is slightly off, the brightness or contrast is wrong, and so on.

This robotic eye would avoid all of those problems. It would even avoid that weird moiré effect when pointed at a computer screen.


Is this at all possible?

I am aware some people are colorblind; these robotic eyes would see all of the colors the way an average healthy human being without any eye conditions would see them.

Assume modern-day technology. Also assume that money and resources are no object.


UPDATE: The eyeball isn't intended to interact with human beings. The whole point is to have robots and cameras that work the same way human eyes do or even better.

This robotic eye would even be able to emulate peripheral vision.


In my story, an eye like this is important for several reasons:

  • Robots being able to understand humans and how they perceive the world
  • Being able to render captured footage (whether on a screen, or with virtual reality, etc.) in a way that can be visualized as though you saw it yourself through your own eyes

Footage through a camera like this should look real. There should not be a "This is just a video, I can tell because X, Y, Z". It should be virtually indistinguishable from reality to human beings (assuming that high enough resolutions are available).

overlord
  • Modern technology is working on it, but results are far from perfect: https://artificialretina.energy.gov/ In 20 years, with better sensors, computers, and nerve interfaces, it might be possible. – Bald Bear Nov 04 '19 at 18:48
  • Creating a machine to that degree of specificity would indicate a rather advanced level of technology on the creator's part - so I don't see the point in not making the artificial eye even better than the human eye. Why would the creator limit the potential of this invention? – Snowshard Nov 04 '19 at 18:52
  • Since you said money is no object, then yeah. Since organics can do it, so can synthetics. – Greenie E. Nov 04 '19 at 18:53
  • @Snowshard In my world, they are made much better than the human eye, but that wasn't really necessary information for my question. – overlord Nov 04 '19 at 19:43
  • It's trivially easy to make a lousy camera which captures the same bad image as the human eye of the standard observer; see Lomography. Color differences are irrelevant -- it just means that the electronic brain of the camera or the biological brain of the photographer chose a different white balance than what you would have chosen; and white balance correction happens after the image is captured anyway. The simple truth is that most cameras costing more than a few dollars take much better images than a human eye could ever hope to. – AlexP Nov 04 '19 at 19:52
  • @AlexP the vast majority of cameras have much poorer dynamic range than the human eye. – Starfish Prime Nov 04 '19 at 19:52
  • @AlexP I think you might be underestimating the complexities of the human eye. – overlord Nov 04 '19 at 19:57
  • @StarfishPrime: The human eye can adapt to a large dynamic range, but the dynamic range in any given image is not all that great. You are being misled by the way human vision works. I will try to condense in this restricted space. The image perceived by the brain is composed of multiple images taken by the eye, plus very heavy postprocessing; see saccade for one aspect of this. You can get the same effect from a camera by taking 100 pictures per second with varying sensitivities, and then postprocessing them (a sketch of that kind of exposure fusion follows these comments). The postprocessing is out of scope. – AlexP Nov 04 '19 at 19:58
  • @AlexP getting the same effect by using a fast framerate and applying postprocessing doesn't have anything to do with the camera itself. It is a way around the inadequacies of the input device. – Starfish Prime Nov 04 '19 at 20:01
  • In optics those "complexities of the human eye" are called defects. The eye is a very poor camera. The beauty of human vision is the heavy post-processing in the brain. – AlexP Nov 04 '19 at 20:01
  • @StarfishPrime: But that's what the brain does with the images captured by the natural eyes... – AlexP Nov 04 '19 at 20:02
  • @AlexP Then I suppose what I'm really looking for is a device that does not require the post-processing step. It should capture images in a way that they already look like what the brain would make them look like after post-processing. – overlord Nov 04 '19 at 20:03
  • @AlexP however, the eye has a larger dynamic range than the vast majority of artificial cameras. Regardless of the postprocessing. – Starfish Prime Nov 04 '19 at 20:03
  • @StarfishPrime: Citation needed. A hundred dollar camera has easily a potential dynamic range of 12 stops; a decent 500 dollar camera goes to 14 stops easily. Are you sure that you are not confusing the dynamic range of the camera (which is available to you when you take a RAW image and render it later) with the dynamic range available in the output format (usually sRGB)? – AlexP Nov 04 '19 at 20:06
  • @AlexP a hundred dollar camera might output that level of claimed sensitivity, but it absolutely does not have that sensitivity in a single frame. Outputting 12+ bits per pixel is easy. Providing 12+ bits of actual useful information is vastly harder. My day job involves working with a lot of bits of compact camera hardware. Even the expensive ones are pretty poor. – Starfish Prime Nov 04 '19 at 20:08
  • "Capture images in a way that they already look like what the brain would make them look like after post-processing": the small problem being that we don't know what that is. The major difference between a photograph and a live image is that in the photograph the post-processing and rendering is done twice, once by the camera and once by the brain. Since we cannot pipe the image from the camera directly into the brain, you will always have the issue of the double post-processing. – AlexP Nov 04 '19 at 20:10

3 Answers


Is this at all possible?

Yes. Eyes demonstrate that such a thing is possible. Making photosensors equivalent to our rods and cones is a Simple Matter Of Science And Engineering.

Assume modern-day technology

No. We can't even transplant a real eye yet. Emulating a whole eye in hardware and splicing it into an optic nerve is clearly a more difficult task.


edit following question update

The whole point is to have robots and cameras that work the same way human eyes do

You've conflated two things here... having cameras that give identical performance to human eyes, and having a vision system that works as well as a human's.

The camera bit is an engineering problem... getting the exact same colour response might be a bit fiddly, but entirely solvable. Getting the same dynamic range would be harder, but there's no obvious reason that could not be solved either.
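As an illustration of why the colour-response part is "entirely solvable": one standard approach is to photograph a set of reference patches and fit a 3x3 colour-correction matrix by least squares. A minimal sketch assuming numpy; the function names and patch data are hypothetical:

```python
import numpy as np

def fit_color_matrix(camera_rgb, target_rgb):
    """Fit M so that camera_rgb @ M approximates target_rgb (least squares).

    camera_rgb, target_rgb: (N, 3) responses to the same N reference patches,
    as seen by the sensor and by the 'standard observer' you want to match.
    """
    M, *_ = np.linalg.lstsq(camera_rgb, target_rgb, rcond=None)
    return M  # shape (3, 3)

def apply_color_matrix(image, M):
    """Apply the fitted correction to an (H, W, 3) image."""
    h, w, _ = image.shape
    return (image.reshape(-1, 3) @ M).reshape(h, w, 3)
```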

The vision bit is an AI-hard problem, and with all the power and money in the world we haven't been able to crack it. Eyes are, for the most part, a bit rubbish. They work so well for us because they are attached to a fairly powerful bit of image processing wetware.

or even better.

Then don't make them work like human eyes. What's the point? You're not piping the output into a human brain, so there's nothing to gain.

Starfish Prime
  • I see the confusion... I have now removed the [tag:anatomy] and [tag:biology] tags. – overlord Nov 04 '19 at 19:47
  • The point is so that robots are able to exactly understand humans and understand the way humans perceive things. – overlord Nov 04 '19 at 20:20
  • @overlord you'll be wanting a perfect human brain simulation then (and perhaps more importantly, you should have mentioned this in the question). Weirdly-fancy-yet-compromised cameras ain't gonna cut it. And if you wanted perfect human emulation, why did you ask for the "even better" bit? – Starfish Prime Nov 04 '19 at 21:07

We can already make cameras way better than human eyes

Human eyes are piss-poor optical devices. Our problem with transplanting eyes is the electronics and the neural connection, not the eye itself; that part is easy.

You don't want a camera that sees as well as the human eye; the human eye is shit. It is full of artifacts, distortion, and limitations that have to be edited out by the brain. The moiré effect can be solved just by not using digital output for your camera or scanning: have it output continuously and do dedicated integration after the fact (a digital sketch of the same idea follows). Human eyes feel better because our brain is used to our existing editing; if it has to do more, it stands out. If you transplanted an eye from one human to another, it would also feel horribly distorted until the brain learned to edit the information coming from the new eye.
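In digital terms, that "dedicated integration" is low-pass filtering before sampling, which is what the optical anti-alias filter in front of a sensor does: detail finer than the sampling grid gets blurred away so it cannot fold back as moiré. A toy sketch assuming scipy and a grayscale image; the sigma rule of thumb is illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample_without_moire(image, factor):
    """Blur away detail finer than the new sampling grid, then decimate.

    Frequencies above the new Nyquist limit would otherwise fold back
    into the image as moiré patterns.
    """
    blurred = gaussian_filter(image.astype(np.float64), sigma=factor / 2.0)
    return blurred[::factor, ::factor]
```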

Color is even more obvious: human color vision is crap. Cameras can be built to detect many, many discrete colors, while the human eye/brain has to composite together what is basically only 2.5-3 vague colors. Even many animals (birds and reptiles) have better color vision than humans, just because they see four base colors. We build cameras to complement human eye color bias, but we don't have to build them that way.
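To make that information loss concrete, here is a toy model that collapses a full light spectrum into just three cone signals. The Gaussian sensitivity curves are rough stand-ins, not real CIE data:

```python
import numpy as np

def cone_responses(wavelengths_nm, spectrum):
    """Collapse a full spectrum into three cone signals (toy trichromacy).

    Real cone sensitivity curves are tabulated (CIE); these Gaussians
    merely illustrate how much spectral detail three channels discard.
    """
    peaks = {"L": 560.0, "M": 530.0, "S": 420.0}  # approximate peak nm
    width = 40.0                                   # assumed width, nm
    return {
        name: float(np.trapz(
            spectrum * np.exp(-(((wavelengths_nm - mu) / width) ** 2)),
            wavelengths_nm))
        for name, mu in peaks.items()
    }
```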

Don't confuse the average video with what video is capable of. Most video suffers because it is projected on a flat surface which then has to be re-sensed by our eyes, while normally our eyes see independent 3D objects; it is an issue of playback, not of the cameras.

Now, the human eye does have a few advantages: a curved sensor combined with a curved lens gives less distortion, and only the highest-end cameras can compete with human pixel density (not that we use it all). But those are all fixable. Most of what you mention is software, not hardware; peripheral vision, for instance, is software, not hardware.

John

No, it's not possible, but only because you defined it that way.

You focus greatly on an optical device with precisely the optical behaviors of an organic object. As a general pattern, this cannot be done. Given a particular metric as to what sort of imaging you want, you may be able to do better, but to do precisely the same as an organic eyeball calls for an organic eyeball.

This answer is predicated on your requirement of not post-processing the image. With that requirement in place, you're basically stuck. The particular quirks of any organic system are hard to mimic. It's much easier to post process the effects in.

For example, consider the rods. The pigment responsible for our low-light vision is rhodopsin. It's in a fantastic feedback loop with very precise behaviors. A flash of bright light ruins the calibration of these systems, which is why you lose night vision if exposed to a bright light. It's basically insensitive to red, which is why red lights don't hurt your dark vision. Once tuned, these rods are so sensitive that biologists seriously consider that they might be meaningfully sensitive to single photons.

This precise behavior will be almost completely impossible to replicate. It's too exacting of a system. What we would do instead is come up with a system with better performance than rods and their Rhodopsin, and post-process some of the fidelity away.
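That feedback loop can be caricatured with a single differential equation: bright light bleaches pigment quickly, recovery is slow. A toy simulation, with all rates illustrative rather than physiological:

```python
import numpy as np

def simulate_pigment(intensity, dt=0.1, bleach_rate=1.0, recovery_tau=400.0):
    """Fraction p of unbleached photopigment over time (Euler integration):

        dp/dt = -bleach_rate * I(t) * p  +  (1 - p) / recovery_tau

    A bright flash drives p down almost instantly; climbing back toward 1
    takes minutes, which is why a camera flash ruins your night vision.
    """
    p, trace = 1.0, []
    for I in intensity:
        p += dt * (-bleach_rate * I * p + (1.0 - p) / recovery_tau)
        p = min(max(p, 0.0), 1.0)
        trace.append(p)
    return np.array(trace)

# e.g. darkness, a brief bright flash, then darkness again:
# I = np.concatenate([np.zeros(100), np.full(10, 50.0), np.zeros(3000)])
```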

Any robotic eye meeting your criteria would, in fact, be a human eyeball synthetically grown from human cells, just like ours. That's just the best you can do.

As an aside: you mention the color is wrong. Most of the time the color is more correct than it is for your eyeballs. What you're experiencing is the extraordinary color balancing abilities of our visual cortex. The visual cortex covers for the eyeball a lot!

Cort Ammon
  • This answer was very helpful. So the easiest way to get what I want would be to have the camera do post-processing, then? – overlord Nov 04 '19 at 20:30
  • That's where I'd go. I'd argue it's also where the brain goes as well. The amount of post-processing it does is utterly incredible. – Cort Ammon Nov 04 '19 at 20:32