Flawed and Forced Perspective
- celestial body
- Dec 10
- 3 min read
No post-production VFX or AI models are needed to fool us. Despite all our self-assured claims that "we can always tell what's real," our egos cannot alter the path of light to our eyes.
The moment our world is captured by a single lens it is translated into two dimensions, and in that transformation our ability to tell what is real falters.
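To make that flattening concrete, here's a minimal sketch of the pinhole projection a single lens performs (the 50mm focal length and the point coordinates are illustrative numbers, not anything measured):

```python
# A minimal sketch of the pinhole camera model: a 3D point (x, y, z),
# z metres in front of the lens, lands at image coordinates (u, v),
# and the depth z gets divided away in the process.

def project(x: float, y: float, z: float, focal_length: float = 0.05):
    """Project a 3D point onto the image plane of an (assumed) 50mm lens."""
    u = focal_length * x / z
    v = focal_length * y / z
    return u, v

# Two very different points -- the second is twice as big and twice as far --
# land on exactly the same spot of the sensor:
print(project(1.0, 1.0, 4.0))  # (0.0125, 0.0125)
print(project(2.0, 2.0, 8.0))  # (0.0125, 0.0125)
```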
The reason is that we rely on binocular vision to judge depth most accurately. The differing positions of our two pupils create a disparity between the images our brain merges, and those points of disparity give the brain the contrast it uses to infer depth. The brain factors this in alongside the muscle contractions that dictate the pupils' degree of convergence -- how far your eyes rotate inward to focus on things close to you. When we film with a single lens, we are essentially removing one of the eyes that captures the scene, and the depth cues that come with it.
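For the curious, that disparity signal boils down to one classic stereo relation. A quick sketch (the ~6.3cm baseline between the pupils and the focal length are assumed, illustrative values): the nearer a point is, the larger the shift between the two images, and that shift is exactly what a single lens can never record.

```python
# Sketch of the stereo disparity relation: disparity = f * B / z,
# where B is the baseline between the two "eyes" and z is the depth.
# The baseline and focal length below are assumed, illustrative values.

def disparity(z: float, baseline: float = 0.063,
              focal_length: float = 0.05) -> float:
    """Horizontal image shift between two eyes for a point at depth z metres."""
    return focal_length * baseline / z

# Nearer points produce bigger disparities -- the brain reads that as depth:
for z in (0.5, 2.0, 10.0):
    print(f"depth {z:>4}m -> disparity {disparity(z) * 1000:.2f}mm on the image plane")
```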
Now, if you're reading that thinking "but I can still tell the depth of things in films and on TV," you'd be absolutely right. That's because we have multiple depth cues beyond binocular vision, but it's also largely because most content isn't trying to trick your depth perception. If filmmakers wanted to, though, they could; they'd just have to consider and account for those other depth cues: the appearance of converging parallel lines, the overlap and relative size of objects, and, most difficult of all, motion parallax.
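To show how gameable the relative-size cue is, here's a quick sketch (the actor heights, distances, and lens are all hypothetical numbers picked for illustration): a half-size stand-in at half the distance photographs identically to the real thing, which is the whole trick behind forced perspective.

```python
# Relative size on the image plane: projected_height = f * height / distance.
# All numbers are hypothetical, chosen to make the trick obvious.

def image_height(object_height: float, distance: float,
                 focal_length: float = 0.05) -> float:
    """Height an object projects onto the sensor (inputs in metres)."""
    return focal_length * object_height / distance

print(image_height(1.8, 4.0))  # 0.0225 -- a 1.8m actor, 4m from the lens
print(image_height(0.9, 2.0))  # 0.0225 -- a 0.9m stand-in, 2m from the lens
```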
Those first few cues are pretty easy to adjust for: you make some props and carefully plan your set dressing. It's motion parallax that makes things difficult. A static shot doesn't introduce any parallax, rendering the issue null, and with repeatable, programmable camera moves it's very possible to account for that parallax by moving parts of the set and the camera in sync (though not as simple as I'm making it sound).
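For the curious, here's a rough sketch of that synchronized-move idea under a deliberately simplified model: a straight lateral camera track, no rotation, and a subject that is really at one distance but meant to read as another. The derivation and the numbers are my own illustration, not how any particular production rigged it.

```python
# Simplified parallax compensation: if a subject sits at true distance d_true
# but should read as being at apparent distance d_app, then as the camera
# tracks sideways by camera_dx, the subject's platform must slide by
# camera_dx * (1 - d_true / d_app) to keep the projected geometry consistent.
# (A negative result means the platform moves opposite the camera.)

def platform_offset(camera_dx: float, d_true: float, d_app: float) -> float:
    """Lateral platform move that hides the parallax mismatch."""
    return camera_dx * (1.0 - d_true / d_app)

# Camera dollies 0.5m right; the subject is really 8m away but is supposed
# to look like they're sitting 4m away:
print(platform_offset(0.5, d_true=8.0, d_app=4.0))  # -0.5 -> slide 0.5m left
```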
Look, the point of all this isn't to give an intimate working knowledge of how to perform these camera tricks (if you want that, go watch the Fellowship of the Ring BTS Appendices, followed by the Corridor Crew video on this topic).
My point is that when all these things are accounted for, the image captured in camera warps reality in a way that's indistinguishable from magic.
I know some of you are shrugging at this; in a world of CGI and deepfakes, you're wondering what's so important about making something look "real" in a movie scene when computers make that happen all the time. But it's not the fact that it looks real in the final rendered scene that matters, it's the fact that it looks real in camera, live. Hell, it would look real if you closed one eye and lined yourself up with the camera's intended position (and focal length).
I find this fascinating not simply because it's an incredible practical film effect (and we all love practical effects), but because it demonstrates just how malleable our perception of the world is. Perhaps more importantly, it highlights how vulnerable our perception has become. In an age where our outlook is increasingly dictated by the images we see on screens, the fact that those images may misrepresent reality imperceptibly, undetectable in metadata or light-channel analysis, is something we should be aware of.
We often hold technology up as a shield against itself, but we need to remember that there is a world beyond that: our organic hardware has vulnerabilities technology cannot shield us from. Knowing our faults is the only way to account for them.
At the end of the day, anyone's perspective can be forced.