I have spent countless hours trying to figure out what the process of photography is. How do the eye and mind connect? I am not talking about neurology here, but rather a higher-level, almost primitive understanding of the simple act of taking a photo. I approach it by simply observing myself, the photographer, taking a photo. It goes something like this: I get interested in a scene in front of me. I don’t know why, and I don’t ask why. I visualize how I want to take the shot by deciding on the frame, which includes a quick read of the light, then the depth of field; I decide on the key, look at the geometry and the lines, make sure the point of interest or the idea is clear, and then I take the shot.

Now this may not seem like a lot. However, if you start breaking every step down into the two available scenarios, ‘now I am thinking’ and ‘now I am reacting (or feeling)’, things start to get weird, and taking the shot becomes an impossible task. Thinking blocks the emotions, and art without science tends to be not very interesting. Go figure.

Luckily, statistics works in our favor. If we shoot 5,000 photos just randomly, there has got to be one that is good. You would have to be the unluckiest person on earth not to get one good photo out of 5,000. This is what keeps me going. Don’t stop. I am messing with you. Keep reading…
By definition, the art of photography is a process of selection, or mostly elimination, based on emotional responses. It is therefore safe to assume that everything else is science. When and where does the art start? When does it end? When does the science take over? How do we swing back and forth between the two poles?
Can I skip this science thingy and become a pure artist?
This camera concept answers that last question. With the help of machine learning and computer vision, an advanced camera should be able to quickly analyze the frame and visually overlay the scene with additional metadata describing the frame dynamics, tensions, alternative framings of the scene, the visual path, juxtaposition, clarity of the point of interest, patterns, and so on. This should happen in real time and on demand.
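To make one of those overlay metrics concrete, here is a minimal sketch of what a "clarity of point of interest" score could look like: it measures how close a detected subject’s bounding-box center lands to the nearest rule-of-thirds intersection. Everything here is hypothetical, a toy stand-in for what a real camera’s vision pipeline would compute; the function name, the box format, and the choice of metric are all assumptions, not an actual camera API.

```python
import math

def thirds_score(frame_w, frame_h, subject_box):
    """Toy framing metric: 1.0 means the subject's center sits exactly on a
    rule-of-thirds intersection, falling toward 0.0 as it drifts away
    (the frame center scores exactly 0.5). subject_box is (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = subject_box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0

    # The four rule-of-thirds intersections of the frame.
    intersections = [(frame_w * i / 3.0, frame_h * j / 3.0)
                     for i in (1, 2) for j in (1, 2)]

    # Distance from the subject's center to the nearest intersection,
    # normalized by the largest such distance possible (a frame corner).
    d = min(math.hypot(cx - px, cy - py) for px, py in intersections)
    d_max = math.hypot(frame_w / 3.0, frame_h / 3.0)
    return max(0.0, 1.0 - d / d_max)
```

A camera could paint a number like this next to each detected subject, live, so you see at a glance whether your framing agrees with the rule you are (or are deliberately not) following.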
This might sound like science fiction, but the truth is that what I have described above is very scientific. These technologies are already available in modern cameras in a limited form. Some cameras do scene matching to calculate exposure in complex, wide-dynamic-range scenes. Many cameras can detect faces to nail the focus. Why not detect every object in the scene? If the camera does not recognize an object, teach it how. Who knows, perhaps you could share what you taught your camera with your friend’s camera over Instagram if he pays for your coffee. Pass him your talent over NFC for 10 minutes only and let it expire afterwards. Then sell it as an in-camera purchase at a high price, because he got a taste of it and he liked it.
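The "teach your camera" idea can be sketched in a few lines too. Below is a toy nearest-centroid recognizer: in a real camera the feature vectors would be embeddings produced by a neural network, while here they are plain number tuples, and every name (`TeachableRecognizer`, `teach`, `export`) is invented for illustration, not any actual camera firmware.

```python
import math

class TeachableRecognizer:
    """Toy stand-in for an in-camera object recognizer that the user can
    teach new classes on the fly and share with another camera."""

    def __init__(self):
        self.examples = {}  # label -> list of feature vectors

    def teach(self, label, features):
        """Add one labeled example, e.g. from a frame the user just tagged."""
        self.examples.setdefault(label, []).append(features)

    def recognize(self, features):
        """Return the taught label whose centroid is closest, or None if
        nothing has been taught yet."""
        best_label, best_dist = None, float("inf")
        for label, vecs in self.examples.items():
            n = len(vecs)
            centroid = [sum(v[i] for v in vecs) / n
                        for i in range(len(vecs[0]))]
            dist = math.dist(features, centroid)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    def export(self):
        """Serialize the learned examples -- the part you could pass to a
        friend's camera (and, per the joke above, let expire)."""
        return dict(self.examples)
```

The point of the sketch is the workflow, not the math: teaching is just appending labeled examples, recognizing is a nearest-centroid lookup, and sharing is exporting the example set, which is exactly the kind of small, transferable payload the coffee-for-talent scenario needs.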
All photos shown on the back of the camera are taken from the Zeiss website and are copyrighted.