Samsung did a brilliant job implementing 3D capture in their NX300 camera. What differentiates their implementation from others is that it is built around a single lens rather than the two lenses other manufacturers use, and that the same lens can shoot normal photos when set to 2D mode.
Samsung achieved this 3D mode by placing two liquid crystal shutters in the lens. The shutters alternate to produce a tiny shift in perspective, which is enough to create a sense of depth using parallax.
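Parallax between the two shutter views can be turned into distance with the classic pinhole stereo relation, depth = baseline × focal length / disparity. A minimal sketch, with a made-up baseline and focal length (not actual NX300 specs):

```python
# Sketch: recovering depth from the parallax between the two shutter views.
# baseline_mm and focal_px are hypothetical values, not NX300 specifications.

def depth_from_disparity(disparity_px: float,
                         baseline_mm: float = 10.0,
                         focal_px: float = 2000.0) -> float:
    """Pinhole stereo relation: depth = baseline * focal / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_mm * focal_px / disparity_px

# A larger pixel shift between the two views means a closer object.
near = depth_from_disparity(40.0)  # 500.0 mm away
far = depth_from_disparity(4.0)    # 5000.0 mm away
```

With a baseline this small, only large disparities (i.e. nearby objects) give reliable depth, which is exactly why a confidence measure becomes important later on.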
Photographers may not see this as new and may not care much about it. After all, 3D technology has been around for quite some time now and still seems to be struggling to take off. Software engineers, on the other hand, might find it extremely useful if used in a slightly different manner.
What if, rather than creating a 3D image when shooting, we created a z-map (or z-depth) of the snapshot and inserted it into the EXIF data as a BLOB (binary large object) tagged to the original photo? This means that for every shot the photographer takes, the camera actually takes three: the first is the original shot, and the last two build the z-map using the lens shutters.
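Since the EXIF specification has no depth tag, the layout of such a BLOB is entirely up for grabs. A minimal sketch of what the serialization could look like, using only the standard library and a hypothetical `ZMAP` header of my own invention:

```python
import struct

def pack_zmap(zmap: list[list[int]]) -> bytes:
    """Serialize an 8-bit z-map into a BLOB: a small hypothetical
    header (magic, width, height) followed by raw depth bytes."""
    h, w = len(zmap), len(zmap[0])
    header = struct.pack("<4sII", b"ZMAP", w, h)  # 12-byte header
    return header + bytes(v for row in zmap for v in row)

def unpack_zmap(blob: bytes) -> list[list[int]]:
    """Parse the BLOB back into rows of 8-bit depth values."""
    magic, w, h = struct.unpack_from("<4sII", blob)
    assert magic == b"ZMAP"
    body = blob[12:]
    return [list(body[r * w:(r + 1) * w]) for r in range(h)]

zmap = [[0, 128], [255, 64]]
blob = pack_zmap(zmap)
assert unpack_zmap(blob) == zmap  # lossless round trip
```

In practice the blob would be written into a maker-note-style tag by the camera firmware; the point here is only that the payload is compact and trivially parseable.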
A z-map is a grayscale representation of the depth of the objects within a frame: the closer an object is to the lens, the brighter it appears, and the farther away it is, the darker it gets. See the illustration below.
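The near-is-bright convention is just a linear remapping of distance onto 8-bit gray. A quick sketch, with arbitrary near/far clipping planes:

```python
def zmap_from_distances(dist_mm, near_mm, far_mm):
    """Map distances (mm) to 8-bit grayscale: objects at the near
    plane render 255 (white), at the far plane 0 (black)."""
    span = far_mm - near_mm
    out = []
    for row in dist_mm:
        out.append([round(255 * (far_mm - min(max(d, near_mm), far_mm)) / span)
                    for d in row])
    return out

# One scanline: near object -> white, mid-distance -> gray, far -> black.
print(zmap_from_distances([[500, 2750, 5000]], near_mm=500, far_mm=5000))
# [[255, 128, 0]]
```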
As per the current EXIF specs, there is no concept of depth. Most image manipulation software deals with photos as flat 2D matrices of RGB values. Having a z-map tagged onto every photo would make many tools in such software work smarter. Take the spot healing brush tool in Photoshop, for instance. It works by looking at neighboring pixels and trying to figure out how to cover the spot by copying data from them. This does not work well when the spot sits on the edge of an object. With a z-map, however, Photoshop would know instantly that some neighboring pixels have different z coordinates and would exclude them.
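This is not Photoshop's actual algorithm, but the depth filter it would enable can be sketched in a few lines: only neighbors whose z value is close to the target pixel's are allowed as healing sources.

```python
def healing_candidates(z, y, x, radius=1, z_tol=10):
    """Return neighbor coordinates usable for healing pixel (y, x):
    only those whose depth is within z_tol of the target's depth,
    so samples are never pulled from across an object edge."""
    h, w = len(z), len(z[0])
    out = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                if abs(z[ny][nx] - z[y][x]) <= z_tol:
                    out.append((ny, nx))
    return out

# A bright foreground object (z=200) against a far background (z=30):
z = [[200, 200, 30],
     [200, 200, 30],
     [200, 200, 30]]
candidates = healing_candidates(z, 1, 1)
# The three background pixels in the right column are rejected.
```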
Film simulation software would benefit from such data too, since the way light rolls off on film differs considerably from how it behaves on a sensor. Having a sense of 3D space would allow for more accurate simulation.
The lens blur filter is yet another example: with a real z-map, the blur could follow actual distance from the focal plane instead of a hand-painted depth mask.
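A toy version of that idea, assuming the simplest possible model where the blur radius grows linearly with a pixel's z distance from a chosen focal plane (real circle-of-confusion math is more involved):

```python
def blur_radius_px(z_value, focus_z, max_radius=8, z_range=255):
    """Sketch of a depth-driven lens blur: the blur radius grows
    linearly with the pixel's z distance from the focal plane."""
    return round(max_radius * abs(z_value - focus_z) / z_range)

# Focused on the foreground at z=200: it stays sharp, background blurs.
assert blur_radius_px(200, focus_z=200) == 0  # in focus, no blur
assert blur_radius_px(30, focus_z=200) == 5   # far background, strong blur
```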
Since the parallax these lenses produce is rather small for recovering true depth, a confidence level should also be inserted into the EXIF data to indicate the accuracy of the z-map, and to enable more robust systems to be built on top of it.
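One way this could work, purely as a sketch: store a per-pixel confidence channel alongside the z-map, and have tools fall back to ordinary 2D behavior wherever the depth estimate is unreliable.

```python
def usable_z(z, confidence, threshold=0.7):
    """Mask the z-map with its confidence channel: None marks pixels
    where tools should ignore depth and behave as plain 2D editors."""
    return [[zv if c >= threshold else None
             for zv, c in zip(zrow, crow)]
            for zrow, crow in zip(z, confidence)]

z = [[200, 30]]
conf = [[0.9, 0.4]]  # left pixel trustworthy, right pixel not
print(usable_z(z, conf))  # [[200, None]]
```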