Panoramas have had a cult following since before the early days of photography. Special cameras and odd-format film plates were developed for this purpose even in the 19th century. I came to the field in the early days of commercial digital photography, when enthusiasts would spend days stitching together individual exposures with early versions of photo editors. When I tried my hand at it, I quickly realized two important points. First, as you rotate the camera, the illumination changes, and you have to compensate for the differing quality of light in the editor. Second, as you image the same object from slightly different angles, its shape on the film changes slightly, and the edges fail to match when you try to stitch the images together. You can understand this as a simple problem in perspective, but it was hard to compensate for with the photo-editing tools then available.
Now, a smart camera does all this “in the box” for you. On my drive in the Sahyadris, I stopped near a village and took a panorama of a wooded cliff above rice fields. All I had to do was stand firm, hold the phone in my hands, and twist my torso smoothly. The demon in the box took care of the rest: evening out the shading of light across the image, and automatically correcting the distortions of objects due to the changing angle. The second step, the one that was hard to do by hand, has a simple mathematical representation (three-point perspective) which the AI solves rapidly. The result seems good enough that you could wrap it around the walls of your home.
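The correction the camera applies to each frame can be sketched as a planar perspective transform. Here is a minimal sketch in Python with NumPy; the matrix H below is a hypothetical example (a real stitcher estimates it from matched feature points between overlapping frames), and the frame size is an assumption for illustration.

```python
import numpy as np

# A hypothetical 3x3 perspective (homography) matrix mapping pixel
# coordinates in one frame into the coordinate system of its neighbour.
# In a real stitcher this is estimated from matched features; here it
# just combines a small rotation, a horizontal shift, and a slight
# keystone term, as a plausible example.
theta = np.deg2rad(5.0)
H = np.array([
    [np.cos(theta), -np.sin(theta), 120.0],  # rotation + horizontal shift
    [np.sin(theta),  np.cos(theta),   0.0],
    [1e-5,           0.0,             1.0],  # perspective (keystone) term
])

def warp_points(H, pts):
    """Apply a perspective transform to an (N, 2) array of pixel coords."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out w

# Where the corners of a (hypothetical) 4000x3000 frame land in the mosaic:
corners = np.array([[0.0, 0.0], [4000.0, 0.0], [4000.0, 3000.0], [0.0, 3000.0]])
print(warp_points(H, corners))
```

Because straight lines stay straight under this transform but parallel edges can converge, the warped frame's edges line up with its neighbour's, which is exactly the mismatch that was so hard to fix by hand in an editor.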
But my phone records panoramas only as 40 megapixel images, not the 65 megapixels it uses for other photos. This is due to the physical limitations of the small aperture and sensor in the phone camera. I’ve discussed earlier how multiple frames are read out from the sensor and averaged to produce a single image. The same thing happens for panoramas, but since the camera is moving while the image is synthesized, the number of frames available for any single segment is limited. When you examine a regular image and a panorama at the same scale, you can see this clearly. In the image comparison above, both photos use the same number of pixels from the original image; I can zoom in less with the panorama. This is the result of averaging a smaller number of frames in the panorama, and of the limit that fewer frames impose on computational super-resolution. So really, you cannot wrap the panorama from a cheap phone camera around the walls of your home. At least not until the sensor and frame readouts, or the processor and algorithms, improve.
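The effect of averaging fewer frames can be sketched numerically. The scene, frame counts, and noise level below are all made-up illustrations, not my phone's actual numbers; the point is only that noise falls roughly as the square root of the number of frames averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical flat grey scene plus sensor read noise, to show why
# a panorama segment (fewer frames averaged) comes out noisier than a
# still photo from the same camera.
scene = np.full((64, 64), 0.5)

def averaged_capture(n_frames, noise_sigma=0.05):
    """Average n noisy readouts of the same scene into one image."""
    frames = scene + rng.normal(0.0, noise_sigma, size=(n_frames, *scene.shape))
    return frames.mean(axis=0)

still = averaged_capture(12)        # a still can stack many frames
pano_segment = averaged_capture(3)  # a moving camera yields only a few

# Residual noise scales roughly as noise_sigma / sqrt(n_frames), so the
# panorama segment here is about twice as noisy as the still.
print(still.std(), pano_segment.std())
```

The same shortage of frames starves the super-resolution step, which relies on combining many slightly shifted readouts, so both the noise and the resolution of the panorama suffer together.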
Phone photography changes our expectations of how camera hardware and image interact so dramatically that it is worth rethinking what photography means. I intend to explore this a bit in this series.