Midweek mobile 15

Modern phone cameras produce sharp and bright images with awful lenses and jokes of sensors. The most important thing about these images is that they are usually viewed on the small screen of a phone. A quick search led me to an estimate that people take 4.7 billion photos every day. Be suspicious of such facile estimates. But it is clear that far less than a percent of a percent of them are ever viewed on a large screen, where defects can show.

I stress tested my phone camera in exactly this way: by looking at its output on a large screen. My phone has a sensor with 4608 x 3456 pixels. I reduced that to 1667 x 1250 pixels for the leader photo. That looks good. But I asked: what if I looked at it pixel for pixel, one pixel of the sensor for every pixel on the screen (1:1)? I did that in the most detailed photo in the slideshow above. The next one compressed 4 pixels of the photo into one on the display (4:1), the next put 16 pixels of the photo into one on the display (16:1), and the next (the featured photo) is shown at 32 pixels per pixel of display (32:1). But for the post I compressed these views a little more; the closest is at 9:1, and the rest are at 36:1, 144:1 and 288:1. The result begins to show digital artifacts in the 9:1 view, although they are not overwhelming (at 1:1 they are unmistakable). Of course, I can’t predict what screen you’ll see them on, but if you have a choice, it would be interesting to look at them on the biggest screen you have.
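If you want to repeat the comparison on your own photos, here is a minimal sketch using the Pillow library; the file name and the crop size are my own choices for illustration, not anything the phone produces.

```python
# A toy sketch of the two ways of looking at the same file. "flowers.jpg"
# is an assumed name for the full-size photo off the phone.
from PIL import Image

img = Image.open("flowers.jpg")                 # e.g. 4608 x 3456 sensor pixels

# View 1: shrink the whole frame for a small screen. Many sensor pixels are
# averaged into every displayed pixel, so most defects disappear.
img.resize((1667, 1250)).save("flowers_small.jpg")

# View 2: cut out a small region and save it unscaled, so that one sensor
# pixel maps to one screen pixel (1:1) and every defect is visible.
w, h = img.size
img.crop((w // 2 - 320, h // 2 - 240, w // 2 + 320, h // 2 + 240)).save("flowers_1to1.jpg")
```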

On a whim I took a photo of a beetle and gave it the same treatment. Here you see the views in the same ratios, starting at 9:1 (nine pixels of the photo to one of the display), with the successive frames showing 36:1, 144:1 and 288:1 compressions. Only the last looks sharp. On my phone the display is even smaller, so the image looks much sharper. But why this big difference between flora and fauna? I compared the exposures first. The flowers were taken with an equivalent exposure of 1/100 seconds at ISO 100; the beetle with 1/50 seconds at ISO 223. This means that roughly twice as many frames were superimposed to give the final image of the beetle. Slight hand movements could create the effect that you see, but the phone is supposed to compensate for that. The ISO is also a factor; you can see more “grain” in the image of the beetle. I think another important factor must be the contrast between the object and the background, which is much smaller in the second photo. I’ll try to explore this further.
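To see why both the number of stacked frames and the ISO matter, here is a toy NumPy simulation. It is my own illustration of the statistics, not the phone’s actual pipeline, and the noise levels are invented: averaging N frames shrinks random noise by about the square root of N, so doubling the frames helps only modestly, and noisier (higher ISO) frames can eat up that gain.

```python
# Toy model of frame stacking: averaging N noisy captures of a static scene
# reduces random noise roughly as 1/sqrt(N). Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((100, 100), 50.0)                    # a flat grey patch

def residual_noise(n_frames, noise_sigma):
    frames = scene + rng.normal(0.0, noise_sigma, (n_frames, *scene.shape))
    return (frames.mean(axis=0) - scene).std()

print(residual_noise(8, 10.0))     # ~3.5: 8 frames of moderate noise
print(residual_noise(16, 10.0))    # ~2.5: twice the frames, modest improvement
print(residual_noise(16, 20.0))    # ~5.0: twice the frames, but noisier frames
```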

If you want a moral, I would say “Don’t look a gift horse in the mouth.” Your phone does not replace a good DSLR in image quality. Be happy with what it shows on its small display.

Phone photography changes our expectation of the interaction of camera hardware and image so dramatically that it is worth rethinking what photography means. I intend to explore this a bit in this series.

Midweek mobile 12

Push anything to extremes and flaws will begin to show up. Cell phone cameras now boast photos with around 100 million pixels, but taken with toy lenses a few millimeters across. The images that the phone gives you are the result of statistical computations based on fuzzy data. The enormous amount of computation needed for building an image (yes, images are now built, not captured) drains your phone’s battery faster than other uses would. How do you actually get to see the flaws of the optics? One way is to push the camera to extremes. Here I look at low-light photography, so that the camera’s ISO boost begins to amplify the problems that the image always has.

The featured photo was taken with the 4.7 mm lens at the back of my phone, with an exposure of 1/13 seconds (which means averaging over an enormous number of captures). The original image had 9248 pixels on the long side. When I compress it down to 1250 pixels for wordpress, the result is a crisp picture. Examine it at larger scales, though, and flaws emerge. The detail shown in the photo above takes a segment which is 830 pixels on the long side and compresses it to 640 for wordpress. The camera chose an ISO of 15047, and there is quite a bit of noise in the detail. You can see lens flare below the arch. Above the arch you can see some of the railing blown out. Those pixels are saturated and nothing you do can bring information out of them. Elsewhere, the railings are full of digital artifacts such as aliasing.
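Blown-out pixels are easy to find for yourself. The sketch below (the file name is an assumption of mine) counts the pixels in an 8-bit JPEG where some channel has hit its maximum value of 255; those are the ones from which no processing can recover any detail.

```python
# Count blown-out highlights in an 8-bit image. "night_arch.jpg" is an
# assumed file name. A channel stuck at 255 carries no recoverable detail.
import numpy as np
from PIL import Image

pixels = np.asarray(Image.open("night_arch.jpg").convert("RGB"))
clipped = (pixels == 255).any(axis=-1)
print(f"{clipped.mean():.2%} of pixels have at least one saturated channel")
```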

In the slideshow above you see an even more extreme case. This is a photo taken in a dark wood on a new moon night, looking for owls by flashlight (yes, this is how I spent my diwali). The camera chose an ISO of 17996 and an exposure of 1/10 seconds. In the most contrasty bits of the photo you can easily see the noise in the image even without sliding into the detailed view. The lens flare in the detail looks cloudy; the AI has tried to erase it without success. It has performed surprisingly well on the face, though. I’m really impressed with the technique of computational super-resolution that it applies.

I close with a less extreme example from earlier in the evening. Here the software chose an ISO of 844 and an exposure of 1/25 seconds. Details are less grainy, as you can see when you zoom into the picture. The road signs are quite clear, if a little too dark to read easily, but the darker areas of the photo show obvious digital artifacts, some of which you can see in the zoom. And you can see the liquor shop in its prize location at a crossroad, blazing with light, open for its business of making the roads less safe to drive on.

Phone photography changes our expectation of the interaction of camera hardware and image so dramatically that it is worth rethinking what photography means. I intend to explore this a bit in this series.

Midweek mobile 2

A mobile camera is not a good camera in the ways that photographers are used to thinking about. The lens is a toy. Four centuries’ worth of lens technology has been junked by two related developments. The most important? That about 95% of the world looks at photos on tiny screens while distributing likes. So you don’t need the sharpness that photographers of old wanted; sell megapixels instead. Those megapixels translate to about 10 Mbytes for the featured photo when my camera saves it. I know from experience that even on my large screen I can easily compress it down to about 200 kbytes and most people would not be able to tell the difference. That means I need to retain only 2% of what is recorded. And on my phone I could easily throw away another 90% of the information (retaining just 0.2% of the original) and no one would be able to tell. Then why so many megapixels? Because when you start from a large format photo and compress it down to a small screen, everything looks sharp.
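The numbers are easy to verify. The sketch below, with file names and a quality setting that are my own guesses rather than a recipe, re-saves a full-size phone JPEG with heavier compression and then at roughly screen size, and prints how much of the original file survives.

```python
# Rough check of how little of a ~10 Mbyte phone JPEG a small screen needs.
# File names and the quality setting are illustrative choices, not a recipe.
import os
from PIL import Image

img = Image.open("featured.jpg")                       # full-size photo off the phone

img.save("recompressed.jpg", quality=60)               # same pixels, stronger JPEG compression
img.resize((img.width // 4, img.height // 4)).save("screen_sized.jpg", quality=60)

for name in ("featured.jpg", "recompressed.jpg", "screen_sized.jpg"):
    print(name, os.path.getsize(name) // 1024, "kbytes")
```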

You might remember that when you last changed your phone the picture quality changed a lot. Is that all due to more pixels? In large part, yes. I dropped my old phone too often and was forced to change it sooner than I normally would. In three years the number of pixels in a photo from a less-than-mid-range phone had gone up from around 10 million to about 65 million. Now look at the featured photo. The architectural details look sharp, considering that the subject is more than 300 meters away and the photo was taken from a car that was making a sharp turn at a reasonable speed. But look at the near-full size blow-up in the photo above. You can see that at this zoom the details are pretty blurred. I have gained the clarity of the featured photo purely by not looking at it at full scale.

But that’s not the only change when you get a new phone. You also get a different AI translating the sensor output into an image. And this technology, which is a guess at what is being seen, is improving rapidly. As a result, the distortions of a bad lens can be interpreted better, and result in a reasonable image. Note that this phone can remove digital noise much better than a five-year-old phone would have done. The darker areas of the photo are much cleaner (the detailed view above is cropped out of the featured photo). Also notice that the new generation of AI deals with non-white faces better than before, getting an impressive image of the man walking towards the camera. This improvement is a response to accusations of biased training of AI.

But another detail is technically very impressive. Notice the level of detail? I can see very clearly that he is not wearing a mask. This resolution is better than a fundamental limit imposed on lenses by the wave nature of light (something called Rayleigh’s resolution limit). This computational super-resolution is a statistical trick which improves the image by making a guess about the nature of the ambient light. The downside of all this AI? This much computation has a carbon cost. When I use my phone only for communication, the batteries last three and a half days. Street photography can drain the charge in a few hours.

Phone photography changes our expectation of the interaction of camera hardware and image so dramatically that it is worth rethinking what photography means. I intend to explore this a bit in this series.

Midweek mobile 1

Night time is the worst time for photography if you have a tiny lens. Anticipating crowds and rain on the eve of Independence Day, I went out to get some street photos, but only took my mobile phone. It does a lot of computation to deduce the shapes and colours of what is recorded. With all that computation that goes on between the light hitting the sensor and an image being saved in memory, newer and faster computational hardware has an advantage.

But did these results actually improve on the physical limitations of the small lens? In one sense they did. When the sensor and the imaging involved chemistry, a small lens exposed less of the chemical on the film. The result was that photos looked dim. We are used to calling such photos under-exposed. The only way to make the image brighter would then be to expose the photo for longer. But that creates a problem we call motion blur. With computation sandwiched between the sensor and the image, there is another way: the brightness can be amplified. I saw that The Family gets a much brighter image with her phone than I do, because her camera software is set to amplify more. So the problem of under-exposure is replaced by that of digital noise: when you amplify, signal and noise are amplified together. Motion blur can still be seen, though; in the featured photo, for instance.
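A toy simulation of that last point is below; it is my own illustration, not the camera’s processing. Multiplying a dim, noisy capture by a gain makes it brighter but leaves its signal-to-noise ratio exactly where it was.

```python
# Toy model of digital gain: brightening by multiplication scales the signal
# and the noise together, so the signal-to-noise ratio does not improve.
import numpy as np

rng = np.random.default_rng(1)
dim_capture = 5.0 + rng.normal(0.0, 2.0, 100_000)      # dim scene plus sensor noise

def snr(x):
    return x.mean() / x.std()

boosted = 8.0 * dim_capture                             # the "amplification"
print(snr(dim_capture), snr(boosted))                   # the two numbers are the same
```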

In another sense, the limitations of a small camera remain. A lens which is half a centimeter across cannot see details smaller than a couple of millimeters at a distance of ten meters. But this fundamental limit of resolution is reached only when the sensor collects light forever. With limited exposure the resolution drops by a factor of ten or a hundred. So the image always has to balance motion blur against lens resolution. You can see this at work (at least on a large screen) in the photo above. The scene was well lit and the camera was not in motion, but the image is not awfully sharp. The computational hardware has prioritized freezing the movement of people by sacrificing the light needed for better resolution.
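The first claim is a quick back-of-the-envelope calculation. The Rayleigh criterion puts the smallest resolvable angle at roughly 1.22 λ/D; with round numbers I have assumed for green light and a half-centimeter aperture, that works out to a little over a millimeter at ten meters.

```python
# Back-of-the-envelope Rayleigh limit for a tiny lens. The wavelength and
# aperture are assumed round numbers, not measurements of my phone.
wavelength = 550e-9     # green light, in meters
aperture = 5e-3         # lens diameter of half a centimeter, in meters
distance = 10.0         # subject distance, in meters

theta = 1.22 * wavelength / aperture        # smallest resolvable angle, in radians
print(theta * distance * 1000, "mm")        # about 1.3 mm at ten meters
```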

I suppose these photos look sharp and bright enough on phones and tablets to gather likes on instagram and tiktok. Perhaps you are in a minority if you view them on larger screens. As it turned out, it didn’t rain, so I could have taken a better camera with me. But technique is what you develop when you have limitations. A mobile phone is less obtrusive when you want to take street photos, so it is a good idea to start using it more widely for serious photography.

Phone photography changes our expectation of the interaction of camera hardware and image so dramatically that it is worth rethinking what photography means. I intend to explore this a bit in this series.