In the last sixteen weeks I’ve said more or less everything that I can say at this time about taking photos with a phone. So I will end this series today with an upbeat message. The featured photo is of a Yellow-tailed tussock moth (Somena scintillans). The light was good, and the camera has put together a lovely photo of the moth. This is not easy, since the moth sits with its wings folded into a high peak. To get a photo like this with another camera, I would have to do a bit of focus stacking. With its multiple fixed lenses, the phone has done that for me, and given me an image which is as sharp at the peak of the wings as it is at the hairy legs. The colour is also rendered beautifully.
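What the phone actually does internally is a black box to me, but the idea of a focus stack is easy to sketch: from several frames focused at different depths, keep each pixel from the frame in which it is locally sharpest. Here is a minimal sketch of that idea with OpenCV and numpy; the file names are placeholders, and a real pipeline would align the frames first.

```python
import cv2
import numpy as np

# Frames focused at different depths (the file names are placeholders).
frames = [cv2.imread(name) for name in ("near.jpg", "mid.jpg", "far.jpg")]

def sharpness(img):
    """Local sharpness: absolute Laplacian, blurred so the per-pixel choice is stable."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    return cv2.GaussianBlur(lap, (9, 9), 0)

# For every pixel, pick the frame in which that pixel is sharpest.
scores = np.stack([sharpness(f) for f in frames])   # shape (n, h, w)
best = np.argmax(scores, axis=0)                    # index of sharpest frame
stack = np.stack(frames)                            # shape (n, h, w, 3)
h, w = best.shape
rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
cv2.imwrite("stacked.jpg", stack[best, rows, cols])
```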
What the phone did here in good light is possibly the future. I hope that in the next few years large sensors become cheaper. If that happens, I’m sure we will get used to taking photos like this under all light conditions with the little multipurpose box and tracking device in our pockets. I would like to end this series of posts with that hope for the future.
Sunlight on pines and grassland, mountains behind. It was a lovely scene which I captured with my phone. The phone used a lens which is 4.7 mm wide and has a fixed aperture of f/1.7. It reported an exposure of 1/1043 seconds and an ISO of 100. The sensor on my phone has 4608 x 3456 pixels. This is an aspect ratio of 4:3, which I’ll retain in all the experiments I show here. The original jpeg image the phone gave me had 9248 x 6936 pixels. I compressed the image down to 1250 x 938 pixels for the header photo. It looks rather nice on my phone, and also on my laptop screen. The image has areas of bright illumination and areas of pretty heavy shadow. It also has some sharp colour contrasts. I was interested in how the image looks when I zoom in so that one pixel in the photo shows up as a single pixel on my laptop’s display.
Here is a zoom into brightly lit pine needles. I took a section which had 832 x 624 pixels and reduced it to 640 x 480 pixels to show you in this post. All the following zooms are treated in exactly the same way. You can see lots of digital artifacts. The most noticeable is aliasing: smooth lines and curves appearing jagged in the image. The software has teased out a lot of detail both in shadow and in full light, but the jaggedness makes it look somewhat artificial.
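For anyone who wants to repeat this kind of pixel-peeping, the crop-and-reduce step is a few lines with Pillow. A minimal sketch, assuming a hypothetical file name and an arbitrary crop position:

```python
from PIL import Image

img = Image.open("full_photo.jpg")   # placeholder file name
# Cut out an 832 x 624 pixel section (the offsets here are arbitrary)
# and reduce it to 640 x 480 for the post; both keep the 4:3 aspect ratio.
left, top = 2000, 1500
crop = img.crop((left, top, left + 832, top + 624))
crop.resize((640, 480), Image.LANCZOS).save("zoom.jpg")
```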
Next I zoomed into a portion of the dappled shadows. This is where your own eyes will play tricks. The camera captured almost nothing in the deepest shadow and the brightest light, but it did quite a good job even in the lighter shadows, apart from the aliasing problem. The best parts of this zoom are the portions where there is a strong contrast of illumination: bright details against a dark background. But where a dark portion is seen against a bright background you see strange curves and squiggles. This too is due to aliasing.
This zoom shows you a situation where the contrast is in colours, not so much in the level of illumination. Both the sky and the leaves are bright. I’m surprised by the amount of digital noise in the sky, in spite of the ISO being 100. Apart from the aliasing problems, I’m surprised by how soft the pine needles look. This is caused by a problem I’d written about earlier. The image is created by adding together a very large number of separate exposures (a technique called adaptive multiframe image averaging), and the breeze at that height causes the pine needles to move. The softness comes from the motion between the different exposures. This is not a problem that a DSLR has; nor does a DSLR ever show this digital noise in the sky.
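I don’t know the exact algorithm the phone runs, but plain frame averaging, without the adaptive alignment a phone must add, shows both the benefit and the failure mode in a few lines of numpy. The burst file names are placeholders:

```python
import numpy as np
from PIL import Image

# Averaging N frames cuts random sensor noise by a factor of sqrt(N),
# but anything that moved between frames -- pine needles in a breeze --
# gets smeared into softness.
files = [f"burst_{i:02d}.jpg" for i in range(8)]   # placeholder names
frames = [np.asarray(Image.open(f), dtype=np.float64) for f in files]
mean = sum(frames) / len(frames)
Image.fromarray(mean.astype(np.uint8)).save("averaged.jpg")
```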
I also found incredibly bad digital artifacts in a portion of the photo which looked like it should have been easy to capture. The squiggles on the far slopes are due to aliasing. The strange halo around the shadows is another weird algorithmic effect. The light on the branches looks like little bits of paint dabbed on by a bad painter trying to emulate the impressionists. The pine needles are just masses of colour. This zoom makes me think I should never again look at a phone photo blown up pixel for pixel.
If I want sharp details, I should use a DSLR. A phone is what I would use if I wanted a quick snapshot which I would look at only on a little screen which fits in my palm. Conversely, if you want to see the defects in the phone photo, look at these examples on a big screen, not a phone.
Modern phone cameras get sharp and bright images with awful lenses and jokes of sensors. The most important aspect of the images is that they are usually viewed on the small screen of a phone. A quick search led me to an estimate that people take 4.7 billion photos every day. Be suspicious of such facile estimates. But it is clear that far less than a percent of a percent would be viewed on a large screen, where defects can show.
I stress tested my phone camera in exactly this way. My phone has a sensor with 4608 x 3456 pixels. I reduced the image to 1667 x 1250 pixels for the leader photo. That looks good. But I asked what if I looked at it pixel for pixel: one pixel of the sensor for every pixel on the screen (1:1). I did that in the most detailed photo in the slideshow above. The next one compresses 4 pixels of the photo into one on the display (4:1), the next 16 pixels of the photo into one on the display (16:1), and the next (the featured photo) is shown at 32 pixels per pixel of display (32:1). But for the post I compressed these views a little more; the closest is at 9:1, and the rest are at 36:1, 144:1 and 288:1. The result begins to show digital artifacts in the 9:1 view, although they are not overwhelming (at 1:1 they are unmistakable). Of course, I can’t predict what screen you’ll see them on, but if you have a choice, it would be interesting to look at them on the biggest screen you have.
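The two layers of compression simply multiply: the extra reduction for the post shrinks each view by a factor of three linearly, which is nine in area. A couple of lines make the bookkeeping explicit (the function is mine, written just for this check):

```python
# Photo pixels per display pixel: the ratios quoted here are by area,
# so an extra 3x linear shrink multiplies every ratio by 9.
def in_post(sensor_ratio, extra_area_shrink=9):
    return sensor_ratio * extra_area_shrink

for r in (1, 4, 16, 32):
    print(f"{r}:1 on screen -> {in_post(r)}:1 in the post")
# 1:1 -> 9:1, 4:1 -> 36:1, 16:1 -> 144:1, 32:1 -> 288:1
```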
On a whim I took a photo of a beetle and gave it the same treatment. Here you see the views in the ratio 9:1 (nine pixels of the photo to one of the display), with the successive frames showing 36:1, 144:1 and 288:1 compressions. It is only the last which looks sharp. On my phone the display is even smaller, so the image looks much sharper. But why this big difference between flora and fauna? I compared the exposures first. The flowers were taken with an equivalent exposure of 1/100 seconds and an ISO of 100; the beetle with 1/50 seconds and an ISO of 223. If each component frame lasts the same time, the number of frames superimposed to give the final image must be twice as large in the second. Slight hand movements could create the effect that you see, though the phone is supposed to compensate for that. But the ISO is also a factor; you can see more “grain” in the image of the beetle. I think another important factor must be the contrast between the object and the background, which is much smaller in the second photo. I’ll try to explore this further.
If you want a moral, I would say “Don’t look a gift horse in the mouth.” Your phone does not replace a good DSLR in image quality. Be happy with what it shows on its small display.
From darkness to light: I took my phone camera to another extreme when we stopped at a cafe for our elevenses. The phone sensor has 4608 x 3456 pixels. The software multiplies each sensor pixel into 4 image pixels to give me 9248 x 6936 pixels in the image. I reduced it to 1667 x 1250 pixels for the leader photo. The sunlight streaming in through the window, refracted through the glass of the bottles and the water, or reflected off the coffee, would give the camera quite a workout, I was sure. The leader looks very good; the resident AI is designed to build images which look good when compressed down.
If you want to look behind the screen, all you need to do is look at higher magnification. Here I’ve taken two parts of the photo and examined them at two scales, with an aspect ratio of 4:3 in landscape orientation. The lesser zooms show an area which is 3100 pixels across (reduced to 640 for this post). The more extreme zooms show an area which is 780 pixels across (again reduced to 640). You can see lots of digital artifacts: aliasing appears along oblique lines, there is a lot of digital noise in the shadows inside the coffee cup (even though the ISO is only 105), but most of all, the really bright areas are not only blown out but also very irregular. Look at the caustic lines on the surface of the coffee in the image, and then hold a coffee cup up to the light. You’ll see that the bright reflections never look so irregular in real life.
All these artifacts are due to the fact that the lens is about 3 mm wide and the sensor has pixels which are just a micron across. Together they never manage to gather as much light as a good camera does, and a lot of computation is used to compensate for that. The result looks good when viewed on a phone, but when you look at it pixel for pixel it is full of flaws. My conclusion: if you are a serious photographer looking for a phone camera, do not be distracted by how many megapixels an image has. Look instead at the specs and buy a camera with as large a lens and as big a sensor as possible. Measure these sizes in mm or inches, not in pixels or f numbers.
More lessons can be learnt from the experiment I reported last week: pushing the performance of my phone camera to an extreme with very low-light photography. The camera spews out 64 Megapixel images (9248 x 6936 pixels for each photo). I took a segment which was 3300 pixels on the long side of a 4:3 aspect ratio and reduced it to 1250 x 938 for use in this blog. (All photos here use this aspect ratio without further comment, and I quote only the pixels on the long side.) That’s the featured image. We were looking for owls in a dark woodland using a flashlight on a new moon day, and the only lighting on the subject was the flashlight’s reflection off leaves. Not a bad photo given that: you can see the photographer, his shirt, the camera, and his hat. The amazing thing about the photo is its ISO of 17996! That’s the only way the phone has of getting an image with a 1/10 s exposure through a lens that’s less than 5 mm across.
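One way to see how little light there was is to fold the shutter, aperture, and ISO into a single exposure value. A quick back-of-the-envelope in Python, using the f/1.7 fixed aperture of this phone’s main lens:

```python
import math

# Exposure value referred to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).
def ev100(f_number, shutter_s, iso):
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

print(ev100(1.7, 1 / 10, 17996))   # about -2.6
# EV values near -3 correspond roughly to a landscape under a full moon;
# here the only light was a flashlight reflected off leaves.
```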
The photos that you see above come from zooms into 830 pixel wide areas, subsequently reduced to 640 pixels across for use in the blog. The lighter image is taken from near the collar and arm of the shirt in the featured photo, and the darker shows the barrel of the camera. I’m not surprised by the lack of detail, the colour aberrations, and the enormous amount of digital noise in the photo. There was hardly any light at all to begin with. How did the camera actually manage to get anything useful with that incredible ISO?
Part of the answer is the Sony IMX471 CMOS sensor that my phone uses. The sensor has 4608 x 3456 pixels, with each pixel being 1 micron in size. Amazingly, this pixel size is about the minimum that is useful in visible light: the wavelength of visible light is 0.4 to 0.7 microns, so a pixel much smaller than a micron cannot resolve any extra detail. The reason that the phone produced an image at all was the large number of sensor pixels that it could play with. The rest was the kind of statistical guesswork that is today called artificial intelligence or machine learning.
Push anything to extremes and flaws will begin to show up. Cell phone cameras now boast photos with around 100 million pixels, but taken with toy lenses a few millimeters across. The images that the phone gives you are the result of statistical computations based on fuzzy data. The enormous amount of computation needed for building an image (yes, images are now built, not captured) drains your phone’s battery faster than other uses would. How do you actually get to see the flaws of the optics? One way is to push the camera to extremes. Here I look at low-light photography, so that the camera’s ISO boost begins to amplify the problems that the image always has.
The featured photo was taken with the 4.7 mm lens at the back of my phone, with an exposure of 1/13 seconds (which means averaging over an enormous number of captures). The original image had 9248 pixels on the long side. When I compress it down to 1250 pixels for WordPress, the result is a crisp picture. Examine it at larger scales though, and flaws emerge. The detail shown in the photo above takes a segment which is 830 pixels on the long side and compresses it to 640 for WordPress. The camera chose an ISO of 15047, and there is quite a bit of noise in the detail. You can see lens flare below the arch. Above the arch you can see some of the railing blown out; those pixels are saturated and nothing you do can bring information out of them. Elsewhere, the railings are full of digital artifacts such as aliasing.
In the slideshow above you see an even more extreme case. This is a photo taken in a dark wood on a new moon night, looking for owls using flashlights (yes, this is how I spent my Diwali). The camera chose an ISO of 17996 and an exposure of 1/10 seconds. In the most contrasty bits of the photo you can easily see the noise even without sliding into the detailed view. The lens flare in the detail looks cloudy; the AI has tried to erase it without success. It has performed surprisingly well on the face. I’m really impressed with the technique of computational super-resolution that it applies.
I close with a less extreme example from earlier in the evening. Here the software chose an ISO of 844 and an exposure of 1/25 seconds. Details are less grainy, as you can see when you zoom into the picture. The road signs are quite clear, if a little too dark to read easily, but the darker areas of the photo have clear digital artifacts, some of which you can see in the zoom. And you can see the liquor shop in its prized location at a crossroad, blazing with light, open for its business of making the roads less safe to drive on.
The idiot-savant in your phone camera has got very good at teasing detail out of terrible images. That’s the reason that a toy lens and sensor give you acceptable photos at all. This time around I wanted to see how it deals with mist and fog. As the featured photo shows, it does rather well with mist. There’s a wonderful layering of colour. The flowers in the foreground, the layer-cake appearance of the cliffs, and their fading into the distance are all rendered well.
I didn’t have the luck to run into fog, but a long vista in mist can tell you how the phone would deal with it. As you might expect, it loses much of the detail. This is one place where human editing can actually help to bring out detail. I did the usual trick with layers, masks, and curves to get a little more detail in the middle distance than the AI gave me (a sketch of the idea follows below). I could also bring out a bit more of the colour in the foreground.
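The layers-masks-curves trick is easy to sketch outside an editor too. The sketch below is my own rough equivalent, not what any particular editor does: a luminosity mask picks out the bright, hazy middle distance, and an S-curve steepens the mid-tone contrast there. The threshold and the curve are things you would tune by eye.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("mist.jpg"), dtype=np.float64) / 255.0

# Luminosity mask: hazy areas are bright and flat, so weight the edit
# towards brighter pixels (the 0.5 threshold is tuned by eye).
lum = img.mean(axis=2)
mask = np.clip((lum - 0.5) * 2.0, 0.0, 1.0)[..., None]

# An S-curve -- the digital version of a "curves" adjustment layer --
# steepens contrast in the mid-tones while pinning black and white.
curved = 0.5 - 0.5 * np.cos(np.pi * img)

out = (1.0 - mask) * img + mask * curved
Image.fromarray((out * 255).astype(np.uint8)).save("mist_edited.jpg")
```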
Getting better colour was a fortunate accident. Usually the AI does much better than a human at getting the foreground colour. This photo is an example. It gives the impression of having obtained an immense amount of detail in the foreground. But when I blew up the view on my monitor, pixel for pixel, I saw that it hadn’t got more detail. It had largely played with contrast and saturation to fool the eye into thinking there was a lot of detail. In the middle distance, however, I could improve its output with minimal effort.
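That eye-fooling trick is trivially reproducible. A two-call sketch with Pillow, with the factors picked arbitrarily for illustration:

```python
from PIL import Image, ImageEnhance

img = Image.open("foreground.jpg")   # placeholder file name
# No new detail is created here; the eye simply reads stronger contrast
# and richer colour as sharpness.
img = ImageEnhance.Contrast(img).enhance(1.3)
img = ImageEnhance.Color(img).enhance(1.25)
img.save("punchier.jpg")
```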
This picture shows the same effect better. The AI does great colours in the foreground, but loses a little in the valley and far-away cliff. I could bring a bit more out of those areas with a quick edit. With a little more care you could nudge the background into complete clarity, but why lose the beautiful effects that a light mist can give?
Here is another photo which looked wonderful just as it came out of the box. I could extract some of the detail in the background by the usual methods, but that didn’t look better. There we go again, distinguishing craft and art. That’s where this post has to stop. I don’t want to analyze the eye of the beholder.
Ambush photography is always on my mind when I’m in a place with lots of tourists. I define it as taking photographs of people posing for photos, of paparazzi taking photos of celebrities, of photographers taking photos, or of people taking selfies. Except in the case of paparazzi, when my subjects notice my ambush it breaks the ice and leads to some conversation. But I digress. This series of posts is not about the art of photography, but about its craft. And the start of craft is to examine the behaviour of the tool that you use.
[Gallery: colour histograms from the OnePlus Nord CE2 Pro, the Olympus TG 6, the Nikon P 900, and the OnePlus Nord CE2 Pro again]
So I looked at this cell phone photo in my favourite editor and called up its colour histogram. The result was surprising. In every primary colour, a majority of the pixels were either all white (fully exposed, blown out) or all black (dark, unexposed), and the remaining pixels seemed to have an equal chance of taking any exposure in between. Contrast this with what happens in images from two regular cameras: in each colour the histogram peaks somewhere in between. That means a majority of pixels see the general level of illumination in each colour, with fewer and fewer pixels straying far from the average. I checked that the odd histogram was not special to this photo by examining a few of my other cell phone photos. The two examples in the gallery above show that this general behaviour belongs to the cell phone, not the photo.
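If you want to check your own photos, the histogram an editor shows you is a one-liner per channel with numpy. A minimal sketch, assuming a placeholder file name:

```python
import numpy as np
from PIL import Image

# Count pixel values per channel: the colour histogram an editor displays.
img = np.asarray(Image.open("photo.jpg").convert("RGB"))
for i, channel in enumerate("RGB"):
    hist, _ = np.histogram(img[..., i], bins=256, range=(0, 256))
    print(channel, "pure black:", hist[0], "blown out:", hist[255])
```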
So what’s happening? Flattening the histogram is what multi-exposure HDR photos aim for: bringing out details in the shadows and controlling over-exposure. This is HDR in every colour, and on steroids. The camera AI has been trained on what the average human eye sees, and edits each photo in-the-box to maximize the effect for the human eye. It has automated a lot of the detailed editing that we used to do. That’s why I find it hard to improve most phone photos with my editor. The AI has already done what I would usually do, and done it better.
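Flattening a histogram has a textbook name, histogram equalization, and Pillow will do a crude global version in one call. The camera’s in-box editing is clearly far more selective than this blunt instrument:

```python
from PIL import Image, ImageOps

# Global histogram equalization: remap pixel values so the cumulative
# histogram becomes roughly a straight line in each channel.
flat = ImageOps.equalize(Image.open("photo.jpg").convert("RGB"))
flat.save("equalized.jpg")
```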
That’s what automation is for. It can still miss in a few cases, and I would like the ability to start again from scratch on those. Sometimes I would also like to make things deliberately different; that’s what the art of photography is. But perhaps most people don’t care for either nicety, and a lot of the time neither do I. This automation is certainly a tool that I would like in my kit. I have several different cameras for different things anyway; a cell phone is just a versatile power tool to add. I would welcome anything that takes away a bit of the burden of the craft, and lets me concentrate on the art.
Rooms with large windows get more light than those with small slits for windows. Similarly, cameras with small-aperture lenses collect less light than those with larger apertures. But small openings produce another artifact, one not usually visible to the naked eye. You may think of light as traveling in straight lines, but it is actually a wave. This becomes visible at the edges of an opening: light bends slightly around them, making edges look fuzzy. This is called diffraction. The same thing happens in photos: the edges of things become slightly fuzzy. Diffraction limits the resolution of your photo, sometimes more than pixel size does. In order to keep this as clear as possible, I’ll not describe apertures by f numbers, but by the actual diameter of the part of the lens which is collecting light.
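How fuzzy? The standard estimate is the Airy disk, the blur that diffraction smears a point of light into; its diameter at the sensor is 2.44 times the wavelength times the f-number. A quick check for green light through a fast phone lens (the f/1.7 of my phone, from an earlier post in this series):

```python
# Airy disk diameter at the sensor: d = 2.44 * wavelength * f-number.
def airy_disk_um(wavelength_um, f_number):
    return 2.44 * wavelength_um * f_number

print(airy_disk_um(0.55, 1.7))   # about 2.3 microns for green light
# Against 1 micron sensor pixels, diffraction alone smears every point
# of light across a couple of pixels.
```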
Here I compare images taken with a phone camera and a bridge camera. The phone camera used a lens aperture of 12 mm. I used that to take the street photo of a lemonade vendor in Puri. That image came as a jpeg 9248 pixels wide (all photos are in 4:3 aspect ratio), which I’ve compressed to 1250 pixels wide in the featured photo. (I think the red is too bright, but sensors have a problem with red. That’s a topic for a different post.) The bridge camera used an aperture of 62.5 mm and gave me the photo of the dragonfly as a jpeg which was 4608 pixels wide. I reduced it to 640 pixels in the view above.
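Since I’m quoting apertures as diameters, the difference in light grasp is easy to make concrete: the collecting area goes as the square of the diameter. A two-line check using the numbers above (the function is mine):

```python
# Collecting area scales as the square of the aperture diameter.
def light_ratio(d_small_mm, d_large_mm):
    return (d_large_mm / d_small_mm) ** 2

print(light_ratio(12, 62.5))   # the bridge camera gathers about 27x more light
```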
Here is a zoomed in view of the two photos. In both of them I’ve selected a part of the photo 1662 pixels wide, and reduced them to 640 pixels for use here. In the photo of the dragonfly I can begin to see noise in the background; it was a very gloomy day and the photo was taken during a monsoon shower. But the edges look pretty sharp. In particular the veins in the dragonfly’s wing are quite clear. In the photo of the cart I can see that different colours are beginning to bleed into each other at the edges.
Finally, here is a zoom into a section of the originals which is 834 pixels wide. The images are reduced to 640 pixels wide for use here. I can see aliasing artifacts in the handle of the bucket: the straight line of its edge looks like a jagged lightning bolt if you look closely. There is no such artifact in the other photo. The veins on the dragonfly’s wings are still pretty sharp, but the joints between planks in the body of the wooden cart look soft. This is the diffraction limit on resolution beginning to show. Software corrects for it, but that creates other artifacts. The bottom line? You can’t use the 64 million pixels of the phone image to zoom in as far as you can with the 16 million pixels of the bridge camera.
Puri, an ancient temple town, is the perfect place for street photos. No camera is more discreet these days than a phone; bystanders can seldom tell whether you are taking a selfie or a photo of the street. Gone are the days when you saw a photographer and walked around them; these days you could land up photo-bombing a selfie. I walked about taking more shots than I could ever use. I had a few destinations in mind, and since these small lanes are a little confusing, I had my maps and location service on. I knew that all this could eat charge like a hungry spider. This time I was going to track exactly how much.
Normally I charge the battery fully, and it gives me a low battery alert when the charge has fallen to 15% of capacity. On the average I have to charge my phone every three days, so in an average hour of use (say twelve hours in a day) I go through about 2.3% of the charge. After an hour of walking, I saw that maps and camera had each been on for the whole hour. Maps had eaten 3% of the charge, but the camera had eaten just over 10%. This was just the camera software, since the display is counted separately. This agreed with my previous experience that I would need to recharge after a day’s shooting.
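The arithmetic behind those percentages, with my assumption of roughly twelve hours of daily use made explicit:

```python
# Full charge down to the 15% alert is 85% of capacity, spread over
# three days of (my assumption) about twelve hours of use each.
baseline_per_hour = 85 / (3 * 12)
print(f"baseline drain: {baseline_per_hour:.1f}% per hour")   # about 2.4%, close to the 2.3% above

# In one hour of shooting, the camera app alone ate 10% -- roughly four
# times the phone's entire baseline drain.
print(f"camera vs baseline: {10 / baseline_per_hour:.1f}x")   # about 4.2x
```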
To understand why, back up a little. These photos are zoomed in by a factor of about 4 to 8. With a DSLR setup you would not expect to capture the details of the old man’s moustache using a 15 mm diameter lens with a focal length of 26 mm. The sensor on my phone is about an eighth the size of those in most DSLRs, and therefore catches that much less light. The sharpness that you see comes from the number of output pixels in the image, and that pixel density is due to intense computation, including two components that I’ve explored before: computational super-resolution and averaging over a very large number of images. Driving the software that compensates for the hardware limitations is what uses up so much charge. Algorithms will improve in the future, mass-market hardware will become better, and processors will run cooler. But until then, the carbon footprint of phone photography will remain several times larger than that of communication.
Phone photography changes our expectation of the interaction of camera hardware and image so dramatically that it is worth rethinking what photography means. I intend to explore this a bit in this series.