Walking near the Periyar river

Periyar river, the lifeline of Kerala. It was a name that fascinated me. A simple name, meaning big. That’s all that the people around it need to know. But the river rises in the biodiverse Western Ghats, and in the short 244 km from its source to its mouth in the Arabian Sea it traverses a wide range of altitudes. So, almost exactly five years ago, we took a short trip to the Periyar National Park. We landed at Kochi airport and took a bus to our destination. The road passes through the intensely urbanized plains. But then, as we crossed a bridge over the river, the urban clutter fell away. We’d reached our homestay, a small two-storeyed house near the entrance to the park.

We dropped our bags and headed out for a walk. There is always a lot to see just outside a national park. We walked back to the bridge we’d crossed. Power lines ran next to it and we were sure to find kingfishers and bee-eaters perched there, at eye level. I had my big lens with me, but I’ll show here only those photos I took with the fixed lens of my cell phone. The river branched crazily here, as it reached the plains. A boat was tied next to a little side stream that we crossed. A group of langurs chattered madly as they ate leaves in the canopy of trees around the path.

The phone was also good for close-ups. Here in the undergrowth is one of the numerous species that you could call a daisy. I love their complex flowers, five white ray florets and numerous five-petalled yellow florets in the disk. The arrangement of the disk florets and their shape should be a very good guide to a more precise identification, but I’m intimidated by the size of the family Asteraceae, the asters. Full identification is a finicky and time-consuming job.

Which trees grow here? The answer is plain when you look around you. But it is equally plain when you look down at the small landscape around your feet. A large leaf from a teak tree was flaking into pieces as it dried. I pointed my phone at it. Bamboo too, as you can see. And the small leaves of, what was it, jamun? Quite a variety. It would be hard to keep the jamun from being eaten by birds and langurs. But then those trees fruit so abundantly that you can always get enough. We reached the bridge, and then it was time for the big zoom and the end of my fixed-lens adventure.

10 phone photos from Year 403

Was 403 ME (Modern Era) a step back towards the normal? Certainly, travel for work has come back into our lives. We got to travel a little on short breaks from work, so in that sense too it was getting back to normal. But things were still a little different. I’d never really thought of the interior of a water bottle before. I’ve been at teas with more substance, and worked at tables which were less dusty. On the other hand, gems hide in the dust sometimes. You can now see people react differently to stress.

High tea?
You need to dust your table more often
A moth I’ve never seen before
Moonscape Ladakh
You can deal with a traffic jam
The sound of a monsoon stream
An architect’s hut

I used my phone a lot more this year. The good thing about a phone is that it is always with you: for example, when you see a new species take over a niche vacated by those which are locally extinct. There are processes in nature which adapt around the disturbances we create. And I had the chance to stay in a beautiful structure which adapts to the world. Perhaps you’ll find that the oddest thing about this bunch of 10 photos is that I’ve counted in octal.

Midweek Mobile 17

In the last sixteen weeks I’ve said more or less everything that I can say at this time about taking photos with a phone. So I will end this series today with an upbeat message. The featured photo is of a Yellow-tailed tussock moth (Somena scintillans). The light was good, and the camera has managed to put together a lovely photo of the moth. This is not very easy, since it sits with its wings folded into a high peak. To get a photo like this with another camera, I would have to do a bit of focus stacking. With its multiple fixed lenses, the phone has done that, and given me an image which is as sharp at the peak of its wings as it is at its hairy legs. The colour is also rendered beautifully.
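Focus stacking, at its simplest, means merging frames focused at different depths by keeping whichever frame is locally sharpest at each pixel. Here is a toy sketch of that idea in numpy; it is my own simplification, not a reconstruction of the phone’s proprietary pipeline:

```python
import numpy as np

def focus_stack(frames):
    """Merge grayscale frames focused at different depths by picking,
    per pixel, the frame with the highest local sharpness (a common
    focus-stacking heuristic)."""
    def sharpness(img):
        # Discrete Laplacian as a sharpness measure: it responds
        # strongly wherever fine detail is in focus.
        lap = (-4 * img
               + np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        return np.abs(lap)

    stack = np.stack(frames)                        # shape (n, h, w)
    scores = np.stack([sharpness(f) for f in frames])
    best = np.argmax(scores, axis=0)                # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two synthetic frames: each is sharp (noisy detail) in one half and
# defocused (flat) in the other.
rng = np.random.default_rng(0)
detail = rng.random((8, 8))
flat = np.full((8, 8), detail.mean())
a = detail.copy(); a[:, 4:] = flat[:, 4:]
b = detail.copy(); b[:, :4] = flat[:, :4]
merged = focus_stack([a, b])
```

Each pixel of the merged result comes from whichever input frame carried the detail there, which is the essence of the technique.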

What the phone did here in good light is possibly the future. I hope that in the next few years large sensors become cheaper. If that happens, then I’m sure we will get used to taking photos like this under all light conditions with the little multipurpose box and tracking device in our pockets. I would like to end this series of posts with that hope for the future.

Phone photography changes our expectation of the interaction of camera hardware and image so dramatically that it is worth rethinking what photography means. I intend to explore this a bit in this series.

Midweek Mobile 16

Sunlight on pines and grassland, mountains behind. It was a lovely scene which I captured with my phone. The phone used a lens with a focal length of 4.7 mm and a fixed aperture of f/1.7. It reported an exposure of 1/1043 seconds and an ISO of 100. The sensor on my phone has 4608 x 3456 pixels. This is an aspect ratio of 4:3, which I’ll retain in all the experiments I show here. The original jpeg image the phone gave me had 9248 x 6936 pixels. I compressed the image down to 1250 x 938 pixels in the header photo. It looks rather nice on my phone, and also on my laptop screen. The image has areas of bright illumination and areas of pretty heavy shadow. It also has some sharp colour contrasts. I was interested in how well the images look when I zoom in so that one pixel in the photo shows up as a single pixel on my laptop’s display.

Here is a zoom into brightly lit pine needles. I took a section which had 832 x 624 pixels and reduced it to 640 x 480 pixels to show you in this post. All the following images do exactly the same. You can see lots of digital artifacts. The most noticeable is aliasing: smooth lines and curves appearing jagged in the image. The software has teased out a lot of detail both in shadow and in full light, but the jaggedness makes it look somewhat artificial.
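Aliasing of this kind is easy to reproduce in a few lines of numpy: shrinking an edge by simply dropping pixels turns it into hard jagged steps, while averaging each block first (what a good resampler does) produces the intermediate greys that read as a smooth line. This demo is mine, not a reconstruction of the phone’s resampler:

```python
import numpy as np

size, factor = 16, 4
y, x = np.mgrid[0:size, 0:size]
edge = (x + 0.5 * y < size / 2).astype(float)   # a binary diagonal edge

# "Nearest": keep every 4th pixel. Fast, but the edge aliases into
# jagged 0/1 steps.
nearest = edge[::factor, ::factor]

# Box filter: average each 4x4 block before shrinking. Blocks that
# straddle the edge become intermediate greys, so the edge stays smooth.
boxed = edge.reshape(size // factor, factor,
                     size // factor, factor).mean(axis=(1, 3))
```

The `nearest` result contains only pure black and white, while `boxed` carries fractional values along the diagonal; at viewing size, that difference is exactly the jaggedness described above.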

Next I zoomed into a portion of the dappled shadows. This is where your own eyes will play tricks. The camera captured almost nothing in the deepest shadow and the brightest light, but it does quite a good job even in the lighter shadows, apart from the aliasing problem. The best parts of this zoom are the portions where there is a strong contrast of illumination: bright details against dark background. But where a dark portion is seen against a bright background you see strange curves and squiggles. This is due to aliasing.

This zoom shows you a situation where the contrast is in colours, not so much in the level of illumination. Both the sky and the leaves are bright. I’m surprised by the amount of digital noise in the sky, in spite of the ISO being 100. Apart from the aliasing problems, I’m surprised by how soft the pine needles look. This is caused by a problem I’d written about earlier. The image is created by adding together a very large number of separate exposures (a technique called adaptive multiframe image averaging), and the breeze at that height causes the pine needles to move. The softness is due to the motion between different exposures. This is not a problem that a DSLR has; nor does it ever have this digital noise in the sky.
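The noise advantage of stacking frames is easy to quantify: averaging N independent captures of the same scene cuts random noise by roughly the square root of N. A toy 1-D simulation (my own, with made-up numbers, not the phone’s actual frame counts):

```python
import numpy as np

rng = np.random.default_rng(42)
scene = np.linspace(0.0, 1.0, 256)     # a simple 1-D stand-in for a scene
noise_sigma = 0.2

# 64 noisy "captures" of the same scene. If the subject moved between
# captures (like pine needles in a breeze), the average would also
# smear detail -- the softness described above.
frames = [scene + rng.normal(0, noise_sigma, scene.shape) for _ in range(64)]

single_noise = np.std(frames[0] - scene)        # about 0.2
averaged = np.mean(frames, axis=0)
stacked_noise = np.std(averaged - scene)        # about 0.2 / sqrt(64) = 0.025
```

The averaged frame is about eight times cleaner than any single capture, which is why phones lean so heavily on this technique.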

I also found incredibly bad digital artifacts in a portion of the photo which looked pretty easy to take. The squiggles in the far slopes are due to aliasing. The strange halo around the shadows is another weird algorithmic effect. The light on the branches is like little bits of paint dabbed on by a bad painter trying to emulate the impressionists. The pine needles are just masses of colour. This zoom makes me think I should never again look at a phone photo blown up to see it pixel for pixel.

If I want sharp details, I should use a DSLR. A phone is what I would use if I wanted a quick snapshot which I would look at only on a little screen which fits in my palm. Conversely, if you want to see the defects in the phone photo, look at these examples on a big screen, not a phone.

Midweek mobile 15

Modern phone cameras get sharp and bright images with awful lenses and jokes of sensors. The most important aspect of the images is that they are usually viewed on the small screen of a phone. A quick search led me to an estimate that people take 4.7 billion photos every day. Be suspicious of such facile estimates. But it is clear that far less than a percent of a percent would be viewed on a large screen, where defects can show.

I stress tested my phone camera in exactly this way. My phone has a sensor with 4608 x 3456 pixels. I reduced it to 1667 x 1250 pixels for the leader photo. That looks good. But I asked what if I looked at it pixel for pixel: one pixel of the sensor for every pixel on the screen (1:1). I did that in the most detailed photo in the slideshow above. The next one compressed 4 pixels of the photo into one on the display (4:1), the next 16 pixels of the photo for one on the display (16:1), and the next (the featured photo) is shown 32 pixels per pixel of display (32:1). But for the post I compressed these views a little more; the closest is at 9:1, the rest are 36:1, 144:1 and 288:1. The result begins to show digital artifacts in the 9:1 view, although they are not overwhelming (at 1:1 they are unmistakable). Of course, I can’t predict what screen you’ll see them on, but if you have a choice, looking at them on the biggest screen you have would be interesting.
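All the ratios above are areas, so each linear compression multiplies them by its square: compressing a 1:1 view by 3x linearly gives the 9:1 figure, a 4:1 view becomes 36:1, and so on. The little helper below just makes that arithmetic explicit (the widths passed in are illustrative, not measurements):

```python
def pixels_per_display_pixel(crop_width, display_width):
    """How many photo pixels land on each display pixel, by area, when a
    crop `crop_width` pixels wide is shown `display_width` pixels wide."""
    return (crop_width / display_width) ** 2

# A 1:1 view compressed 3x linearly becomes 9:1 by area...
assert pixels_per_display_pixel(3, 1) == 9
# ...and a 4:1 view (2x linear) compressed the same way becomes 36:1.
assert pixels_per_display_pixel(6, 1) == 36
```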

On a whim I took a photo of a beetle and gave it the same treatment. Here you see the views in the ratios 9:1 (nine pixels of the photo to one of the display), with the successive frames showing 36:1, 144:1 and 288:1 compressions. It is only the last which looks sharp. On my phone the display is even smaller, so the image looks much sharper. But why this big difference between flora and fauna? I compared the exposure first. The flowers are taken with an equivalent exposure of 1/100 seconds and ISO of 100; the beetle with 1/50 seconds and ISO of 223. This means that the number of frames which are superimposed to give the final image is twice as many in the second. Slight hand movements could create the effect that you see, but the phone must compensate for that. But the ISO is also a factor; you can see more “grain” in the image of the beetle. I think another important factor must be the contrast between the object and background. That’s much smaller in the second photo. I’ll try to explore this further.

If you want a moral, I would say “Don’t look a gift horse in the mouth.” Your phone does not replace a good DSLR in image quality. Be happy with what it shows on its small display.

Midweek Mobile 14

From darkness to light; I tried to take my phone camera to another extreme when we stopped at a cafe for our elevenses. The phone sensor has 4608 x 3456 pixels. The software multiplies each sensor pixel into 4 image pixels to give me 9248 x 6936 pixels in the image. I reduced it to 1667 x 1250 pixels for the leader photo. The sunlight streaming in through the window, refracted in the glass of the bottles and through the water, or reflected on the coffee, would give the camera quite a spin, I was sure. The leader looks very good; the resident AI is designed to build images which look good compressed down.
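The bookkeeping behind “each sensor pixel becomes 4 image pixels” is a doubling of each dimension, though the sizes my phone reports (4608 to 9248) don’t divide exactly, so the real pipeline must be interpolating rather than duplicating. The crudest version of the idea, in numpy:

```python
import numpy as np

# Plain 2x duplication along both axes: one sensor pixel -> four image
# pixels. Real pipelines interpolate new values instead of copying.
sensor = np.arange(6).reshape(2, 3)     # stand-in for a tiny 2x3 sensor
image = np.repeat(np.repeat(sensor, 2, axis=0), 2, axis=1)

# 2x3 sensor -> 4x6 image: four times the pixel count, but no new
# information -- that has to be invented by the software.
```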

If you want to look behind the screen all you need to do is to look at higher magnification. Here I’ve taken two parts of the photo and examined them at two scales with an aspect ratio of 4:3 in landscape orientation. The lesser zooms show an area which is 3100 pixels across (reduced to 640 for this post). The more extreme zooms show an area which is 780 pixels across (reduced to 640 for this post). You can see lots of digital artifacts: aliasing appears in oblique lines, there is a lot of digital noise in the shadows in the coffee cup (even though the ISO is only 105), but most of all, the really bright areas are not only blown out, but also very irregular. Look at the caustic lines on the surface of the coffee in the image, and then hold a coffee cup up to light. You’ll see that the bright reflections never look so irregular in real life.

All these artifacts are due to the fact that the lens is about 3 mm wide and the sensor has pixels which are just a micron across. They never manage to gather as much light as a good camera does. A lot of computation is used to compensate for that. It looks good when viewed on a phone, but when you look at it pixel for pixel it is full of flaws. I think I’ve concluded that if you are a serious photographer looking for a phone camera, then do not look at distractions like how many megapixels an image has. Look instead at the specs and buy a camera with as large a lens as possible and a sensor which is big. Measure these sizes in mm or inches, not in pixels or f numbers.

Midweek Mobile 13

More lessons can be learnt from the experiment I reported last week: push the performance of my phone camera to an extreme by doing very low-light photography. The camera spews out 64 Megapixel images (9248 x 6936 pixels for each photo). I took a segment which was 3300 pixels on the long side of a 4:3 aspect ratio and reduced it to a 1250 x 938 size for use in this blog. (All photos here use this aspect ratio without further comment, and I quote only the pixels on the long side.) That’s the featured image. We were looking for owls in a dark woodland using a flashlight on a new moon day, and the only lighting on the subject was its reflection from leaves. Not a bad photo given that: you can see the photographer, his shirt, the camera, and his hat. The amazing thing about the photo is its ISO of 17996! That’s the only way that the phone has of getting an image using a 1/10 s exposure with a lens that’s less than 5 mm across.

The photos that you see above come from zooms into 830-pixel-wide areas, subsequently reduced to 640 pixels across for use in the blog. The lighter image is taken from near the collar and arm of the shirt in the featured photo, and the darker shows the barrel of the camera. I’m not surprised by the lack of detail, the colour aberrations, and the enormous amount of digital noise in the photo. There was hardly any light at all to begin with. How did the camera actually manage to get anything useful with that incredible ISO?

Part of the answer is the Sony IMX471 CMOS sensor that’s used by my phone. The sensor has 4608 x 3456 pixels, with each pixel being 1 micron in size. Amazingly, this pixel size is about the minimum that you can achieve in visible light. The reason that the phone produced an image at all was due to the large number of sensor pixels that it could play with. The rest was the kind of statistical guesswork that is today called artificial intelligence or machine learning.
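A back-of-the-envelope check of why about 1 micron is near the floor: the smallest spot any lens can focus visible light into (the Airy disk, set by diffraction) is roughly 2.44 times the wavelength times the f-number. For green light through the f/1.7 lens mentioned earlier:

```python
# Diffraction-limited spot size (Airy disk diameter) for a phone lens.
wavelength_um = 0.55      # green light, in microns
f_number = 1.7

airy_diameter_um = 2.44 * wavelength_um * f_number
# About 2.3 microns: the optics cannot focus light into a spot much
# smaller than a couple of 1-micron sensor pixels, so shrinking pixels
# further buys almost no extra detail.
```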

Midweek mobile 12

Push anything to extremes and flaws will begin to show up. Cell phone cameras now boast photos with around 100 million pixels, but taken with toy lenses a few millimeters across. The images that the phone gives you are the result of statistical computations based on fuzzy data. The enormous amount of computation needed for building an image (yes, images are now built, not captured) drains your phone’s battery faster than other uses would. How do you actually get to see the flaws of the optics? One way is to push the camera to extremes. Here I look at low-light photography, so that the camera’s ISO boost begins to amplify the problems that the image always has.

The featured photo used the 4.7 mm lens at the back of my phone, and used an exposure of 1/13 seconds (this means averaging over an enormous number of captures). The original image had 9248 pixels on the long side. When I compress it down to 1250 pixels for WordPress, the result is a crisp picture. Examine it at larger scales though, and flaws emerge. The detail shown in the above photo takes a segment which is 830 pixels on the long side and compresses it to 640 for WordPress. The camera chose an ISO of 15047, and there is quite a bit of noise in the detail. You can see lens flare below the arch. Above the arch you can see some of the railing blown out. The pixels are saturated and nothing you do can bring information out of them. Elsewhere, the railings are full of digital artifacts such as aliasing.

In the slideshow above you see an even more extreme case. This is a photo taken in a dark wood on a new moon night looking for owls using flashlights (yes, this was how I spent my Diwali). The camera chose an ISO of 17996 and an exposure of 1/10 seconds. In the most contrasty bits of the photo you can easily see the noise in the image even without sliding into the detailed view. The lens flare in the detail looks cloudy; the AI has tried to erase it without success. It has performed surprisingly well on the face. I’m really impressed with the technique of computational super-resolution that it applies.

I close with a less extreme example from earlier in the evening. Here the software chose an ISO of 844 and an exposure of 1/25 seconds. Details are less grainy, as you can see when you zoom into the picture. The road signs are quite clear, if a little too dark to read easily, but the darker areas of the photo have clear digital artifacts, some of which you can see in the zoom. But you can see the liquor shop in its prized location at a crossroads, blazing with light, open for its business of making the roads less safe to drive on.

Midweek Mobile 11

The idiot-savant in your phone camera has got very good at teasing detail out of terrible images. That’s the reason that a toy lens and sensor give you acceptable photos at all. This time around I wanted to see how it deals with mist and fog. As the featured photo shows, it does rather well with mist. There’s a wonderful layering of colour. The flowers in the foreground, the layer-cake appearance of the cliffs, and their fading into the distance are rendered well.

I didn’t have the luck to run into fog, but a long vista in mist can tell you how the phone will deal with fog. As you might expect, it loses much of the detail. This is one place where human editing can actually help to bring out detail. I did the usual trick with layers, masks, and curves to get a little more detail in the middle distance than the AI gave me. I could also bring out a bit more of the colour in the foreground.
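The “curves” part of that trick can be sketched as a lookup table: a gamma-style curve with exponent below 1 lifts the midtones while pinning black and white, which is roughly what pulls detail out of haze. This is my simplified stand-in, not the editor’s exact operation, and it ignores the layers and masks entirely:

```python
import numpy as np

def lift_midtones(img, gamma=0.7):
    """Map 8-bit pixel values through a gamma < 1 tone curve: midtones
    rise, while the black and white points stay fixed."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img]

# A flat, hazy strip of values: black, two midtones, white.
flat = np.array([[0, 64, 128, 255]], dtype=np.uint8)
lifted = lift_midtones(flat)
# Endpoints are unchanged; the two midtones come out brighter.
```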

Getting better colour was a fortunate accident. Usually the AI does much better than a human at getting the foreground colour. This photo is an example. It gives the impression of having obtained an immense amount of detail in the foreground. When I blew up the view on my monitor, pixel for pixel, I saw that it hadn’t got more detail. It had largely played with contrast and saturation to fool the eye into thinking it had a lot of detail. In the middle distance, however, I could improve its output with minimal effort.

This picture shows the same effect better. The AI does great colours in the foreground, but loses a little in the valley and far-away cliff. I could bring a bit more out of those areas with a quick edit. With a little more care you could nudge the background into complete clarity, but why lose the beautiful effects that a light mist can give?

Here is another photo which looked wonderful just as it came out of the box. I could extract some of the detail in the background by the usual methods, but that didn’t look better. There we go again, distinguishing craft and art. That’s where this post has to stop. I don’t want to analyze the eye of the beholder.

Midweek Mobile 10

Ambush photography is always on my mind when I’m in a place with lots of tourists. I define it as taking photographs of people posing for photos, of paparazzi taking photos of celebrities, of photographers taking photos, or of people taking selfies. Except in the case of paparazzi, when my subjects notice my ambush it leads to a breaking of the ice, and some conversation. But I digress. This series of posts is not about the art of photography, but about its craft. And the start of craft is to examine the behaviour of the tool that you use.

So I looked at this cell phone photo in my favourite editor and called up its colour histogram. The result was surprising. In every primary colour, a majority of the pixels were either all white (fully exposed, blown out) or all black (dark, unexposed). The exposure of all other pixels had an equal chance of being anything in between. Contrast this to what happens in images from two regular cameras: in each colour the histogram peaks somewhere in between. This means that a majority of pixels see a general level of illumination in each colour, with fewer pixels straying far from the average. I checked that the odd histogram was not special to the photo by examining a few of my other cell phone photos. The two examples in the gallery above show that this general behaviour belongs to the cell phone, not the photo.
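The check itself takes only a few lines of numpy: histogram each colour channel and compare the endpoint bins (pure black, pure white) against everything else. The image below is synthetic, with the blown-out regions faked by hand; point the same loop at a real photo array to repeat the experiment:

```python
import numpy as np

# Build a fake photo: random mid-values, plus forced black and white
# bands standing in for the blown-out regions.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
img[:16] = 0        # crushed-to-black band
img[-16:] = 255     # blown-to-white band

# Per-channel histogram; what fraction of pixels sit at the extremes?
fractions = []
for channel in range(3):
    counts, _ = np.histogram(img[..., channel], bins=256, range=(0, 256))
    fractions.append((counts[0] + counts[255]) / counts.sum())
```

On an image with the odd histogram described above, these endpoint fractions dominate; on a typical DSLR image they would be small, with the counts peaking somewhere in the middle.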

So what’s happening? Flattening the histogram is what multi-exposure HDR photos aim for: bringing out details in the shadows and controlling over-exposure. This is HDR in colour, and on steroids. The camera AI has been trained on what the average human eye sees, and edits each photo in-the-box to maximize the effect for the human eye. It has automated a lot of the detailed editing that we used to do. That’s why I find it hard to improve most phone photos with my editor. The AI has already done what I would usually do, and done it better.
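The classic way to flatten a histogram is histogram equalization: remap pixel values so their cumulative distribution becomes linear. The phone’s HDR merge is far more sophisticated, but the goal of spreading values across the full range is the same, so a sketch of the textbook method shows the flavour:

```python
import numpy as np

def equalize(img):
    """Histogram equalization for an 8-bit image: remap values so their
    cumulative distribution is linear across 0..255."""
    counts = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(counts).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalise to 0..1
    lut = (255 * cdf).astype(np.uint8)
    return lut[img]

# A murky low-contrast patch, values huddled around 100-120...
rng = np.random.default_rng(7)
murky = rng.integers(100, 121, size=(32, 32), dtype=np.uint8)
stretched = equalize(murky)
# ...now spans nearly the full 0-255 range.
```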

That’s what automation is for. It can still miss in a few cases, and I would like the ability to start again from scratch on those. Sometimes I would also like to make things deliberately different, that’s what the art of photography is. But perhaps most people don’t care for either nicety, and a lot of the time neither do I. This automation is certainly a tool that I would like in my kit. I have several different cameras for different things anyway. A cell phone is just a versatile power tool to add. I would welcome anything that takes away a bit of the burden of the craft, and lets me concentrate on the art.
