Last week I found some time to add a Photography section to the site. I take lots of photos with my mobile phone and some of them turn out to be really cool or interesting. I’ll try to put up some of these photos with a small description of why I thought the particular photo was worth sharing.
I’m really interested in improvements to the core photography experience on Android. Ever since the iPhone 4 and its remarkably good camera raised the bar, more and more casual photographers have been moving to the mobile world. There are multiple reasons for this: it’s simply more practical not to have to carry a dedicated camera most of the time, and on occasions when you normally wouldn’t think of carrying one, you still have your mobile phone available to capture some unexpected, nice scene.
The problem is: to improve the average user experience (average in the sense of pleasing the most customers), manufacturers are over-simplifying things. They want every customer to “be able to take amazing photos” (original marketing phrase copyright Apple ;)). This is completely fine and understandable, a noble goal to say the least. The problem arises when development and engineering efforts are so focused in this direction that other, more advanced users are left out. I’m talking about the programming APIs that enable third-party developers to create more advanced camera applications for users who want more control over their photo-taking.
The general idea is simple: ship a user-friendly camera application as part of the stock user experience, but give users who want more the ability to install (buy) additional camera applications with advanced controls. Since the camera components (sensor, camera controller) almost always expose these additional features (the higher-end devices most certainly use components that do), this should be a no-brainer for device/system software manufacturers. But this is mostly not the case… You buy a Nexus 5 with a decent Sony IMX179 camera sensor only to find that you cannot manually adjust the focus point (I am not talking about predefined autofocus presets), the ISO level, or the exposure time (you get the E-Z-mode “exposure compensation” instead, which is a set of predefined hints to the underlying automatic ISO and exposure-time selection).

Why can Nokia (for example) give their users this functionality while the almighty Google cannot? Is it because, if you are prevented from setting two generations of a device to the same ISO and exposure time, you cannot really see the actual improvements in camera hardware? Their automatic post-processing can, for example, make the overall photo look brighter on the newer device and voilà: “New Nexus 5 has much better low-light performance than Nexus 4” headlines everywhere. I’m simplifying things a bit, but you get what I’m trying to say. They have multiple interests in taking this kind of approach.
But is there light at the end of the tunnel after all? Will Google redeem itself in the future? KitKat was originally a target for a new camera API that could change the game for Android. The new version of the API will supposedly give developers more control over the photo-producing process. With low-level access to camera features and the ability to capture raw, unprocessed frames, developers will be able to put the multiple cores available on today’s devices to good use. Third-party camera applications that give the user full manual control over camera parameters and apply custom computational-photography techniques should become possible. But this all goes to waste again if vendors don’t give you access to certain parameters (like the Nexus 5’s ISO control)…
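To make this concrete, here is a minimal sketch of what the manual controls discussed above could look like, based on the documented android.hardware.camera2 API (the proposed successor to the old Camera class). This is an illustration, not a definitive implementation: the `device` and `surface` objects are assumed to be an already-opened camera and a configured target, error handling is omitted, and whether the manual keys actually take effect still depends on the vendor advertising the MANUAL_SENSOR capability for that device.

```java
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.CaptureRequest;
import android.view.Surface;

public class ManualCaptureSketch {
    // Builds a still-capture request with fully manual exposure and focus,
    // assuming `device` is an open CameraDevice and `surface` a valid target.
    public static CaptureRequest buildManualRequest(CameraDevice device,
                                                    Surface surface) throws Exception {
        CaptureRequest.Builder builder =
                device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.addTarget(surface);

        // Turn auto-exposure off so the manual sensor values below apply.
        builder.set(CaptureRequest.CONTROL_AE_MODE,
                CameraMetadata.CONTROL_AE_MODE_OFF);
        builder.set(CaptureRequest.SENSOR_SENSITIVITY, 100);           // ISO 100
        builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 33_000_000L); // ~1/30 s, in ns

        // Turn autofocus off and set the focus distance manually,
        // in diopters (0.0f = focused at infinity).
        builder.set(CaptureRequest.CONTROL_AF_MODE,
                CameraMetadata.CONTROL_AF_MODE_OFF);
        builder.set(CaptureRequest.LENS_FOCUS_DISTANCE, 0.0f);

        return builder.build();
    }
}
```

The key point is that ISO, exposure time, and focus distance become per-request parameters the application sets directly, instead of hints fed into an opaque automatic pipeline — exactly the kind of control the old API withholds.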
I was really impressed with the control Nokia gives you on their (even lower-end) phones. Here’s hoping that the new camera API really changes the status quo on Android and pushes it to the top of respectable mobile photography charts.