My Experience With Computational Photography
I only recently learned what the term “computational photography” means. I’ve been benefiting from it on my Google Pixel phone without knowing there was a name for how the camera works. It uses computer algorithms to produce photos rather than an improved lens or sensor. Software does the work.
Eric Kim has a great in-depth article about computational photography. He summarizes it all well by describing how photography is changing due to this new software and how it will be an equalizer.
I disagree with some points of the article. For one, I think he puts far too much stock in what this is. Nothing will replace human creativity in processing photos. Whether you like my processing or not, it has a unique look. Computational photography is a tool that is good enough in many situations, but it doesn’t replace the human decision-making process.
I can tell the difference even if nobody else can. There are small elements I manipulated to get the look I want. A computer cannot do that.
The problem is that HDR is ultimately an aesthetic. Done right, it can be more than a “look,” but it is still not composition or good subject matter. Software will not make those happen.
On the plus side, Eric Kim is right that the camera software is often great and does amazing HDR. What once took a long time and a significant level of skill can now be done by anyone. For me, it’s a timesaver when it’s good enough. For others, it makes HDR possible even if they don’t know or understand what it is.
Mr. Kim points out interesting aspects of the future of software in photography. We will be more likely to upgrade the software than the entire camera. The economics of this will open up higher-end and more creative photography to many more people.