
How Computational Photography Is Letting Mobile Phones Perform Like ILCs

We take a look at three photography features turning up in today’s smartphones, and the technology behind them, to see how they produce ILC-like images.

Marcus Wong

(Photo Credits: Raktim)
Computational photography is the capture and processing of images using digital computation on top of the regular optical process. Simply put, instead of just recording a scene as the sensor captures it, computational photography also uses the information gathered to fill in details that would otherwise be missed. Here are three applications of this technique that you’ll see in the latest smartphones today.

1. SEEING IN THE DARK WITH GOOGLE

The problem with capturing images in low light on a digital sensor is image noise, which shows up as artefacts and random spots of colour. Every camera suffers from this because in low light the number of photons reaching the sensor varies greatly from moment to moment.

Traditionally, we counter this by letting more light onto the sensor for each exposure. Placing the camera on a tripod works, but you’ll then need your subject to hold still for the whole capture. So Google’s Night Sight instead takes multiple short exposures, using the optical flow principle to calculate the optimal time for each one, and merges them into a single cleaner frame.
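
To see why merging a burst helps, here is a minimal sketch in Python (assuming the short exposures have already been aligned to each other): averaging the stack lets the random shot noise cancel out while the scene detail stays put.

```python
import numpy as np

def merge_short_exposures(frames):
    """Average a burst of aligned short exposures.

    Shot noise varies randomly from frame to frame, so averaging N
    frames cuts its standard deviation by roughly sqrt(N), while the
    underlying scene detail is the same in every frame.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# e.g. merged = merge_short_exposures([frame1, frame2, frame3, frame4])
```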

Other than dealing with noise, Night Sight also uses machine learning to teach the camera how to shift the colour balance for troublesome lighting. A wide variety of scenes were captured with Pixel phones, then hand-corrected for proper white balance on a colour-calibrated monitor to serve as a reference dataset for the algorithm to draw from.
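
The correction itself comes down to multiplying each colour channel by a gain that the model predicts for the scene. A minimal sketch, with placeholder gain values purely for illustration:

```python
import numpy as np

def apply_white_balance(rgb_img, gains=(1.8, 1.0, 1.4)):
    """Scale the R, G and B channels by per-channel gains.

    A learned auto-white-balance model would predict gains like these
    for each scene; the numbers here are made up for illustration.
    """
    out = rgb_img.astype(np.float32) * np.asarray(gains, dtype=np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```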

To prevent the camera from doing too good a job – making a night scene look like it was shot in the day for example – Google’s engineers introduced multiple S-curves for adjusting light levels over various regions of the image instead of applying a global adjustment, thus keeping the tone mapping accurate.
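
A rough sketch of the same idea: apply a sigmoid-shaped S-curve per tile of a grayscale image, with each tile’s own mean as the curve’s midpoint, instead of one global curve. (A real pipeline would blend neighbouring tiles smoothly to avoid visible seams.)

```python
import numpy as np

def s_curve(x, midpoint, strength=8.0):
    """Sigmoid-shaped S-curve centred on a chosen midtone."""
    return 1.0 / (1.0 + np.exp(-strength * (x - midpoint)))

def local_tone_map(gray_img, tiles=8):
    """Apply a separate S-curve to each tile of a grayscale image.

    Each tile's own mean brightness becomes the curve's midpoint, so
    dark regions get lifted without the whole frame being pushed to
    daylight levels by one global adjustment.
    """
    img = gray_img.astype(np.float32) / 255.0
    h, w = img.shape
    out = np.empty_like(img)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            y0, x0 = i * th, j * tw
            y1 = h if i == tiles - 1 else y0 + th
            x1 = w if j == tiles - 1 else x0 + tw
            tile = img[y0:y1, x0:x1]
            out[y0:y1, x0:x1] = s_curve(tile, midpoint=tile.mean())
    return (out * 255).astype(np.uint8)
```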

2. GOING BEYOND OPTICAL ZOOM WITH HUAWEI

Like every other smartphone on the market, Huawei’s latest Mate 20 Pro doesn’t come with a true zoom lens. However, the Mate 20 Pro can produce images with greater detail and fewer artefacts than you’d get from simply cropping the original picture, using what Huawei calls Hybrid Zoom technology.

You see, Hybrid Zoom intelligently combines data from the phone’s multiple cameras to perform what’s known as super-resolution. This works similarly to Canon’s Dual Pixel AF, where sets of information from images with slightly different points of view are combined for a better result.

By using multiple lower-resolution samples, the camera is able to create a higher-resolution image. The Mate 20 Pro has the advantage of having multiple lenses at different focal lengths, so each one can help fill in the image information needed at the various “zoom” levels. More information, of course, leads to a better picture with more detail, and Super Resolution processing is applied to enlarge each side of the image by three times, thus improving the resolution. In addition, compression artefacts are identified and suppressed, allowing the camera to obtain clearer details and textures than a conventional digital zoom.
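
A very rough sketch of the multi-frame idea, using OpenCV on single-channel frames: each frame is upscaled, shifted into alignment with the first via phase correlation, and the aligned results are averaged. This is nowhere near Huawei’s actual pipeline, but it shows where the extra detail comes from: the slight sub-pixel offsets between frames.

```python
import cv2
import numpy as np

def naive_super_resolution(frames, scale=3):
    """Toy multi-frame super-resolution on single-channel frames."""
    ref, acc = None, None
    for frame in frames:
        up = cv2.resize(frame.astype(np.float32), None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_CUBIC)
        if ref is None:
            ref, acc = up, up.copy()
            continue
        # Estimate how far this frame is shifted relative to the reference...
        (dx, dy), _ = cv2.phaseCorrelate(ref, up)
        # ...and translate it back onto the reference before accumulating.
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        acc += cv2.warpAffine(up, M, (up.shape[1], up.shape[0]))
    return np.clip(acc / len(frames), 0, 255).astype(np.uint8)
```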

3. TRUE DEPTH WITH APPLE’S PORTRAIT MODE

A shallow depth of field, or the bokeh effect as photographers call it, has always been a bit of a holy grail because it’s generally most pronounced when you use lenses with wide apertures. As you can imagine, these lenses also tend to be larger and more expensive, so bokeh has never been something you’d associate with mobile phone cameras, until now.

Apple’s iPhone 7 Plus introduced a dual-camera system that captures two images of the same scene from slightly different angles and uses them to build a disparity map, giving the camera depth information for everything in the frame. With the iPhone X, Apple added the TrueDepth camera, which uses infrared light to calculate depth.
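
The disparity map itself can be computed with standard stereo block matching once the two views are rectified. A minimal sketch with OpenCV (the calibration and tuning a real phone does are glossed over here):

```python
import cv2

def disparity_map(left_bgr, right_bgr):
    """Estimate disparity from two rectified views of the same scene.

    Pixels that shift more between the two views are closer to the
    camera, so disparity serves as a stand-in for depth.
    """
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    return matcher.compute(left, right)  # fixed-point values, scaled by 16
```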

The iPhone XR does depth capture with a single camera by leveraging the Dual Pixel technique described earlier and tapping into neural-network software to create a highly detailed mask around the subject. This mask marks which parts of the picture belong to the person and which don’t, preserving individual hairs and eyeglasses, so when the blurring effect is applied it doesn’t touch your subject, making for a nice approximation of the bokeh effect.

Apple has also improved Portrait Mode so you can “change” the aperture after the shot, using computational photography to adjust the amount of bokeh to match what you would get if you varied the aperture setting on a physical lens.
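
Putting the last two ideas together, here is a simplified sketch of how a subject mask and a simulated f-number could drive the blur. The f-number-to-radius mapping is invented for illustration; a real Portrait Mode varies the blur per pixel from the depth map and uses a disc-shaped kernel rather than a simple Gaussian.

```python
import cv2
import numpy as np

def portrait_blur(bgr_img, subject_mask, f_number=2.8):
    """Blur the background behind a subject mask.

    subject_mask is a single-channel 0-255 image where 255 marks the
    subject. A lower simulated f-number means a stronger blur.
    """
    radius = max(1, int(round(45.0 / f_number)))  # illustrative mapping only
    ksize = 2 * radius + 1                        # Gaussian kernels must be odd
    blurred = cv2.GaussianBlur(bgr_img, (ksize, ksize), 0)
    alpha = (subject_mask.astype(np.float32) / 255.0)[..., None]
    out = bgr_img.astype(np.float32) * alpha + blurred.astype(np.float32) * (1 - alpha)
    return out.astype(np.uint8)

# e.g. shallow = portrait_blur(img, mask, f_number=1.8)   # strong blur
#      deeper  = portrait_blur(img, mask, f_number=8.0)   # subtle blur
```
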
How To Make The Most Of Your Phone Camera

Phone cameras have come a long way since Sharp Corporation’s J-SH04, the first camera phone, and today the camera is one of the most used features of a smartphone.

(Photo Credits: Raktim)

While phone cameras are used largely by non-professionals for everyday shooting, many professional photographers have also produced stellar work with them. In this article, we give our readers an overview of the smartphone camera and how to use it to its fullest.

(Photo Credits: Raktim)

Learning its limitations

To use any piece of equipment to the fullest, you first have to learn its limitations. Since the camera is just one small feature of a smartphone, it obviously cannot do as much as a point-and-shoot or a DSLR in most cases. It will show its weakness in low-light situations because its small sensor cannot gather as much light. Even though many smartphones advertise high megapixel counts, sometimes as high as 41 megapixels, the final images rarely look half as good when printed on paper, mostly because the sensor is not up to the standard of a dedicated camera.

So a photographer needs to accept that images shot on a smartphone will never print as big or as beautifully as most DSLR images. But the outlook towards photography has changed a great deal: people no longer look for huge prints at exhibitions and are instead on the lookout for quality content. This is where the mobile phone’s camera shines.

(Photo Credits: Raktim)

The smartphone, being a small device, is something people always carry in their pockets, which means that when something needs to be shot, you never feel the lack of a bulky DSLR. Almost all smartphones now have a camera shortcut so you can open the camera without even unlocking the phone, and some even have a shake gesture that launches the camera app.

There are other limitations, though. Focusing can be a difficult job, the limited in-camera controls make it hard to get the exposure just right, and the lack of a viewfinder can be a distraction while framing. Another big issue most photographers face is shutter lag at night, since the phone refocuses between every frame. The limited focal length doesn’t make things easier, and digital zoom degrades image quality even further. Now that we have gone through the most common issues in smartphone photography, let’s look at how to avoid, or at least minimise, them.

Using applications

While most DSLRs will not allow software tweaks (most companies declare that modifying the firmware makes the warranty null and void), smartphones are clearly built around apps. There are hundreds of Android and Windows apps that extend or replace the built-in camera application. Applications like Camera360, Camera FV-5 and Camera JB+ let you tweak the ISO, shutter speed, white balance and more while taking photos. Adding or reducing depth of field is no longer an issue, and effects like fisheye, motion blur and vignetting can be previewed in real time while shooting.

External lenses that clip onto the phone are also available these days, and zoom attachments even make optical zoom possible on a smartphone. More and more exhibitions are being held with small print sizes, and photographers are experimenting with square crops and framing.

Smartphones also come with HDR modes for low-light and night photography, where the camera takes three different exposures and merges them into a composite image, while also dealing with ghosting, misalignment and other artefacts. The HDR+ mode on Google’s Nexus phones, like similar features on other makes and models, even reduces the effect of camera shake. Millions are being invested in perfecting smartphone camera sensors, and more RAM is being fitted for faster processing and writing speeds. Sharing images is also far easier from a smartphone, since you can upload instantly over mobile data, unlike a DSLR, which has no data connection of its own and needs to be hooked up to a smartphone, Wi-Fi or an external device for uploads.
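
The merge step is straightforward to reproduce with OpenCV’s built-in exposure fusion; a minimal sketch, assuming three bracketed shots of the same scene:

```python
import cv2

def merge_exposures(under, normal, over):
    """Align three bracketed exposures and fuse them into one image.

    AlignMTB reduces ghosting caused by hand shake between shots;
    Mertens exposure fusion then blends the best-exposed parts of each
    frame without needing a full HDR-and-tonemap pipeline.
    """
    frames = [under, normal, over]
    cv2.createAlignMTB().process(frames, frames)
    fused = cv2.createMergeMertens().process(frames)  # float image in ~[0, 1]
    return (fused * 255).clip(0, 255).astype("uint8")
```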

In conclusion, the smartphone has become a powerful tool in a photographer’s arsenal: it is always at hand, easy to use and, with the right know-how, apps and accessories, can do almost everything a small point-and-shoot can. It is unclear what the future holds for smartphone cameras, but for the time being, these tiny cameras are a force to be reckoned with.

Raktim Maitra
