An upcoming iOS 13 developer beta adds the Deep Fusion camera effect for the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max. The feature was expected to arrive when the new iPhones launched two weeks ago, but it is still missing from iOS 13.1.2. Deep Fusion processes low- to medium-light images and is similar in purpose to Night Mode.

The tech news website The Verge describes how the Deep Fusion feature works:

1. By the time you press the shutter button, the camera has already grabbed three frames at a fast shutter speed to freeze motion in the shot. When you press the shutter, it takes three additional shots, and then one long exposure to capture detail.

2. Those three regular shots and the long-exposure shot are merged into what Apple calls a “synthetic long”; this is a major difference from Smart HDR.

3. Deep Fusion picks the short-exposure image with the most detail and merges it with the synthetic long exposure; unlike Smart HDR, Deep Fusion merges only these two frames, not more. These two images are also processed for noise differently than in Smart HDR, in a way that is better suited to Deep Fusion.

4. The images are run through four detail-processing steps, pixel by pixel, each tailored to increasing amounts of detail: the sky and walls sit in the lowest band, while skin, hair, fabrics, and so on sit in the highest. This generates a series of weightings for how to blend the two images, taking detail from one and tone, colour, and luminance from the other (see the sketch after this list).

5. The final image is generated.
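
Apple has not published the actual algorithm, so the following Swift sketch is only a rough illustration of the kind of per-pixel weighted blend described in steps 3 and 4: it merges a high-detail short exposure with a synthetic long exposure using a weight map derived from a crude detail measure. The types, names, and weighting scheme here are hypothetical and not Apple's implementation.

```swift
import Foundation

// Hypothetical sketch: blend a high-detail short exposure with a
// "synthetic long" exposure using a per-pixel weight map.
// Frames are modelled as flat arrays of luminance values in 0...1.
struct Frame {
    let width: Int
    let height: Int
    var pixels: [Float]   // row-major, count == width * height
}

// Crude "detail" measure: horizontal gradient magnitude per pixel.
// The real pipeline uses far more sophisticated, learned weightings.
func detailWeights(for frame: Frame) -> [Float] {
    var weights = [Float](repeating: 0, count: frame.pixels.count)
    for y in 0..<frame.height {
        for x in 1..<frame.width {
            let i = y * frame.width + x
            let gradient = abs(frame.pixels[i] - frame.pixels[i - 1])
            weights[i] = min(1, gradient * 4)   // arbitrary scaling
        }
    }
    return weights
}

// Per-pixel blend: take detail from the short frame where the weight is
// high, and tone/luminance from the synthetic long frame elsewhere.
func fuse(short: Frame, syntheticLong: Frame) -> Frame {
    let weights = detailWeights(for: short)
    var result = syntheticLong
    for i in 0..<result.pixels.count {
        let w = weights[i]
        result.pixels[i] = w * short.pixels[i] + (1 - w) * syntheticLong.pixels[i]
    }
    return result
}
```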

Deep Fusion is a hidden feature: there is no indication in the Camera app when it is running. Apple also responded to The Verge’s article by saying that Deep Fusion is simply part of the camera system’s processing. Users can only tell that an image was enhanced with Deep Fusion by its quality, which should be noticeably better than the unprocessed original.

Images captured via Deep Fusion come with significant image-quality improvements. For example, the feature captures a photo at a negative exposure value, which is a very dark image, and machine learning uses its sharpness to add detail to the final result. After that initial image, three more standard images are taken on the iPhone’s camera, and the four frames are blended into a crisp, high-quality photo. In effect, two 12MP images, 24 million pixels of data in total, are combined into a single 12MP image.
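
Third-party apps cannot trigger or control Deep Fusion, but the idea of capturing a deliberately underexposed frame can be loosely approximated with AVFoundation's exposure-bias API. The sketch below is an analogy only; it assumes an already configured AVCaptureDevice, and the bias of -2 stops is an arbitrary choice.

```swift
import AVFoundation

// Illustration only: apply a negative exposure bias so the next capture
// is deliberately darker, similar in spirit to the underexposed frame
// described above. Deep Fusion itself is not controllable from apps.
func applyNegativeExposureBias(to device: AVCaptureDevice, stops: Float = -2.0) {
    do {
        try device.lockForConfiguration()
        // Clamp the requested bias to the range the hardware supports.
        let bias = max(device.minExposureTargetBias,
                       min(stops, device.maxExposureTargetBias))
        device.setExposureTargetBias(bias, completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock camera for configuration: \(error)")
    }
}
```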

In addition, every pixel is adjusted to produce the best combination for the overall image. Machine learning examines the frames to ensure they match as closely as possible, and it sorts out how textures and colours should be enhanced by frequency: smooth, blue-sky-like elements sit in the lowest frequency band, skin tones in the medium bands, and the most detailed objects in the highest band.
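
As a loose illustration of this frequency-band idea, the hypothetical Swift sketch below splits a row of luminance values into a smooth low-frequency base and a high-frequency detail residual using a simple box blur, then applies a different gain to each band. The filter and the band gains are invented for illustration; Apple's actual processing is learned and far more sophisticated.

```swift
import Foundation

// Hypothetical sketch of frequency-band weighting: split a row of
// luminance values into a smooth (low-frequency) base and a detail
// (high-frequency) residual, then enhance each band differently.
func enhanceByFrequencyBand(_ row: [Float],
                            lowBandGain: Float = 1.0,   // e.g. sky, walls
                            highBandGain: Float = 1.3   // e.g. hair, fabric
) -> [Float] {
    let radius = 2
    var output = [Float](repeating: 0, count: row.count)
    for i in row.indices {
        // Box blur as a crude low-pass filter.
        let lo = max(0, i - radius)
        let hi = min(row.count - 1, i + radius)
        let base = row[lo...hi].reduce(0, +) / Float(hi - lo + 1)
        let detail = row[i] - base
        // Recombine the bands with per-band gains.
        output[i] = lowBandGain * base + highBandGain * detail
    }
    return output
}
```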
