OM System OM-1 Shows Computational Tricks Are The Future Of Mirrorless Cameras

I recently tested the OM System OM-1. This is serious kit for serious photographers, with a sturdy DSLR-style build, impressive stacked sensor and excellent lenses. It’s also a flagship camera that will set you back $2,200 / £2,000 / AU$3,300 body-only.

Yet something quite different made headlines about this new camera – its compute modes. Surely computational photography is for the smaller sensors of supposedly inferior smartphones? Not so.

The rear screen of the OM System OM-1 camera lying on a wooden table

The OM System OM-1 has a dedicated menu for its computational modes, which are like advanced versions of those you’ll find on smartphones. (Image credit: Future)

Not only do computational modes now command a dedicated space in the OM-1’s menu, but the handling and output of these modes is constantly improving. This got me thinking – is this a nod to the future of Micro Four Thirds, or of mirrorless cameras in general?

So join my train of thought as I unpack what computational photography is, how it’s applied today, and what we might expect with “proper” cameras.

What is computational photography again?

We’re all familiar with Portrait mode, which has long been the poster child for computational smartphone photography. Without it, my Google Pixel 4a would be unable to achieve its flattering portraits with blurred backgrounds.

It works by using edge detection, and sometimes depth mapping, to distinguish a subject from its background, then computationally applying a uniform blur to that environment to make your subject stand out. Although fallible, the effect of Portrait mode is magical and brings the power of a big camera to our pockets.
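The subject-plus-blur compositing described above can be sketched in a few lines. This is a toy illustration, not how any phone actually implements it: it assumes a grayscale image, a ready-made subject mask (real phones derive this from segmentation and depth data), and uses a simple box blur in place of a proper lens-like bokeh.

```python
import numpy as np

def portrait_blur(image, subject_mask, kernel=5):
    """Blur the background of a grayscale image while keeping the
    masked subject sharp -- a toy version of smartphone Portrait mode.

    image:        2-D float array (H x W)
    subject_mask: 2-D bool array, True where the subject is
    kernel:       size of the uniform box blur applied to the background
    """
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image)
    # Simple box blur: average each pixel's kernel x kernel neighbourhood.
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= kernel * kernel
    # Composite: sharp subject over blurred background.
    return np.where(subject_mask, image, blurred)
```

The quality of the result hinges almost entirely on the mask – which is exactly why phone Portrait modes occasionally chew up stray hairs and glasses frames.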

Two bluebells in a field

A photo of bluebells taken on my Google Pixel 4a. (Image credit: Future)

If I upgraded my Pixel 4a to Google’s current crop, I’d also get Motion mode, which can add motion blur to an image – much like using an ND (neutral density) filter for long-exposure landscapes. You can also work the other way around, removing unwanted motion blur. Smart stuff, and it’s getting smarter.

But while we tend to associate computational photography with smartphones, Olympus (now OM System) was actually a pioneer, expanding the application of this technology in its Micro Four Thirds cameras. And the best example of this is the new OM System OM-1.

The OM System OM-1 camera on a wooden table

(Image credit: Future)

Computational photography allows a camera to do things that would otherwise be impossible due to factors such as sensor size limitations. And while the OM System OM-1’s Micro Four Thirds sensor is much larger than what you’ll find in a smartphone like the Google Pixel 6, it still looks up to the even larger full-frame format.

The OM-1’s 20MP resolution pales in comparison to the full-frame Sony A7R IV’s 61MP. So, to increase its image size, the OM-1 can use its “High Res Shot” mode to combine multiple images into one for a final output of up to 80MP.
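The idea behind sensor-shift high-resolution modes can be sketched as interleaving several sub-pixel-shifted captures into a finer grid. This is an idealized illustration only – it assumes perfect half-pixel shifts and a static scene, whereas the OM-1’s actual High Res Shot pipeline is proprietary and also handles demosaicing and alignment:

```python
import numpy as np

def pixel_shift_merge(frames):
    """Interleave four half-pixel-shifted frames into a 2x-resolution grid.

    frames: dict keyed by (row_shift, col_shift) in {0, 1}, each an
    H x W array captured with the sensor shifted by half a pixel.
    An idealized sketch of sensor-shift 'High Res Shot' modes; real
    implementations also demosaic, align, and motion-check the frames.
    """
    h, w = frames[(0, 0)].shape
    out = np.zeros((2 * h, 2 * w))
    for (dy, dx), frame in frames.items():
        out[dy::2, dx::2] = frame  # each shift fills its own sub-grid
    return out
```

Four 20MP captures arranged this way give an 80MP grid – which matches the ratio between the OM-1’s native and maximum High Res Shot output.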

Phones use multiple lenses, software and artificial intelligence to apply computational photography, while cameras like the OM-1 typically use multi-shot modes, combining several quick images into one. Same theory, but with nuanced methods and objectives.

A light painting shot on the OM System OM-1

A light painting made with the OM System OM-1 by photographer Hannu Huhtamo. (Image credit: OM Digital Solutions)

There’s also a distinction in how computational photography is applied. A smartphone integrates computational photography like HDR into its camera operation automatically (or at least puts it a tap away), whereas in a camera, computational modes feel somewhat separate from general operation – something you have to actively choose.

What’s consistent between smartphone technology and camera technology is that hardware improvements have slowed down and the most exciting developments are in the world of computing and the power behind it. Let’s look at a few more examples.

Peak Olympus

The OM-1 now has a dedicated menu for computational modes, highlighting the growing importance of this type of photography in mirrorless cameras.

The list of modes includes High Res Shot, Live ND, HDR, Focus Stacking and Multiple Exposure. While there aren’t any new computational modes brought to the table this time around, the handling and performance of these in-camera tricks have really improved.

The back screen of the OM System OM-1 showing its computational modes

(Image credit: Future)

Want that motion blur effect? Ditch your ND filters and try Live ND, which can now simulate up to six stops of light reduction. Coupled with the OM-1’s incredible image stabilization, you won’t even need a tripod to keep what’s left in the scene nice and sharp.
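The principle behind a Live ND-style mode can be sketched as frame averaging: blend a burst of short exposures so that static areas stay sharp while anything moving smears, just as it would behind a physical ND filter. A minimal sketch of the idea – not the OM-1’s actual implementation:

```python
import math
import numpy as np

def live_nd(frames):
    """Average a burst of short exposures to simulate one long exposure.

    Averaging 2**n frames approximates roughly n stops of simulated ND:
    static areas stay sharp while moving areas blur. This is the idea
    behind (though not necessarily the implementation of) in-camera
    Live ND modes.
    """
    stacked = np.stack(frames)
    stops = math.log2(len(frames))  # e.g. 64 frames ~ six stops
    return stacked.mean(axis=0), stops
```

The averaging also suppresses noise, which is one reason multi-shot modes punch above the sensor’s single-frame weight.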

Disappointed that you weren’t capturing detail in the highlights and shadows of your photo? Choose HDR to boost dynamic range by up to ±2EV. You’ll even get a dynamic range boost from the High Res Shot mode, which is primarily designed to increase image size – all while getting crisp 50MP handheld shots.
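HDR modes like this are built on exposure bracketing: capture the same scene at several exposure offsets, then favour each frame where it is best exposed. The sketch below is a textbook-style toy merge, not OM System’s pipeline – it assumes pre-aligned frames with values in [0, 1] and skips tone mapping:

```python
import numpy as np

def hdr_merge(exposures, evs):
    """Merge bracketed exposures into one extended-dynamic-range image.

    exposures: list of aligned frames with values in [0, 1]
    evs:       exposure offset of each frame in stops, e.g. [-2, 0, +2]

    Each frame is normalized back to a common exposure, then pixels are
    weighted by how well exposed they are (mid-tones trusted most, clipped
    highlights and crushed shadows trusted least).
    """
    num = np.zeros_like(exposures[0])
    den = np.zeros_like(exposures[0])
    for frame, ev in zip(exposures, evs):
        radiance = frame / (2.0 ** ev)          # undo the exposure offset
        weight = frame * (1.0 - frame) + 1e-6   # peaks at mid-grey
        num += weight * radiance
        den += weight
    return num / den
```

The ±2EV figure in the paragraph above corresponds to how far apart the camera can push those bracketed offsets.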

A bus driving down a street

The OM-1’s Live ND mode is ideal for handheld shooting where you want to create a sense of movement without the need for a tripod. (Image credit: Future)

What’s behind the improvements? Computing power. The OM-1 has a processor three times more powerful than the E-M1 III’s, plus a stacked sensor that doubles readout speed. Computational-mode images are processed at least twice as fast.

Personally, I don’t mind waiting a bit for that High Res Shot image to appear on the camera screen. What’s more important to me is flexibility at the capture stage and its impact on the final image. At present, there are still real limitations to where computational photography can be applied.

Computational dreams

Merging multiple 20MP images into one takes time at the capture stage, which means High Res Shot images currently aren’t possible when there’s motion in your scene, due to ghosting. That could be movement like trees swaying in the wind or a person walking.
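Ghosting arises because a pixel that changed between frames gets contributions from more than one moment in time. Detecting those pixels is conceptually simple – a sketch of the idea, far cruder than what any camera actually ships:

```python
import numpy as np

def ghost_mask(frames, threshold=0.1):
    """Flag pixels that moved between frames of a multi-shot burst.

    Returns a boolean mask that is True wherever any frame deviates from
    the per-pixel median by more than `threshold` -- the areas that would
    ghost if the frames were naively merged. Real cameras use far more
    sophisticated motion rejection than this toy per-pixel check.
    """
    stacked = np.stack(frames)
    median = np.median(stacked, axis=0)
    return (np.abs(stacked - median) > threshold).any(axis=0)
```

A merge could then fall back to a single frame in the flagged areas – at the cost of resolution exactly where the scene was moving, which is the trade-off the article is describing.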

Could we see even faster sensor readout, and a processor capable of powering multi-shot images with capture speeds that eliminate ghosting altogether? If so, resolution limits might be a thing of the past.

A man in a hat at a party

This shot was taken with the OM-1 and the bright M.Zuiko 40-150mm f/2.8 wide open, but computational tricks might help slower lenses create similar effects. (Image credit: Future)

How about the same theory applied to the OM System’s handling of HDR? Could HDR be applied automatically instead (with an opt-out), much like on a smartphone? The OM-1 is as close to a smartphone experience as you get in a DSLR-style camera, but the computational element is still optional.

And even though computational photography is applied differently in OM System cameras than it is in smartphones – for example, Focus Stacking increases depth of field for disciplines like macro photography – what about new modes?
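The Focus Stacking mode mentioned above can be sketched as a per-pixel "pick the sharpest frame" rule across a focus bracket. A minimal illustration of the principle, assuming pre-aligned grayscale frames – real pipelines also align the frames and blend seams:

```python
import numpy as np

def _local_contrast(img):
    """High-pass response: |pixel - 3x3 neighbourhood mean|."""
    pad = np.pad(img, 1, mode="edge")
    mean = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    return np.abs(img - mean)

def focus_stack(frames):
    """Combine a focus bracket into one extended depth-of-field image.

    For each pixel, take the value from the frame where local contrast
    is highest -- i.e. where that part of the scene is sharpest.
    """
    stacked = np.stack(frames)                      # (N, H, W)
    sharpness = np.stack([_local_contrast(f) for f in frames])
    best = sharpness.argmax(axis=0)                 # sharpest frame index
    return np.take_along_axis(stacked, best[None], axis=0)[0]
```

Because each frame contributes only where it is sharpest, the stack extends depth of field far beyond what the lens can deliver in a single exposure.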

Could a new portrait mode be applied to slower OM System lenses that are less able to blur backgrounds? Or new elements for in-camera editing that include post-capture blur effects, à la Google?

Star trails above a lighthouse

(Image credit: OM System)

How about a Night mode suite – what would be the recipe for clearer, crisper images after sunset? Could astrophotography modes be part of it? Automatic star trails, anyone?

The capable OM-1 outshines its predecessors, while other existing cameras like the Nikon Z9 are even faster. But hopefully in future generations of OM System cameras we’ll see even greater power, applied computationally.

If so, sensor size may well become irrelevant. The whole experience at the handling and capture stage could be transformed. My train of thought has several other stops in all directions, and I’ll leave it up to you to choose your route.