It's unclear exactly what Sony hopes to achieve, but here are some educated guesses for cameras
During a corporate strategy session, Sony was more than happy to talk about their image sensors, which dominate many industries, including smartphones and automotive. In fact, roughly 80% of their CMOS sensor sales go to smartphones, not traditional cameras, which are obviously a niche product at this point thanks to those same smartphones. One interesting tidbit from the presentation was a plan to bring AI to their sensors. Just last week, Sony announced plans to work with Microsoft on AI and semiconductors, which fits perfectly with this reveal.
Sony was mostly mum on what exactly this means, but when it comes to full-frame cameras, it's not hard to see what the future holds with such an integration, assuming the functionality actually comes to cameras and isn't reserved for smartphones and cars. Before we get into that, we only need to look at Google's efforts to see the benefits of AI in photography. The Pixel phones, especially the recent crop, have some fantastic abilities, like depth-effect photography and Night Sight, which turns dark photos into vibrant pictures. However, unlike Apple, which renders its depth effect live and on-device thanks to its hardware prowess, Google relies on you taking the photo first, then processing it and applying these 'effects' after the fact rather than live.
While less than ideal, this approach actually translates perfectly to Alpha cameras. Imagine taking all sorts of photos in various environments and, alongside Sony's already stellar sensors, having some variation of AI in post giving you improved shots at night, or better recognition of objects versus humans and animals for superior focusing. From there, everything from color to video could be improved.
Now, a lot of this is a pipe dream, if only because a camera is so vastly different from a smartphone that's always connected to the cloud and can tap into an AI that isn't local. Would a camera need Wi-Fi or 4G (or, more likely, 5G by then) connectivity to achieve this? Or would it be done in post via an app? Those are, I'm sure, just some of the challenges Sony is working through, and they could determine whether such a marriage of AI and sensors stays limited to certain product segments like smartphones and cars.
Still, the possibilities are exciting to think about. For everything that's going wrong in mobile at Sony, this shows the hardware strength the company has and what it can achieve when that hardware is paired with powerful software. From Sony, during the corporate strategy session:
- We expect to leverage the superior technology Sony has developed in this business to maintain our industry-leading position going forward.
- Approximately 80% of CMOS sensor sales are to smartphones. Although this market has matured, demand for sensors continues to grow due to adoption of multiple sensors and larger sized sensors in smartphones.
- Demand for Time-of-Flight sensors in smartphones is also expected to increase.
- Although investment in greater production capacity over the next few years is necessary, CMOS sensor production capacity does not become obsolete, resulting in high return on investment in the long term.