New Google Pixel 10 & 11 Leak: AI-Powered Video Magic, Ultra Low Light Video, and a 100x Zoom Surprise

Image: Google Pixel 9 Pro (Source: Google)

Recent leaks from Google’s gChips division reveal exciting new features for the upcoming Pixel 10 and Pixel 11. From significant camera upgrades to improved AI capabilities, the next generation of Pixel phones is shaping up to be a major leap forward.

[Read More: Pixel 9 Pro XL Delivers Poor Macro Photography, Gemini Said?]

AI Takes Center Stage

The Pixel 10 and Pixel 11 will focus heavily on artificial intelligence, reflecting Google’s continuing drive to enhance user experiences with smart, adaptive features. Leveraging the improved on-device AI hardware, these phones will introduce "Video Generative ML", a feature designed to enhance video editing directly within the Photos app. For example, users could effortlessly create professional-quality highlight reels of their vacations, with the AI automatically identifying key moments and applying suitable transitions and effects. This editing tool will use generative AI to assist users, potentially allowing smarter and more context-aware edits, and the function could also extend to YouTube Shorts for more engaging content creation.
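
The leak does not explain how Video Generative ML would pick its "key moments", but the underlying idea is usually a model that scores footage and keeps the best clips. The sketch below is purely illustrative, with a hypothetical per-frame score as input; none of these names correspond to a Google API.

```python
# Hypothetical sketch: choose highlight clips from per-frame "interest" scores.
# The scores would come from some learned model; this only shows the selection step.
from dataclasses import dataclass

@dataclass
class Moment:
    start_s: float   # clip start time in seconds
    end_s: float     # clip end time in seconds
    score: float     # average model-assigned interest score

def pick_highlights(frame_scores, fps=30.0, clip_len_s=3.0, top_k=5):
    """Group per-frame scores into fixed-length clips and keep the best ones."""
    frames_per_clip = int(clip_len_s * fps)
    moments = []
    for i in range(0, len(frame_scores), frames_per_clip):
        window = frame_scores[i:i + frames_per_clip]
        if not window:
            continue
        moments.append(Moment(i / fps, (i + len(window)) / fps,
                              sum(window) / len(window)))
    # Keep the top_k highest-scoring clips, returned in chronological order.
    best = sorted(moments, key=lambda m: m.score, reverse=True)[:top_k]
    return sorted(best, key=lambda m: m.start_s)

# Usage: in practice frame_scores would be produced by an ML model over the video.
highlights = pick_highlights([0.1, 0.9, 0.8, 0.2, 0.95, 0.7] * 100)
```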

Other notable AI additions include "Speak-to-Tweak", a natural language-based photo editing tool that allows users to apply edits verbally, and "Sketch-to-Image", an innovative feature that will transform basic sketches into polished images. These new additions align with Google’s ongoing Gemini project, which is also exploring sketch-based creative tools.
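
To make "Speak-to-Tweak" concrete: a voice command has to be transcribed and then mapped to concrete edit operations. The toy sketch below uses simple keyword matching in place of the speech recognition and language understanding a real product would use; nothing here reflects Google's actual implementation.

```python
# Toy "speak to tweak": map a transcribed voice command to edit operations.
# A real product would use on-device speech recognition plus a language model;
# this keyword table is only a stand-in to show the shape of the pipeline.
EDIT_KEYWORDS = {
    "brighter": ("brightness", +0.2),
    "darker": ("brightness", -0.2),
    "warmer": ("temperature", +300),
    "cooler": ("temperature", -300),
    "more contrast": ("contrast", +0.15),
}

def parse_command(transcript: str):
    """Return the list of (parameter, delta) edits implied by the transcript."""
    transcript = transcript.lower()
    return [op for phrase, op in EDIT_KEYWORDS.items() if phrase in transcript]

print(parse_command("Make it a bit brighter and warmer"))
# [('brightness', 0.2), ('temperature', 300)]
```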

Google is also working on a mysterious "Magic Mirror" feature, although its purpose remains unclear from the leaks. Furthermore, the Tensor G5 is set to bring the ability to run Stable Diffusion-based models directly on-device, a significant upgrade from the previous reliance on server-based processing for advanced creative projects in the Pixel Studio app.
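
For context, "on-device" here simply means the diffusion model runs locally rather than on Google's servers. The sketch below shows what local Stable Diffusion inference looks like with the open-source diffusers library on a PC with a GPU; it is an analogy for the concept, not the Pixel Studio or Tensor G5 code path, which has not been published.

```python
# Illustrative local (no-server) Stable Diffusion run using the open-source
# diffusers library. This stands in for on-device generation conceptually;
# the actual Pixel Studio integration is not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # an openly available SD 1.5 checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # on a phone this step would target an NPU/TPU instead

image = pipe("a watercolor sketch of a mountain lake at dawn",
             num_inference_steps=25).images[0]
image.save("generated.png")
```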

[Read More: Machine Learning and Deep Learning]

Tensor G5: A New Level of Power and Efficiency

The Tensor G5 chip powering the Pixel 10 and 11 represents a significant advancement over its predecessor. While the Pixel 9’s Tensor G4 featured an upgraded CPU cluster, the G5 brings even more changes, including a GPU switch from Arm Mali to Imagination Technologies' DXT-48-1536, which runs at 1.1 GHz and adds ray tracing support, a first for Google chips. The GPU also gains virtualization support, allowing accelerated graphics inside virtual environments running on the device.

The Tensor G5 retains a single Arm Cortex-X4 primary core but enhances the mid-cluster with five Cortex-A725 cores, while the little cluster now has two Cortex-A520 cores. This restructuring aims to balance performance and efficiency. The chip’s improved TPU is 14% faster in real-world performance compared to the Tensor G4 and adds new features like small embedded RISC-V cores, enabling more flexibility for developers. Built on TSMC's 3 nm-class N3E process node, the G5 is a slightly larger chip compared to Apple’s A18 Pro, suggesting more computational resources for AI-driven enhancements.

[Read More: TSMC's 2nm Breakthrough Powers the Next Wave of AI and Mobile Tech]

Camera Upgrades: 100x Zoom

As expected from Google, the cameras of the Pixel 10 and 11 are getting major upgrades. One of the headline features coming to the Pixel 11 is a 100x zoom capability achieved through machine learning. AI zoom can produce better image quality than traditional interpolation-based digital zoom, but the open question is whether it can reveal more detail than optical zoom, and we are eager to see where its limits lie. This level of zoom, available for both photos and videos, will be supported by a "next-gen" telephoto camera, suggesting substantial hardware improvements as well. For comparison, the Samsung Galaxy S21 Ultra, Huawei Mate 40 Pro+, Huawei Mate X2, and Huawei P40 Pro+ currently offer the longest true optical zoom on the market, reaching up to 240mm, though at resolutions of around 10MP, while the Oppo Find X7 Ultra pairs a shorter 135mm zoom with a 50MP sensor, giving significantly higher resolution for zoomed-in images. It will be interesting to see how Google’s AI-driven zoom competes with this existing technology.
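
To make the distinction concrete: conventional digital zoom crops the centre of the frame and interpolates it back up to full resolution, while ML zoom replaces that upscaling step with a learned super-resolution model. Below is a rough sketch of the interpolation baseline using Pillow, purely for illustration; it is not Google's pipeline.

```python
# Baseline "digital zoom": crop the centre of the frame and upscale it with
# bicubic interpolation. An ML zoom pipeline would swap the resize() call for
# a learned super-resolution model that reconstructs plausible fine detail.
from PIL import Image

def digital_zoom(path, factor=4):
    img = Image.open(path)
    w, h = img.size
    # Crop the central 1/factor region of the frame.
    cw, ch = w // factor, h // factor
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    # Interpolate back up to the original resolution; this is where detail is lost.
    return crop.resize((w, h), Image.Resampling.BICUBIC)

zoomed = digital_zoom("photo.jpg", factor=4)
zoomed.save("zoom_4x.jpg")
```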

[Read More: Samsung Galaxy S24 FE Review: ProVisual Engine AI and Triple-Camera System Redefine Mobile Photography]

Cinematic Blur and Ultra Low Light Video

In addition to better zoom, Google is enhancing its popular Cinematic Blur feature, now supporting 4K video at 30fps, along with a new "video relight" function to adjust lighting conditions in recorded videos. These enhancements are powered by a new Cinematic Rendering Engine within the Tensor G5 chip, which will also make video recording with blur more power-efficient.

Arguably the most exciting development in the Pixel 11 is the introduction of "Ultra Low Light video", also known as "Night Sight video". Unlike the cloud-reliant version of the feature seen on earlier devices, this iteration will work entirely on-device. Designed for lighting conditions as low as 5-10 lux, comparable to a dimly lit room or dusk, it aims to revolutionize low-light video recording with far better quality than its predecessors. Market-leading AI noise-reduction software such as Topaz Denoise AI sets a high bar, and we look forward to seeing how Google's Ultra Low Light video compares with existing solutions.
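
The leak does not describe Google's pipeline, but the classic building block of low-light video is temporal denoising: averaging several aligned frames so that random sensor noise cancels out while the scene stays intact. Here is a minimal sketch of that idea, assuming the frames are already aligned (which, in practice, is the hard part).

```python
# Toy temporal denoiser: average a sliding window of already aligned frames.
# Averaging N frames cuts random noise by roughly sqrt(N); real pipelines also
# align frames and handle motion, which this sketch deliberately skips.
import numpy as np

def temporal_denoise(frames: np.ndarray, window: int = 5) -> np.ndarray:
    """frames: array of shape (num_frames, H, W, 3), dtype uint8."""
    frames = frames.astype(np.float32)
    out = np.empty_like(frames)
    for i in range(len(frames)):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        out[i] = frames[lo:hi].mean(axis=0)
    return out.clip(0, 255).astype(np.uint8)

# Usage with small synthetic noisy footage:
noisy = np.random.randint(0, 40, size=(30, 72, 128, 3), dtype=np.uint8)
clean = temporal_denoise(noisy, window=7)
```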

[Read More: Topaz Labs Releases Update to Photo AI: Better Text Restoration and More!]

Health and Ambient Features with Tensor G6

Beyond photography, Google is investing in features that improve overall user health and wellbeing. Thanks to a new "nanoTPU" embedded in the low-power segment of the Tensor G6 chip, the Pixel lineup will offer always-on, ML-based features with a heavy focus on health monitoring. These include detection of agonal breathing, coughing, snoring and sneezing, along with sleep apnea monitoring, fall detection, gait analysis, and sleep stage tracking. Google is also integrating emergency sound detection, adding an extra layer of safety for users.
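
None of these on-device models are public, but conceptually an always-on detector is a small classifier run continuously over short audio windows on a low-power core. The sketch below is hypothetical: classify_window() is a placeholder for such a model, not a real Pixel or Android API.

```python
# Conceptual always-on audio event loop. classify_window() stands in for a
# tiny on-device model (the kind the leaked "nanoTPU" would run); neither the
# model nor the API shown here is real.
import numpy as np

def classify_window(audio: np.ndarray) -> str:
    """Placeholder classifier: a real device would run a small neural net here."""
    energy = float(np.mean(audio ** 2))
    return "none" if energy < 0.01 else "cough"   # dummy decision rule

def monitor(stream, window_s=1.0, sample_rate=16000):
    """Consume an iterable of audio chunks and yield detected events."""
    samples_per_window = int(window_s * sample_rate)
    buffer = np.empty(0, dtype=np.float32)
    for chunk in stream:
        buffer = np.concatenate([buffer, chunk])
        while len(buffer) >= samples_per_window:
            window, buffer = buffer[:samples_per_window], buffer[samples_per_window:]
            event = classify_window(window)
            if event != "none":
                yield event

# Usage with synthetic microphone chunks:
chunks = [np.random.randn(16000).astype(np.float32) * 0.2 for _ in range(3)]
print(list(monitor(chunks)))
```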

Activity tracking is also seeing improvements, particularly for runners. The new "Running ML" feature will provide advanced metrics, including "coachable pace" and "balance & oscillation" analysis, to help improve performance.
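
As a rough illustration of the arithmetic behind such metrics, pace can be derived from timestamped GPS distance, and vertical oscillation from the bounce visible in accelerometer data. The sketch below is a simplification for illustration only, not Google's Running ML algorithm.

```python
# Simplified running metrics: average pace from GPS segment distances and
# vertical oscillation from gravity-removed vertical acceleration. Illustrative only.
import numpy as np

def pace_min_per_km(distances_m: np.ndarray, elapsed_s: float) -> float:
    """distances_m: per-segment distances in metres; returns minutes per kilometre."""
    total_km = distances_m.sum() / 1000.0
    return (elapsed_s / 60.0) / total_km

def vertical_oscillation_cm(vert_accel: np.ndarray, sample_rate: float) -> float:
    """Double-integrate vertical acceleration (m/s^2) to displacement and report
    the peak-to-peak bounce in centimetres. Very rough without filtering."""
    dt = 1.0 / sample_rate
    velocity = np.cumsum(vert_accel) * dt
    displacement = np.cumsum(velocity) * dt
    return float((displacement.max() - displacement.min()) * 100.0)

print(pace_min_per_km(np.array([500.0, 500.0]), elapsed_s=300.0))  # 5.0 min/km
```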

[Read More: Pixel Watch 3 Introduces Lifesaving Loss of Pulse Detection Feature]

User Sentiment: Zoom and Ultra Low Light Lead the Polls

User excitement for the upcoming Pixel features is varied, but the new zoom capabilities and Ultra Low Light video stand out as top favorites. According to a poll conducted by Android Authority, 100x zoom garnered 29% of the vote, closely followed by Ultra Low Light video at 28%. The improved 4K 60fps HDR video recording and health-related features also attracted attention, highlighting Google’s balanced approach between enhancing user creativity and focusing on wellbeing.

[Read More: Canon EOS R5 II: AI is Not Only in Smartphones!]

Source: Android Authority, Wikipedia
