Bringing large generative AI models like Stable Diffusion to smartphones is the next step in bringing generative tools to a wider audience.
Typically, such generative neural networks require a lot of computing power to run, and most applications that offer such services on mobile devices perform the processing in the cloud. Qualcomm, however, has demonstrated that its version of Stable Diffusion can run entirely on a smartphone.
Qualcomm showed off Stable Diffusion version 1.5, which generated a 512 x 512 pixel image in 15 seconds. By comparison, generating images on a decent laptop would take a few minutes, so the phone’s result is impressive.
It’s not known which smartphone was used for the demonstration, but it was equipped with the flagship Snapdragon 8 Gen 2 chipset (released last November), which includes the AI-focused Hexagon processor. The company’s engineers also performed various software optimizations to ensure the program runs as efficiently as possible.
In fact, other experimenters have already tried to run Stable Diffusion on Android. Developer Ivon Huang blogged about running the generative neural network on a Sony Xperia 5 II with a Qualcomm Snapdragon 865 processor and 8GB of RAM. As Huang noted, though, it took an hour to create a 512 x 512 image with this setup.
“It takes 1 hour to create a 512×512 image on an Android phone, where it can be done in seconds on a PC. The face in the generated image is like ‘WHY? Why did you spend time on doing this?’” pic.twitter.com/hpzC88LmXU — Ivon Huang (@Ivon852) February 19, 2023
Back in December, Apple released the optimizations needed to run Stable Diffusion locally on its own Core ML machine learning platform. Journalists at The Verge tried running Stable Diffusion 1.5 on an iPhone 13 through the Core ML-accelerated Draw Things app. With this setup, it took about a minute to create a 512 x 512 image.
So Qualcomm wins on speed in both comparisons, but with caveats: the demo requires the latest hardware and an optimization package that is not publicly available.
Running large AI models on mobile devices offers several advantages over cloud computing: convenience (no network connection required), cost (developers don’t have to pass server bills on to users), and privacy (you aren’t sending data to someone else’s computer).
Source: The Verge