Gemini takes the wheel — literally
Google’s new task automation feature in Gemini can now open and navigate apps on a phone to complete multi-step tasks like ordering food or booking rides. The Verge’s hands-on testing on the Pixel 10 Pro and Galaxy S26 Ultra shows an assistant that actually gets things done rather than merely suggesting actions. That in itself is a meaningful milestone: a functioning, end-to-end AI workflow running on a consumer handset.
Right now the feature is limited to a small set of partners (a few food delivery and rideshare services) and remains in beta. The experience can be slow and clunky: tasks sometimes take longer than doing them manually, and occasional hiccups occur. But even with those limits, the assistant successfully navigates app interfaces, fills in details, and completes transactions, which is impressive for an early release.
Why this matters:
- It demonstrates practical automation of real-world app tasks rather than theoretical demos.
- Users could eventually offload time-consuming, repetitive flows (ordering, booking, scheduling) to the assistant.
- Progress here will push improvements in reliability, speed, and broader app integrations.
The current rollout is a clear first step: functionality is narrow, and Google will need to expand its partner list and polish responsiveness. Still, this is a tangible preview of assistants that don’t just advise but act on our behalf. For anyone excited about AI making everyday life easier, Gemini’s task automation is an encouraging and concrete glimpse of what’s coming.