Gemini 3: The Good, The Bad, And The (Nano) Bananas

I’ve used Gemini 3 for a day now. I’m impressed. And yet, it feels like the technology has entered a stage of refinement rather than transformation. It’s a fantastic model, but the differences between it and the competition are becoming harder to spot.

This parity highlights a crucial shift: we are moving from an era defined by raw intelligence to one defined by the harness. It is a clear demonstration that the model is only half the equation; the other half is the product ecosystem built on top of it.

Google Antigravity

The Good

The Agent Manager interface is a breath of fresh air. While the industry buzzes about leveraging multiple agents to scale output, I’ve found that the true limiting factor isn't the AI—it’s human attention. Non-trivial tasks inevitably require oversight, "hand-holding," and interaction. Consequently, user interfaces like Antigravity that streamline context switching aren’t just nice-to-have; they are invaluable. (A quick shoutout to Vibe Kanban here as well—more on that in a later post.) Additionally, Gemini’s ability to natively navigate a browser via extension is a massive help during development.

The Bad

Unfortunately, stability is a major issue. It "poops the bed" constantly and fails to pick up where it left off when connections drop, forcing me to handhold it far too often. Frankly, in its current state, it doesn’t boost my productivity. The usage limits are also absurdly low; even as an "ultra" subscriber, I can barely squeeze two hours of work out of Gemini 3 Pro. I expect these wrinkles to be ironed out over time, but right now, they are painful.

The Bananas

It natively integrates "Nano Banana" for image generation. It’s a nice feature to have, certainly, but the jury is still out on its practical utility beyond novelty.

The Shifting Landscape

Google is clearly pushing the frontier, not just in the models themselves but on the product front with tools like AI Studio and Antigravity. That said, I still believe OpenAI holds the crown when it comes to polishing and productizing their models.

What Hasn't Changed

Despite the hype, some things remain the same. Context is still king; the utility you get is heavily dependent on the quality of instructions and background context you provide. We’re still navigating the "jagged frontier." It’s almost more frustrating now when Gemini falls flat on its face—which it still does—precisely because it is so often impressive. Whether due to under-specified requirements, lacking context, flawed agent harnesses, or out-of-distribution tasks, these models are not magic. The same rules apply for coaxing good output, and it will take time to build intuition for its specific strengths and weaknesses.

Takeaways

Gemini is fantastic, but superintelligence, it isn’t.

For coding (the thing I use LLMs for most), Anthropic’s Claude Sonnet 4.5 and Claude Code remain the state of the art. But I’ll lean on Gemini for the specific areas where it dominates, particularly design and creative writing. (As an aside, I’ve created skills for Claude to use OpenAI Codex and Gemini 3 through their CLI tools, which have proven very effective at times.)

This article was edited by Gemini 3. We make no apologies for em dashes 😉