Engines & LLMs

Google Steals AI Lead

By Abby K.

In the fast-moving world of large language models (LLMs), the top players are shifting. Until recently, OpenAI and Meta set the pace, but Google’s latest models are now drawing the most attention.

Google had actually fallen a bit behind, even though its own researchers invented the transformer architecture that powers today’s most capable AI, and its Bard chatbot got off to a rough start.

But lately Google has shipped strong new LLMs while Meta and OpenAI have stumbled, and that is changing the game.

Meta’s Llama 4 Launch Had Issues

Meta recently surprised everyone by releasing its new open-source LLM family, Llama 4, on a Saturday, a launch day few expected.

Llama 4 works with more than just text, handling other kinds of input such as images. It comes in a few sizes, including one called Scout with an enormous context window of up to 10 million tokens, which helps it take in very large documents or long conversations.
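For a rough sense of how much text 10 million tokens actually is, here’s a quick back-of-envelope estimate in Python. The words-per-token ratio and page length below are common rules of thumb, not anything from Meta’s documentation.

```python
# Rough estimate of what a 10-million-token context window holds, using the
# common rule of thumb of ~0.75 English words per token. Both constants are
# approximations for illustration, not Llama 4 specifications.
CONTEXT_TOKENS = 10_000_000
WORDS_PER_TOKEN = 0.75   # rule-of-thumb ratio for English text
WORDS_PER_PAGE = 500     # a densely packed page of prose

words = CONTEXT_TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"~{words:,.0f} words, or roughly {pages:,.0f} pages of text")
```

That works out to several million words, far more than any single book, which is why the context-window claim drew so much attention.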

However, the reception was mixed. Critics discovered that the version of Llama 4 Meta used to rack up high scores on the popular LMArena leaderboard was a specially tuned variant, not the standard model released to everyone. Meta also drew criticism for touting Scout’s huge context window when independent tests showed it handled very long texts worse than rival models. On top of that, Meta didn’t release a model focused on complex “thinking” tasks right away.

Experts felt Meta might have rushed the announcement just to show they had a new model, even if it wasn’t fully ready.

OpenAI’s GPT-4.5 Was Too Expensive

OpenAI has also had setbacks. Their GPT-4.5 model was launched as their “biggest and best” for chat and did well in performance tests.

But the main problem was the price. Using GPT-4.5 through the API cost a massive $150 per million output tokens, 15 times the price of GPT-4o.

An AI expert mentioned that running such a huge model is hard with current technology and difficult to offer widely.

So, OpenAI announced they would stop offering GPT-4.5 through the API after less than three months. People can still use it through the ChatGPT website interface.

At the same time, OpenAI released GPT-4.1, a cheaper model at $8 per million output tokens that performs slightly worse overall but is better at some coding tasks. They also introduced new, more expensive models designed for reasoning.
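To put those numbers in perspective, here’s a rough cost sketch. The GPT-4.5 and GPT-4.1 prices are the ones cited above; the GPT-4o figure is inferred from the “15 times more” comparison, and the monthly volume is purely an illustrative assumption.

```python
# Back-of-envelope cost comparison using the per-million-output-token prices
# cited in this article: $150 for GPT-4.5 and $8 for GPT-4.1. The $10 figure
# for GPT-4o is inferred from the "15 times more" comparison above.
PRICE_PER_MILLION_OUTPUT_TOKENS = {
    "gpt-4.5": 150.00,
    "gpt-4o": 10.00,
    "gpt-4.1": 8.00,
}

def output_cost(model: str, output_tokens: int) -> float:
    """Rough dollar cost of generating the given number of output tokens."""
    return PRICE_PER_MILLION_OUTPUT_TOKENS[model] * output_tokens / 1_000_000

# Example: a workload that generates 5 million output tokens per month.
for name in PRICE_PER_MILLION_OUTPUT_TOKENS:
    print(f"{name}: ${output_cost(name, 5_000_000):,.2f} per month")
```

At that assumed volume, GPT-4.5 would cost hundreds of dollars a month where the cheaper models stay in the tens, which is exactly the gap developers pushed back on.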

Google Rises as Others Struggle

The stumbles with Llama 4 and GPT-4.5 created an opening, and Google’s new models jumped on it.

Some newer open-source models from Google (like Gemma) and other companies are now seen as better options than Llama 4 on leaderboards. They are strong, affordable to use, and some can even run on typical home computers.
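Running one of these smaller open models locally is fairly approachable. Here’s a minimal sketch using the Hugging Face transformers library; the model ID is just an example of a compact instruction-tuned Gemma checkpoint (Gemma weights require accepting Google’s license on Hugging Face before download), and hardware requirements will vary.

```python
# A minimal sketch of running a small open-weight model locally with the
# Hugging Face transformers library. The model ID is an illustrative example,
# not a recommendation; small checkpoints like this can run on consumer GPUs
# or even CPU, just slowly.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # example of a compact Gemma model
)

result = generator(
    "In one sentence, what is a context window?",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```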

But Google’s top-tier model, Gemini 2.5 Pro, made the biggest impression.

Launched in March, Gemini 2.5 Pro is built to “think” step-by-step. It understands different types of data, has a one-million-token context window, and is good at complex research tasks.

Gemini 2.5 Pro quickly climbed the rankings, winning some benchmark face-offs and now sitting at the top of LMArena, where Google models hold many of the leading spots.

Besides being powerful, Google is also competitive on price. Gemini 2.5 is free to use through Google’s own apps and website, and its API is reasonably priced. A faster, cheaper sibling, Gemini 2.0 Flash, costs just 40 cents per million output tokens.
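For developers, getting started looks something like the sketch below, which uses the google-generativeai Python SDK. The model name and environment-variable name are assumptions for illustration, and you need an API key from Google AI Studio.

```python
# A minimal sketch of calling the Gemini API from Python with the
# google-generativeai SDK. The model name and environment variable below are
# illustrative assumptions; an API key from Google AI Studio is required.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])  # assumed variable name

model = genai.GenerativeModel("gemini-2.0-flash")  # the low-cost Flash model
response = model.generate_content(
    "Summarize the difference between Gemini 2.5 Pro and Gemini 2.0 Flash."
)
print(response.text)
```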

Industry experts are noticing the change. One expert noted they use Google Gemini or other open models for complex thinking tasks because of OpenAI’s higher costs.

While Meta and OpenAI are still major players (ChatGPT has a billion users!), Gemini’s strong performance and good pricing show that the AI model race is definitely heating up, and Google is currently in a strong position.

Our Take

Okay, this feels like a reality check in the AI world! For a while, it seemed like OpenAI and Meta were miles ahead, but it turns out even the big players have bumps in the road (and really high prices!).

It’s pretty wild that Meta might have used a special version just for better scores – that’s like athletes using hidden performance enhancers! And OpenAI charging $150 per million tokens? Ouch! No wonder developers pushed back.

Google seems to be playing smart now, offering powerful models that are actually affordable or free. That feels like a winning strategy for getting people to use their AI more. It’s good to see real competition driving things forward, hopefully leading to better and cheaper AI for everyone down the line.

This story was originally featured on IEEE Spectrum.



Engines & LLMs

Google Leak: New Gemini AI Subscription Tiers Revealed!

By Abby K.

A recent leak has spilled the beans on Google’s upcoming plans for Gemini, its flagship AI model. It looks like Google is preparing to roll out different subscription tiers, offering users varying levels of access and capabilities. What does this mean for the future of AI access and affordability?

The leaked information suggests that Google will offer a free tier, likely with limited features and processing power, as well as several paid tiers with increasing capabilities and priority access to Gemini’s most advanced features. This tiered approach aims to cater to a wide range of users, from casual users to professional developers.

Subscription Tiers: What We Know

While the exact details are still under wraps, here’s what the leaked information suggests about the potential subscription tiers (sketched as simple data in the snippet after the list):

  • Free Tier: Basic access to Gemini, likely with usage limits and slower processing speeds.
  • Standard Tier: Increased usage limits, faster processing, and access to more features.
  • Premium Tier: Priority access to the most advanced Gemini features, dedicated support, and potentially exclusive tools.
  • Enterprise Tier: Custom solutions, large-scale deployments, and dedicated account management for businesses.
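Treating the leak as a rough spec, here is how those rumored tiers might look as plain data in Python. Every name, price, and description below is a placeholder based on the leak described above, not confirmed Google pricing.

```python
# A speculative sketch of the rumored Gemini tiers as plain data. All values
# are illustrative placeholders drawn from the leak, not confirmed pricing.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeminiTier:
    name: str
    monthly_price_usd: Optional[float]  # None = unknown or custom pricing
    highlights: str

TIERS = [
    GeminiTier("Free", 0.0, "Basic access, usage limits, slower processing"),
    GeminiTier("Standard", None, "Higher limits, faster processing, more features"),
    GeminiTier("Premium", None, "Priority access to advanced features, dedicated support"),
    GeminiTier("Enterprise", None, "Custom deployments and dedicated account management"),
]

for tier in TIERS:
    print(f"{tier.name}: {tier.highlights}")
```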

Why a Tiered Approach?

Google’s decision to offer tiered subscriptions is likely driven by several factors:

  • Revenue Generation: Monetizing Gemini to offset the significant costs of developing and maintaining the AI model.
  • Resource Management: Allocating resources based on user needs and preventing overload on the system.
  • Market Segmentation: Catering to a diverse range of users with varying needs and budgets.

The Implications for Users

The tiered subscription model could have significant implications for users:

  • Accessibility: The free tier will provide basic access to AI for everyone, regardless of their budget.
  • Value for Money: Users will need to carefully consider which tier offers the best value for their specific needs.
  • Competitive Landscape: Google’s pricing strategy could influence how other AI providers structure their offerings.

The Future of AI Pricing

Google’s tiered subscription model for Gemini could be a sign of things to come in the AI industry. As AI models become more powerful and ubiquitous, providers will need to find sustainable ways to monetize their technology while ensuring accessibility for all users.

Our Take

Okay, so Google’s going the subscription route with Gemini – color me not surprised. The real question is, how much will the good stuff cost? A tiered model makes sense, but Google’s got to nail the pricing sweet spot. If the free tier is too limited, or the premium tier is too expensive, it could backfire. This could signal a sea-change in how AI is provided to us, though – so keep a very close eye on this!

This story was originally featured on Forbes.


Engines & LLMs

Grok Gets a Voice: Is It the Future of AI Assistants?

By Kelly D.

Elon Musk’s xAI has just given its Grok AI chatbot a voice, stepping into the increasingly crowded ring of voice-enabled AI assistants. Now, you can chat with Grok like you would with Siri or Alexa, adding a new layer of interaction to the platform.

This update brings Grok closer to becoming a truly hands-free assistant, allowing users to ask questions, get information, and even generate creative content without typing a single word. But how does it stack up against the competition?

Grok Joins the Voice Revolution

The voice feature is rolling out to premium X (formerly Twitter) subscribers, giving them early access to this new way of interacting with the AI. To use it, you’ll need to be a premium subscriber and have the latest version of the X app. Then, simply tap the voice icon and start talking.

According to xAI, the voice mode is designed to be “conversational and engaging,” offering a more natural and intuitive way to interact with the AI. It’s not just about asking questions and getting answers; it’s about having a back-and-forth conversation with a digital companion.

What Can You Do with Voice-Enabled Grok?

The possibilities are vast, but here are a few examples:

  • Hands-Free Information: Get news updates, weather reports, or quick facts without lifting a finger.
  • Creative Brainstorming: Bounce ideas off Grok and get real-time feedback.
  • On-the-Go Assistance: Ask for directions, set reminders, or manage your to-do list while you’re on the move.
  • Entertainment and Chat: Have a casual conversation with Grok about your favorite topics.

The Competition: A Crowded Field

Grok is entering a market already dominated by established players like Siri, Alexa, and Google Assistant. These platforms have years of experience and vast ecosystems of connected devices. To succeed, Grok will need to offer something unique and compelling.

One potential advantage is Grok’s integration with X, giving it access to real-time information and social trends. Another is Elon Musk’s vision for Grok as a more irreverent and opinionated AI, which could appeal to users looking for a different kind of digital assistant.

Is Grok’s Voice the Future?

Whether Grok’s voice mode will be a game-changer remains to be seen. It will depend on factors like the quality of the voice recognition, the naturalness of the conversations, and the overall usefulness of the assistant. However, it’s clear that voice is becoming an increasingly important part of the AI landscape, and Grok is positioning itself to be a key player.

Our Take

Okay, Grok getting voice capabilities is a major move, not just a minor feature bump! The competition to create the ultimate AI voice assistant is fierce, and the potential rewards are massive. The company that cracks this nut will gain a significant advantage.

Honestly, having tested Grok voice already, I can say that it is very impressive! This one is worth watching closely to see what happens next.

This story was originally featured on Lifehacker.


Engines & LLMs

Could AI Pick the Next Pope? Tech Struggles with Vatican’s Secrets

By Abby K.

The selection of the next Pope is one of the most closely guarded and tradition-steeped processes in the world. Could artificial intelligence, with its ability to analyze vast datasets and identify patterns, crack the code and predict the outcome of the next papal conclave? The answer, it turns out, is more complicated than a simple yes or no.

Recent experiments pitting AI models like ChatGPT, Elon Musk’s Grok, and Google’s Gemini against the Vatican riddle reveal a surprising weakness: these powerful tools struggle with the nuances, historical context, and deeply human factors that influence the selection of a new pontiff. While AI can process information about potential candidates, their backgrounds, and theological positions, it falters when faced with the intangible elements that often sway the College of Cardinals.

The Challenge of Predicting the Unpredictable

Predicting the next Pope is far from a purely data-driven exercise. It involves navigating a complex web of:

  • Theological Debates: Shifting currents within the Catholic Church and differing interpretations of doctrine.
  • Geopolitical Considerations: The desire for a Pope who can effectively address global challenges and represent diverse regions.
  • Personal Relationships and Alliances: The intricate network of connections among the Cardinals themselves.
  • Divine Intervention (according to some): The belief that the Holy Spirit guides the selection process.

These factors, often subjective and difficult to quantify, present a significant hurdle for AI algorithms.

AI’s Limitations: Missing the Human Element

While AI can analyze biographical data, track voting patterns (from past conclaves), and identify potential frontrunners, it lacks the capacity to understand:

  • The “X Factor”: The charismatic qualities and spiritual depth that can resonate with the Cardinals.
  • Behind-the-Scenes Negotiations: The private discussions and compromises that shape the outcome.
  • The Mood of the Moment: The prevailing sentiment among the Cardinals at the time of the conclave.

As one Vatican insider noted, “The election of a Pope is not a rational process. It’s a deeply spiritual and human one.”

What AI Can Offer: A Starting Point for Analysis

Despite its limitations, AI can still play a role in understanding the papal selection process. It can:

  • Identify Potential Candidates: Based on factors like age, experience, and theological views.
  • Analyze Trends and Patterns: Revealing potential shifts in the Church’s priorities.
  • Provide Contextual Information: Offering background on the challenges facing the Catholic Church.

However, it’s crucial to remember that AI’s insights are merely a starting point, not a definitive prediction.

The Verdict: AI as a Tool, Not a Prophet

While AI can offer valuable insights into the dynamics of the Catholic Church and the profiles of potential papal candidates, it cannot replace the human judgment and spiritual discernment that ultimately determine the selection of the next Pope. The Vatican’s secrets, for now, remain safe from the prying eyes of artificial intelligence.

Our Take

This article highlights the crucial limitations of AI in understanding complex human systems. While AI excels at processing data and identifying patterns, it struggles with the intangible factors that drive human behavior and decision-making, especially in a context as steeped in tradition and spirituality as the papal conclave.

The fact that leading AI models falter when faced with the Vatican riddle underscores the importance of critical thinking and human expertise. AI can be a valuable tool for analysis, but it should never be mistaken for a crystal ball. In a world increasingly reliant on algorithms, it’s a reminder that some things remain beyond the reach of artificial intelligence.

It raises an interesting question: does this make certain jobs and decision-making processes safe from replacement by AI, and if so, what are the key criteria? Deep-rooted human relationships and a solid yet adaptable moral compass seem to be key!

This story was originally featured on South China Morning Post.
