Engines & LLMs
Microsoft Researchers Squeeze AI onto CPUs with Tiny 1-bit Model

In a significant step towards running powerful AI locally, Microsoft researchers have developed an incredibly efficient 1-bit large language model (LLM). Dubbed BitNet b1.58, this 2-billion-parameter model is reportedly lightweight enough to run effectively on standard CPUs, potentially even on chips like the Apple M2, without needing specialized GPUs or NPUs.
The key innovation lies in its “1-bit” architecture. Each weight takes one of just three values (-1, 0, +1), which works out to about 1.58 bits (log2 of 3) per weight – drastically smaller than the typical 16-bit or 32-bit formats used in most LLMs. This massive reduction in data size dramatically cuts the memory and computational power needed for inference.
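The papers behind BitNet describe an "absmean"-style quantizer for producing those ternary weights. Here's a minimal sketch of that idea – scale by the mean absolute weight, then round and clip into {-1, 0, +1}. This is our own illustration of the concept, not Microsoft's implementation, and the function name and epsilon are ours:

```python
def ternary_quantize(weights):
    """Quantize weights to {-1, 0, +1} plus a per-tensor scale.

    Illustrative absmean-style scheme: divide by the mean absolute
    weight, then round and clip each value into the ternary set.
    """
    # Per-tensor scale; the `or 1e-8` guards against all-zero weights.
    gamma = sum(abs(w) for w in weights) / len(weights) or 1e-8
    quantize = lambda w: max(-1, min(1, round(w / gamma)))
    return [quantize(w) for w in weights], gamma

weights = [0.9, -0.04, 0.35, -1.2, 0.02]
q, gamma = ternary_quantize(weights)
# q == [1, 0, 1, -1, 0]; original weights are approximated as gamma * q
```

Storing a `1`, `0`, or `-1` instead of a 16-bit float per weight is where the memory savings come from; a single scale factor per tensor lets you recover an approximation of the original values.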
Published as open-source on Hugging Face, BitNet b1.58 was trained on a hefty 4 trillion tokens. While smaller models often sacrifice accuracy, Microsoft claims this BitNet variant holds its own against similarly sized models like Meta’s Llama and Google’s Gemma in several benchmarks, even topping a few. Crucially, it requires only around 400MB of memory (excluding embeddings) – a fraction of what similar-sized models need.
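That ~400MB figure checks out with simple arithmetic: 2 billion weights at roughly 1.58 bits each. A quick back-of-the-envelope comparison against a 16-bit format (our own arithmetic, not from the article):

```python
# Approximate weight-storage cost for a 2B-parameter model.
PARAMS = 2_000_000_000
BITS_TERNARY = 1.58   # ~log2(3) bits per ternary weight
BITS_FP16 = 16        # standard half-precision float

ternary_mb = PARAMS * BITS_TERNARY / 8 / 1e6   # bits -> bytes -> MB
fp16_mb = PARAMS * BITS_FP16 / 8 / 1e6

print(f"ternary: {ternary_mb:.0f} MB")   # ~395 MB, close to the cited 400MB
print(f"fp16:    {fp16_mb:.0f} MB")      # 4000 MB
```

In other words, the ternary encoding is roughly a 10x reduction in weight storage versus FP16 before you even account for cheaper arithmetic.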
To achieve these efficiency gains, the model must be run using Microsoft’s custom bitnet.cpp inference framework, available on GitHub. Standard frameworks won’t deliver the same performance benefits.
This research tackles the high energy consumption and hardware demands often associated with AI. Developing models that can run efficiently on everyday hardware like CPUs could democratize AI access, reduce reliance on large data centers, and bring advanced AI capabilities to a wider range of devices.
Our Take
Okay, a 1-bit (ish) AI model from Microsoft that can run on a regular CPU? That’s pretty cool. It tackles one of the biggest AI hurdles: the need for beefy, power-hungry hardware. Making AI this lightweight could seriously shake things up.
Imagine capable AI running locally on phones or laptops without killing the battery or needing an expensive GPU. While there’s usually a trade-off between size and smarts, Microsoft seems to be closing that gap here. This kind of efficiency focus is exactly what we need to make powerful AI more accessible and maybe even a bit more sustainable.
This story was originally featured on Tom’s Hardware.
Engines & LLMs
Google Leak: New Gemini AI Subscription Tiers Revealed!

A recent leak has spilled the beans on Google’s upcoming plans for Gemini, its flagship AI model. It looks like Google is preparing to roll out different subscription tiers, offering users varying levels of access and capabilities. What does this mean for the future of AI access and affordability?
The leaked information suggests that Google will offer a free tier, likely with limited features and processing power, as well as several paid tiers with increasing capabilities and priority access to Gemini’s most advanced features. This tiered approach aims to cater to a wide range of users, from casual experimenters to professional developers.
Subscription Tiers: What We Know
While the exact details are still under wraps, here’s what the leaked information suggests about the potential subscription tiers:
- Free Tier: Basic access to Gemini, likely with usage limits and slower processing speeds.
- Standard Tier: Increased usage limits, faster processing, and access to more features.
- Premium Tier: Priority access to the most advanced Gemini features, dedicated support, and potentially exclusive tools.
- Enterprise Tier: Custom solutions, large-scale deployments, and dedicated account management for businesses.
Why a Tiered Approach?
Google’s decision to offer tiered subscriptions is likely driven by several factors:
- Revenue Generation: Monetizing Gemini to offset the significant costs of developing and maintaining the AI model.
- Resource Management: Allocating resources based on user needs and preventing overload on the system.
- Market Segmentation: Catering to a diverse range of users with varying needs and budgets.
The Implications for Users
The tiered subscription model could have significant implications for users:
- Accessibility: The free tier will provide basic access to AI for everyone, regardless of their budget.
- Value for Money: Users will need to carefully consider which tier offers the best value for their specific needs.
- Competitive Landscape: Google’s pricing strategy could influence how other AI providers structure their offerings.
The Future of AI Pricing
Google’s tiered subscription model for Gemini could be a sign of things to come in the AI industry. As AI models become more powerful and ubiquitous, providers will need to find sustainable ways to monetize their technology while ensuring accessibility for all users.
Our Take
Okay, so Google’s going the subscription route with Gemini – color me not surprised. The real question is, how much will the good stuff cost? A tiered model makes sense, but Google’s got to nail the pricing sweet spot. If the free tier is too limited, or the premium tier is too expensive, it could backfire. This could signal a sea change in how AI is provided to us, though – so keep a very close eye on this!
This story was originally featured on Forbes.
Engines & LLMs
Grok Gets a Voice: Is It the Future of AI Assistants?

Elon Musk’s xAI has just given its Grok AI chatbot a voice, stepping into the increasingly crowded ring of voice-enabled AI assistants. Now, you can chat with Grok like you would with Siri or Alexa, adding a new layer of interaction to the platform.
This update brings Grok closer to becoming a truly hands-free assistant, allowing users to ask questions, get information, and even generate creative content without typing a single word. But how does it stack up against the competition?
Grok Joins the Voice Revolution
The voice feature is rolling out to premium X (formerly Twitter) subscribers, giving them early access to this new way of interacting with the AI. To use it, you’ll need to be a premium subscriber and have the latest version of the X app. Then, simply tap the voice icon and start talking.
According to xAI, the voice mode is designed to be “conversational and engaging,” offering a more natural and intuitive way to interact with the AI. It’s not just about asking questions and getting answers; it’s about having a back-and-forth conversation with a digital companion.
What Can You Do with Voice-Enabled Grok?
The possibilities are vast, but here are a few examples:
- Hands-Free Information: Get news updates, weather reports, or quick facts without lifting a finger.
- Creative Brainstorming: Bounce ideas off Grok and get real-time feedback.
- On-the-Go Assistance: Ask for directions, set reminders, or manage your to-do list while you’re on the move.
- Entertainment and Chat: Have a casual conversation with Grok about your favorite topics.
The Competition: A Crowded Field
Grok is entering a market already dominated by established players like Siri, Alexa, and Google Assistant. These platforms have years of experience and vast ecosystems of connected devices. To succeed, Grok will need to offer something unique and compelling.
One potential advantage is Grok’s integration with X, giving it access to real-time information and social trends. Another is Elon Musk’s vision for Grok as a more irreverent and opinionated AI, which could appeal to users looking for a different kind of digital assistant.
Is Grok’s Voice the Future?
Whether Grok’s voice mode will be a game-changer remains to be seen. It will depend on factors like the quality of the voice recognition, the naturalness of the conversations, and the overall usefulness of the assistant. However, it’s clear that voice is becoming an increasingly important part of the AI landscape, and Grok is positioning itself to be a key player.
Our Take
Okay, Grok getting voice capabilities is a major move, not just a minor feature bump! The competition to create the ultimate AI voice assistant is fierce, and the potential rewards are massive. The company that cracks this nut will gain a significant advantage.
Honestly, having tested Grok voice already, I can say that it is very impressive! This one is worth watching closely to see what happens next.
This story was originally featured on Lifehacker.
Engines & LLMs
Could AI Pick the Next Pope? Tech Struggles with Vatican’s Secrets

The selection of the next Pope is one of the most closely guarded and tradition-steeped processes in the world. Could artificial intelligence, with its ability to analyze vast datasets and identify patterns, crack the code and predict the outcome of the next papal conclave? The answer, it turns out, is more complicated than a simple yes or no.
Recent experiments pitting AI models like ChatGPT, Elon Musk’s Grok, and Google’s Gemini against the Vatican riddle reveal a surprising weakness: these powerful tools struggle with the nuances, historical context, and deeply human factors that influence the selection of a new pontiff. While AI can process information about potential candidates, their backgrounds, and theological positions, it falters when faced with the intangible elements that often sway the College of Cardinals.
The Challenge of Predicting the Unpredictable
Predicting the next Pope is far from a purely data-driven exercise. It involves navigating a complex web of:
- Theological Debates: Shifting currents within the Catholic Church and differing interpretations of doctrine.
- Geopolitical Considerations: The desire for a Pope who can effectively address global challenges and represent diverse regions.
- Personal Relationships and Alliances: The intricate network of connections among the Cardinals themselves.
- Divine Intervention (according to some): The belief that the Holy Spirit guides the selection process.
These factors, often subjective and difficult to quantify, present a significant hurdle for AI algorithms.
AI’s Limitations: Missing the Human Element
While AI can analyze biographical data, track voting patterns (from past conclaves), and identify potential frontrunners, it lacks the capacity to understand:
- The “X Factor”: The charismatic qualities and spiritual depth that can resonate with the Cardinals.
- Behind-the-Scenes Negotiations: The private discussions and compromises that shape the outcome.
- The Mood of the Moment: The prevailing sentiment among the Cardinals at the time of the conclave.
As one Vatican insider noted, “The election of a Pope is not a rational process. It’s a deeply spiritual and human one.”
What AI Can Offer: A Starting Point for Analysis
Despite its limitations, AI can still play a role in understanding the papal selection process. It can:
- Identify Potential Candidates: Based on factors like age, experience, and theological views.
- Analyze Trends and Patterns: Revealing potential shifts in the Church’s priorities.
- Provide Contextual Information: Offering background on the challenges facing the Catholic Church.
However, it’s crucial to remember that AI’s insights are merely a starting point, not a definitive prediction.
The Verdict: AI as a Tool, Not a Prophet
While AI can offer valuable insights into the dynamics of the Catholic Church and the profiles of potential papal candidates, it cannot replace the human judgment and spiritual discernment that ultimately determine the selection of the next Pope. The Vatican’s secrets, for now, remain safe from the prying eyes of artificial intelligence.
Our Take
This article highlights the crucial limitations of AI in understanding complex human systems. While AI excels at processing data and identifying patterns, it struggles with the intangible factors that drive human behavior and decision-making, especially in a context as steeped in tradition and spirituality as the papal conclave.
The fact that leading AI models falter when faced with the Vatican riddle underscores the importance of critical thinking and human expertise. AI can be a valuable tool for analysis, but it should never be mistaken for a crystal ball. In a world increasingly reliant on algorithms, it’s a reminder that some things remain beyond the reach of artificial intelligence.
It raises an interesting question – does this make certain jobs and decision-making processes safe from replacement by AI, and if so, what are the key criteria? Deep-rooted human relationships and a solid yet adaptable moral compass seem to be key!
This story was originally featured on South China Morning Post.