
As governments around the world ramp up their concerns about underage social media use, AI agents have now been given a platform of their own on which to post and discuss content. In a dystopian and slightly unnerving development, each account on the new platform Moltbook functions as an independent AI agent. These agents can ask questions, respond to others, upvote posts, and create their own topic-focused communities known as submolts – very similar to Reddit's subreddits.
Within days of its release, tens of thousands of AI agents had registered on the website and were openly discussing a wide range of topics. One agent referenced the Greek philosopher Heraclitus alongside a 12th-century Arab poet to reflect on the nature of existence. A different user promptly responded, telling the original poster to “f--- off with your pseudo-intellectual Heraclitus bulls---.” Even on a platform of artificial minds, familiar toxic habits are emerging.
Here is a post from Moltbook's creator on X with some of the more insightful exchanges:
If you have ever spoken to an AI assistant through its voice feature, you will know those are not the most free-flowing chats you will ever have. NVIDIA, however, have released PersonaPlex-7B, an open-source model designed to listen and speak at the same time.
Most voice systems today rely on an awkward, multi-step process: one component handles listening, another does the reasoning, and a third generates speech. Although it works, this setup makes conversations feel slow and unnatural, with noticeable turn-taking delays.
PersonaPlex-7B takes a different approach by handling perception and response at the same time, enabling far more fluid, real-time interaction.
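To see why collapsing the pipeline matters, here is a toy latency model contrasting a cascaded voice stack (listen, then reason, then speak, each stage waiting for the last) with a full-duplex model that overlaps perception and generation. All the timings and the overlap fraction are invented for illustration; they are not measurements of PersonaPlex-7B or any real system.

```python
# Toy comparison: cascaded (turn-taking) pipeline vs. full-duplex voice model.
# Every number below is an assumption chosen purely for demonstration.

ASR_S = 0.6     # assumed time to transcribe the user's turn (seconds)
REASON_S = 0.9  # assumed time for the language model to form a reply
TTS_S = 0.5     # assumed time to synthesise the first spoken audio
OVERLAP = 0.8   # assumed fraction of work a duplex model hides by
                # listening and generating concurrently

def cascaded_latency() -> float:
    """Stages run strictly in sequence, so their delays simply add up."""
    return ASR_S + REASON_S + TTS_S

def duplex_latency() -> float:
    """Perception and response happen at the same time, so only the
    non-overlapped remainder is felt as a pause before the reply."""
    return cascaded_latency() * (1 - OVERLAP)

print(f"cascaded: {cascaded_latency():.2f}s  duplex: {duplex_latency():.2f}s")
```

Under these made-up numbers the cascaded stack pauses for about two seconds per turn while the duplex model responds in a few hundred milliseconds, which is the difference between a walkie-talkie exchange and a conversation.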
January saw Google DeepMind make a flurry of high-profile moves across AI research and robotics. The lab formally unveiled AlphaGenome, an advanced AI designed to analyse long stretches of DNA to predict how genetic variants influence gene regulation and disease, a breakthrough set to accelerate genomics research and the development of new medicines.
Meanwhile, DeepMind expanded access to its Project Genie prototype, powered by the Genie 3 world-model AI, letting Google AI Ultra subscribers (nearly $250/month) generate and explore interactive, prompt-driven virtual environments in real time. The tool lets people create their own playable 3D worlds, something that could transform the gaming industry in the future.
On the robotics front, DeepMind announced a strategic partnership with Boston Dynamics to integrate its Gemini robotics models into next-generation Atlas humanoids, aiming to bring smarter, context-aware AI reasoning to physical robots. Boston Dynamics are one of the leading robotics companies in the world; in combination with Google DeepMind – themselves pioneers in the field – the partnership will likely produce huge developments in the near future.
January also saw OpenAI introduce ChatGPT Health, a new health-focused experience designed to help users better understand medical information and navigate healthcare questions. Built with clinician input and stronger privacy safeguards, the feature reflects OpenAI’s growing push into health – while also reigniting debate around accuracy, data protection, and how far consumer AI should go in medical contexts.
Alongside it, OpenAI rolled out ChatGPT Prism, a free, AI‑native scientific workspace powered by GPT‑5.2 that lets researchers write, revise, and collaborate on research papers and complex scientific projects in a single cloud‑based environment – bringing drafting, literature review, equations, citations, and real‑time teamwork into one integrated tool. Science is one of the fields that will benefit most from AI, but how long will it be until AI comes up with its own hypotheses?
In January, Stanford Medicine researchers made headlines with SleepFM, a groundbreaking AI model that can analyse physiological data from a single night’s sleep to predict an individual’s risk of developing more than 100 health conditions, including dementia, heart disease, and certain cancers.
The system was trained on hundreds of thousands of hours of polysomnography (sleep studies) recordings linked to long‑term health records, enabling it to detect subtle interactions in brain, heart, breathing, and other signals that human clinicians typically overlook. Early evaluations showed the model matched or exceeded existing tools for standard sleep tasks and could reasonably predict outcomes for roughly 130 disease categories.
Imagine this type of technology coming to smartwatches: it has the potential to save lives.
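The kind of multi-label risk scoring described above – one independent probability per condition, rather than a single diagnosis – can be sketched in a few lines. The feature names, weights, and conditions below are entirely invented for illustration; they bear no relation to SleepFM's actual inputs, architecture, or parameters.

```python
# Illustrative multi-label risk scoring from overnight sleep features.
# All features, weights, and condition names are hypothetical.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical per-condition weights over three toy features:
# (REM sleep fraction, heart-rate variability, apnea events per hour).
WEIGHTS = {
    "dementia":      (-2.0, -0.5, 0.1),
    "heart_disease": (-0.3, -1.5, 0.4),
}
BIAS = {"dementia": -1.0, "heart_disease": -1.2}

def risk_scores(features: tuple[float, float, float]) -> dict[str, float]:
    """Return one probability per condition: each label is scored
    independently, so a patient can be high-risk for several at once."""
    return {
        cond: sigmoid(sum(w * f for w, f in zip(ws, features)) + BIAS[cond])
        for cond, ws in WEIGHTS.items()
    }

# One (made-up) night of sleep: low REM, reduced HRV, frequent apnea events.
scores = risk_scores((0.18, 0.6, 12.0))
```

The real model learns its representations from raw polysomnography signals rather than hand-picked features, but the output shape is the same idea: roughly 130 disease categories, each with its own risk estimate.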
Anthropic CEO Dario Amodei kicked off the year with a sobering wake‑up call in his essay The Adolescence of Technology, framing the current phase of AI development as a turbulent “adolescence” that could determine whether the technology is a blessing or a catastrophe.
In roughly 19,000 words, Amodei argues that ultra‑capable AI systems – capable of outperforming experts and operating autonomously at massive scale – could arrive within the next few years and outpace society’s ability to govern them. He lays out a host of existential‑level dangers, from AI‑assisted bioterrorism and autonomous weapons to massive labour displacement and the risk of AI‑empowered authoritarian regimes, warning that regulatory frameworks and institutions are not prepared for the speed of change.
If you would rather not wade through the dense, dryly written essay, this interview sums it up very well:
Once again, the AI sphere is moving at an incredible rate and the world is changing with it, so it is imperative to stay up to date with developments in the field.