© Freepik
In a month in which OpenAI CEO Sam Altman claimed AI superintelligence may be only "a few thousand days" away, Joe Biden addressed the UN on the subject of AI, stating "we have a responsibility to prepare our citizens for the future".
Here is an overview of this month's biggest AI news:
1. OpenAI release ChatGPT o1-preview
A significant step forward on the path to the coveted status of Artificial General Intelligence (AGI), ChatGPT's o1-preview features much more advanced reasoning capabilities. When ChatGPT was first introduced, the Large Language Model's (LLM) responses were often peculiar and frequently incorrect.
Now, the model takes more time to "think" before producing responses, ensuring they are more logical and reasoned – similar to how a human would process information. Sam Altman has stated that the company has now reached "Level 2" of its five stages of Artificial Intelligence, which is a huge statement that not enough people are talking about. The levels are as follows:
- Level 1: Chatbots, AI with conversational language
- Level 2: Reasoners, human-level problem solving
- Level 3: Agents, systems that can take action
- Level 4: Innovators, AI that can aid in invention
- Level 5: Organisations, AI that can do the work of an organisation
2. Meta announce a whole host of new toys
Of course, Mr Zuckerberg isn't missing out on the party. At Meta’s annual Connect conference, the company showcased several AI advancements that will directly impact how we communicate and engage with social media. Meta plans to integrate AI features across all of its platforms – Facebook, WhatsApp, and Instagram.
Expect more image-generation prompts in your feed, the ability to interact with celebrities like Judi Dench and John Cena (among others) via upgraded chatbots, and perhaps the biggest announcement: the release of their latest LLM, Llama 3.2, their first model that can process both text and images.
Additionally, Meta has significantly upgraded their Ray-Ban smart glasses with AI capabilities. These glasses can now remember where you placed objects and guide you back to them. They can also translate conversations with someone speaking a foreign language in real time. The future is now!
Watch a recap of all of their announcements (including advanced Augmented Reality glasses) here.
3. EA reveal a new age of gaming
It's probably every gamer's dream to simply describe a game and then step into their own bespoke world, filled with characters they’ve imagined and brought to life. Well, that dream is now a reality. What once took years of development and countless teams of programmers, artists, and designers can now be achieved in mere moments.
Gone are the days when coding a game required expertise in multiple fields! Today, you and your friends can enter simple prompts, and watch as an entire game is created in real-time, tailored to your desires.
EA has aptly titled this new feature 'Imagination to Creation,' making it an incredibly exciting time for gamers everywhere. Watch their demonstration of the new feature here.
4. Kling AI launches v1.5 model with new motion brush
Can you remember some of the first AI-generated videos, in which a terrifying rendition of Will Smith was seen eating spaghetti (watch here)? We've come a long way since then, and that was only early last year! Kling AI is a text-to-video generation model that, with its latest update, now allows you to create 1080p HD videos. You can utilise more complex prompts and specify the motion of individual elements within an image, offering precise control over animations with the new motion brush feature.
5. Runway's new video-to-video model
With the power of Runway's new Gen-3 version of their video-creation tool, you can now transport yourself seamlessly from your back garden to the top of Mount Everest, or make everything look like it's made out of thick wool, because why not! The possibilities are endless, and the jump from text-to-video to video-to-video is very significant in the world of film production.
Beginning an AI video prompt with a video fundamentally changes the approach compared to starting with an image: it allows you to define the motion first and use AI to enhance the design and aesthetics afterwards. Starting with an image, by contrast, means you establish the aesthetic and leave the motion to be dictated by the AI. See for yourself!
6. AI fever hits Luxembourg
This month, the AI-powered virtual stylist 'Essembl', in collaboration with LetzAI, teased us with an exciting new feature that allows users to add their own AI avatar to the app.
With this update, you can visualise yourself wearing clothes you’re considering purchasing, as well as outfits you already own but aren’t sure how to combine, and see what they’ll look like on you. Essembl is bridging the gap between virtual and physical shopping, adding a new level of personalisation to a very saturated market. The feature should be available before the end of the year.
Those are some of this month's biggest stories, and I will leave you with a quote from the CEO of NVIDIA (the dominant supplier of AI hardware), who said this month: "We have reached the point where AI is designing new AI, and the progress in the next two years will be spectacular and surprising". That's why keeping an eye on AI news is something we should all be doing! (Watch the discussion here.)