
Each week, we bring you 5 stories that resonated the most in our internal Slack channel #AI-news. We write the newsletter using various AI tools, because we're an AI company and our marketing team wants to move with the times too. 😎
Today you're reading the 69th consecutive issue.
#1
Claude 4 by Anthropic 🧠: A powerful AI assistant for research and coding 💻.
Anthropic 🧠 has unveiled Claude 4—its most powerful AI model to date 🚀. The release includes two models: Opus 4 and Sonnet 4 🔁, both designed to handle more complex and long-running tasks 📊📚. Claude is no longer just for casual conversation—it can analyze large datasets 📈, write full-length documents 📝, and even build complete software projects from scratch 💻.
According to Anthropic, Claude Opus 4 is currently the best programming model in the world 🥇👨‍💻. It can work autonomously for around seven hours ⏱️, making it a highly capable digital assistant 🤖. Unlike previous versions focused mainly on chatting 💬, Claude 4 is built to actually get work done—whether it’s research 🔬, writing, or coding.
This shift reflects a broader trend in artificial intelligence 🌍—from novelty to real-world utility ⚙️. Anthropic’s revenue doubled in just one quarter 📈💸, with annualized revenue now reaching $2 billion 💰. Investor interest is growing 📊, and Claude 4 shows that AI today is far more than just a Q&A machine 🧠✨.
#2
Veo 3 by Google 🎬: AI-generated videos that look real 🤖.
Google has unveiled Veo 3—a new tool for generating videos using artificial intelligence. The clips are so realistic that people often can’t tell they’re not real footage. The AI handles synchronized dialogue 🗣️, natural movements 🖐️, and coherent storylines across scenes.
Creators are already using Veo 3 to bring to life ideas that would otherwise take a lot of time and money to produce. But alongside the excitement, concerns are rising: if AI can generate actors who never existed, what does that mean for real artists 🎭 and copyright laws?
Veo 3 is currently available only in the U.S. 🇺🇸 and only to subscribers of the premium Google AI Ultra plan—at a price of $249 per month. The tool is powerful, but it also raises serious questions about the future of creative production.
#3
Nvidia and UAE 🇦🇪: Building Europe’s largest AI campus in France 🇫🇷🔋!
Nvidia 🟩, UAE-based company MGX 🇦🇪, and a group of French partners 🇫🇷 are planning to build the largest AI campus in Europe, just outside Paris. The campus is expected to open in 2028 📅 and will have a power capacity of 1.4 gigawatts ⚡. It will feature low-carbon data centers ♻️, exascale computing power, and support for a so-called sovereign cloud ☁️.
This isn’t just a massive tech project—it aims to accelerate the adoption of AI in sectors like healthcare 🏥, energy, and manufacturing 🏭, while also strengthening Europe’s digital and climate independence 🌱.
The project is backed by major players including Mistral AI, France’s national investment bank, and leading technical universities 🎓.
This initiative marks a big leap forward for France’s AI ambitions. It’s designed to position the country as a serious player on the global AI stage 🌐 and help balance the influence of the U.S. 🇺🇸 and China 🇨🇳. It also highlights the growing collaboration between France and the UAE in shaping a joint AI strategy 🤝.
#4
Elon’s AI in Washington 🇺🇸: Grok making its way into federal agencies 🕵️.
Elon Musk’s team 🧠 is reportedly using the Grok AI chatbot 🤖 within U.S. federal agencies 🏛️ for data analysis—even though the tool hasn’t received official approval for such use ⚠️. Critics warn that this could violate privacy laws 🔐, security protocols 🛡️, and ethical standards ⚖️—especially when dealing with sensitive information 🧾.
Grok, developed by Musk’s company xAI 💡, is allegedly being promoted inside agencies like the Department of Homeland Security 🇺🇸 by a group called DOGE 🐶 (Department of Government Efficiency)—without proper authorization 🚫. Experts caution that this could give Musk access to government data 📊 and an unfair edge in future federal AI contracts 📈.
Concerns are also growing 😟 over transparency 🪟, potential political misuse 🏛️, and a lack of oversight 👀. Reports suggest that DOGE may even be testing tools to monitor federal employee loyalty 🕵️—which would likely violate civil service laws 📉. The entire situation raises major questions ❓ about who controls the use of AI tools within government institutions 🧭—and how to prevent them from being exploited for personal or corporate gain 🏢.
#5
Klarna and Zoom 🧑‍💻: CEOs replaced themselves with AI avatars during earnings presentations 🤖📊!
This week, the CEOs of Klarna 💳 and Zoom 📹 used AI versions of themselves to present their companies’ quarterly results 📈. Klarna’s CEO submitted a pre-recorded video 🎥 where his AI clone 🤖 did the presenting for him. Zoom CEO Eric Yuan used his own digital avatar 🧑‍💻, built with Zoom’s in-house tools 🛠️.
Both executives did join the Q&A sessions ❓ with journalists and investors 💼, but the message was clear: AI is no longer just a behind-the-scenes tool—it’s stepping into the spotlight 🧠✨ and becoming part of public appearances 🎤. Klarna has previously used AI to cut its workforce 🔻👥, and Zoom is actively promoting the idea of “digital twins” 🪞 that could attend meetings on your behalf 📅.
It feels a bit strange 🤔, but also makes sense 🧩: the people building automation tools are now experimenting with automating themselves—at least for the less engaging parts of their jobs 🤷‍♂️💻.