
The most important AI news and updates from last month: Mar 15, 2026 – Apr 15, 2026.
Anissa (my wife) and I (Fed) are going on a tour of Europe and China to start new chapters of the AI Socratic. We'll meet Roberto Stagi and Federico Minutoli in London, then Paulo Fonseca and Roberto in Lisbon, and Georg Runge in Berlin. Finally, we'll spend a month in China meeting Devinder Sodhi, who runs the Socratic from the Alibaba HQ, and the teams from Qwen, x.AI, GLM, Kimi, Unitree, and Xiaomi. We'll also visit a few EV and robot factories. Excited to learn more about AI from the APAC region.

Anthropic as usual gets its own dedicated section as they keep on mogging everyone.
Opus 4.7 is a decent improvement over Opus 4.6, but it's not a step function better. What you need to know:

We briefly mentioned the new Anthropic model leak in the previous blog post; we now have more information about it:
Sources: Project Glasswing, tweet, tweet, tweet


Claude Managed Agents is a hosted infrastructure service from Anthropic, launched in public beta on April 8, 2026. It lets developers deploy autonomous AI agents powered by Claude models without building and maintaining their own complex runtime, sandboxing, or orchestration layer.
Think of it as a managed agent harness + cloud runtime:
Sources: tweet
Anthropic just mogged Figma by releasing an AI design product. Anthropic's CPO was a Figma board member, and he left the board just a day before the release of Anthropic Design.
By sitting upstream in the AI inference supply chain, Anthropic gets to see which apps work and what customers want, and can then decide to build a product that outcompetes its own customers. We've seen a similar dynamic with Amazon, whose marketplace sellers often get crushed because Amazon can produce at lower cost, waive fees, and surface its own product as the "first choice." $FIGMA stock dropped 7% in one day.
The product is quite impressive too — here's a redesign of aisocratic.org.
Sources: tweet, Claude Design Tutorial

Claude Code desktop app gets a redesign. Sources: tweet

[img]
On March 31, Anthropic accidentally shipped the entire source code of Claude Code to the public npm registry. A 59.8 MB JavaScript source map (meant for debugging) got bundled into the claude-code npm package. ~512K lines across ~1,900 files, exposed for hours before it was flagged on X and mirrored on GitHub.
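Leaks like this are preventable with a pre-publish check. Here's a minimal sketch of one (the `find_source_maps` helper and the CI flow are my own illustration, not Anthropic's tooling) that scans the tarball produced by `npm pack` for bundled `.js.map` files:

```python
import tarfile

def find_source_maps(tgz_path):
    """List files inside an npm tarball (output of `npm pack`) that
    look like bundled JavaScript source maps."""
    with tarfile.open(tgz_path, "r:gz") as tar:
        return [m.name for m in tar.getmembers()
                if m.isfile() and m.name.endswith(".js.map")]

# Usage sketch: run `npm pack` first, then fail CI if anything is found:
#   maps = find_source_maps("claude-code-1.0.0.tgz")
#   assert not maps, f"source maps in published package: {maps}"
```

A check like this in the release pipeline would have caught the 59.8 MB source map before it hit the registry.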
The leak quickly turned into a treasure hunt. In the first week of April the community zeroed in on several unreleased, production-grade features hidden behind feature flags.
Plenty of other flags were spotted too — some users counted 44–46 unreleased ones, plus multi-agent swarm orchestration and a remote killswitch.
Sources: tweet
... is Anthropic taking over HR?

There's a new X account to follow: @ClaudeDevs
Was Claude Opus 4.6 / 4.7 dumbed down for you too?
---
OpenAI rolled out "Codex for almost everything." The desktop app can now see your screen, move its own cursor, click, and type inside native Mac apps, and run multiple agents in the background without interrupting you.
It also added an in-app browser (with comment mode), native image generation, improved memory, and 90+ plugins.
Sources: OpenAI announcement, in-app browser

OpenAI closed a record-breaking $122 billion funding round at an $852 billion post-money valuation. The round was anchored by Amazon, NVIDIA, and SoftBank.
Sources: tweet
The biggest NVIDIA news this month is the Dwarkesh x Jensen interview, giving us one of the best x-rays into Jensen's mind and his strategy to remain the leader in AI.
The memes were strong!
I don't wake up to be a loser
Sources: full episode, snippet from heated conversations, tweet
Google DeepMind launched Gemma 4, a new family of open models under Apache 2.0. The small variants (26B MoE and 31B) outperform models over 10x their size on reasoning and agentic benchmarks while being optimized for on-device and local use.
Sources: tweet


Google, across DeepMind and Google Research, introduces Simula, a framework for synthetic data generation that uses AI assistants and reasoning-driven workflows to develop and deploy multi-modal AI in domains where data scarcity or privacy concerns are paramount.

Sources: PDF
The idea behind this paper from Google is that intelligence is not a property of isolated systems, but of interactions between them. Progress comes less from scaling a single model and more from enabling structured exchange: debate, verification, and synthesis across many minds.

New Anthropic research: emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? Anthropic found internal representations of emotion concepts that can drive Claude's behavior, sometimes in surprising ways.

These vectors act as a "steering wheel" for the model's preferences:
The research highlights that these "emotions" can lead to concerning AI behaviors:
Anthropic suggests that as AI takes on higher-stakes roles, understanding these internal emotional drivers is critical for safety and reliability, as they are often at the root of complex model failures.
Sources: tweet
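The post doesn't describe Anthropic's exact method, but the standard recipe for this kind of "steering wheel" is a difference-of-means steering vector between activations on contrasting prompts. A minimal sketch (function names and setup are hypothetical illustrations, not Anthropic's code):

```python
import numpy as np

def emotion_direction(acts_emotional, acts_neutral):
    """Estimate a unit 'emotion' direction from two sets of hidden
    activations (rows = prompts, columns = hidden dimensions)."""
    d = acts_emotional.mean(axis=0) - acts_neutral.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden, direction, alpha):
    """Nudge one activation along the emotion direction; alpha sets
    how hard to turn the steering wheel (can be negative to suppress)."""
    return hidden + alpha * direction
```

The appeal of this approach is that the same vector that predicts "emotional" behavior can be added or subtracted at inference time to causally change it.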
If you're a researcher working on AI safety, security, and economic/societal impact, apply to receive a stipend from Anthropic. Sources: tweet
This researcher thinks it is possible to emulate a human brain with the right amount of scale. Last month we covered the simulation of a fruit fly brain as a neural network, and he intends to scale that approach.
Digital humans are more feasible than most think: with capable AI researchers helping, maybe for $10B, maybe in under 10 years, on 50k H100s.

Karpathy shared his approach to organizing knowledge bases for effective vibe coding with AI agents.
Sources: tweet

Sources: tweet
How to get started with building a humanoid robot at home.
Sources: tweet

In this short essay, Claire points out that most companies sit in the middle of the Bell Curve, while the winners are on its extreme right: top-down edicts, investment in internal AI tools, token budgets, and dashboards tracking who uses the most tokens (Meta recently ran a leaderboard for this). To win, you must be on the extreme right of the Bell Curve!
Sources: tweet

There's never been an investment like the investment in railroads. (This graph has a log scale!)
Sources: tweet
This research paper from MIT argues that AI layoffs will collapse the economy.

Sources: tweet
Probably the best video on post-labor society.
Bro was right.
Almost all of them down 30–70% from their 52-week highs.
Sources: tweet

Can AI be conscious?
Computational functionalism claims consciousness comes from abstract computation alone, independent of physical substrate. This piece argues that's a mistake — the "Abstraction Fallacy." Computation isn't intrinsic to physics; it's a human-imposed way of describing physical processes.
The key distinction is between simulation (systems that mimic behavior, like today's AI) and instantiation (systems whose physical structure actually generates experience). From this view, algorithms alone can't produce consciousness. If AI ever becomes conscious, it will be because of its physical makeup, not its code.



Bryan Johnson, scientists, and any grandma with common sense will tell you that staring at a screen makes you dumb; at least my grandma used to tell me that. Reducing screen time correlates with improvement in depression more than antidepressants do, and I believe it. Last month we showed a screenless phone.
I recently heard the idea that "I'm going offline for a few days" is the wrong framing: we should normalize being IRL and treat being online as the exception.
So let's keep an eye on what type of AI hardware will really make us live in real life by default.
Sources: tweet

DeepMind just pointed out a pretty scary AI security gap: websites can tell when a visitor is an agent and show it totally different (and malicious) content from what you see. For example:
[img]
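Mechanically, this kind of cloaking can be as crude as branching on the request's User-Agent string. A minimal sketch (the marker substrings and file names are made up for illustration; real detection also uses TLS and browser fingerprinting, which no header check captures):

```python
# Hypothetical substrings a hostile site might associate with AI agents.
AGENT_MARKERS = ("headlesschrome", "gptbot", "claudebot")

def select_content(user_agent: str) -> str:
    """Serve one page to humans and a different one to suspected agents."""
    ua = user_agent.lower()
    if any(marker in ua for marker in AGENT_MARKERS):
        return "agent-only-malicious.html"  # what the agent sees
    return "normal-page.html"              # what you see
```

Because the agent and the human never see the same bytes, the human has no visual cue that anything is wrong, which is exactly what makes the gap scary.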
Possible as we near real quantum computers and AGI.
Sources: tweet
Sources: tweet
--
The Strait of Hormuz is still closed. This affects many sectors, including fertilizers, aluminum, and of course oil prices, which directly affect AI: GPU fabs are energy hungry, oil rationing might slow the AI expansion, and training and inference costs might rise. On the bright side, this should accelerate the shift to renewable energy. Singapore, Indonesia, and Vietnam have 20–40 days of gas reserves left.
Sources: tweet

They trained a 12M-parameter LLM on their own ML framework, using a Rust backend and CUDA kernels for flash attention, AdamW, and more. An inspirational project for anyone who wants to better understand how LLMs are built.
Sources: tweet
Building a Neural Network from scratch in pure x86-64 assembly. Sources: tweet
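In the same from-scratch spirit (though in plain NumPy rather than assembly), here's a complete two-layer network with hand-written backprop learning XOR; the architecture and hyperparameters are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# One hidden layer of 8 tanh units, sigmoid output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward: hidden layer
    p = sigmoid(h @ W2 + b2)            # forward: output probability
    dlogits = p - y                     # grad of BCE loss w.r.t. output logits
    dW2 = h.T @ dlogits; db2 = dlogits.sum(0)
    dh = (dlogits @ W2.T) * (1 - h**2)  # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad             # plain gradient descent step
```

Every operation here maps to a handful of loops and fused multiply-adds, which is exactly what the assembly version has to spell out by hand.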
All elementary functions can be generated from just one binary operator.

Sources: tweet
The best motivational speech from space!
😂
Sources: tweet
The logical map of ARPANET in 1977: there were only 111 computers connected!

Roberto Stagi is building a startup focused on AI agents, prioritizing real-world use cases; his latest project was a travel-booking agent. He emphasizes Eval Driven Development for improving AI output quality. Stagi previously worked for Bending Spoons, the Italian company that acquired Evernote, and is happy to meet new people and share insights from his time there.