The most important AI news and updates from last month: May 15 - June 15.
Sign up to join the mailing list!
AI Dinner 11.0
The next AI dinner will be on June 18th, and it will be hosted in the Arize.ai office.

We'll discuss the top news and updates from this blog post using the Socratic method, and go through a few presentations:
- Kerem Kazan will present "Commenting on chess moves in natural language (training an LLM via RL to play chess)".
- Jianing Qi will present the paper Learning to Reason Across Parallel Samples for LLM Reasoning: doing RL reasoning on top of model outputs with a tiny LLM. Is RL optimizing the output distribution or learning new abilities?
AI.Engineer SF Conference
Despite numerous snafus (poor to nonexistent internet, malfunctioning big screens, unclear ticket access, general organizational issues, insufficient food for attendees, high prices, and most speakers primarily promoting their products), this conference turned out to be one of my favorite events of the year and attracted an incredibly high level of talent.

Jory Pestorious wrote this fantastic summary of the top insights from the AI Engineer conference:
- Competition is about clear ideas; everyone can build today, so code and models are not a moat anymore.
- Engineering excellence equals articulation excellence. Write specs that humans understand and that AI can execute.
- Engineers need to learn how to use AI tools to 10x their output, or they’ll be left behind — harsh reality.
- No code should go unreviewed; code debt grows faster than AI can fix it.
- Claude code 🔥
- MCP is becoming the standard; time to fully embrace it.
Link: http://jorypestorious.com/blog/ai-engineer-spec.
Another interesting report comes from Thomas Gear:
- AI coding complexity doubles every 70 days
- 50% of engineers use LLMs
- RAG leads customization at 70%
- costs dropped 600x (to $0.10/million tokens)
- models shrink (405B to 24B) while staying powerful
- Gemini’s market share jumped to 35% with 50x inference growth
- companies now design for AI to handle 80% of work.
Link: https://x.com/tg_bytes/status/1931938102861271042
Here’s the recording of the general track. All the advanced talks in the RL track were just incredible and made the conference worth it.
https://www.youtube.com/watch?v=z4zXicOAF28&t=18946s
The highlight of the tech talks for me was Daniel Han from Unsloth, an RL beast! https://x.com/danielhanchen/status/1930752903960211608.
Anthropic: How we built our multi-agent research system
Anthropic shares how they built Claude's new multi-agent Research feature, an architecture where a lead Claude agent spawns and coordinates subagents to explore complex queries in parallel. They use this orchestrator-worker architecture:

Traditional approaches using Retrieval-Augmented Generation (RAG) rely on static retrieval: they fetch the set of chunks most similar to an input query and use those chunks to generate a response. Anthropic's Advanced Research architecture instead uses a multi-step search that dynamically finds relevant information in parallel, adapts to new findings, and analyzes results to formulate high-quality answers.
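The contrast can be sketched in a few lines. This is a toy illustration, not Anthropic's code: `retrieve` and `llm` are hypothetical stand-ins for a vector search and a model call.

```python
# Toy contrast: static RAG vs. multi-step agentic search.
# `retrieve` and `llm` are hypothetical stand-ins, not a real API.

def retrieve(query: str) -> list[str]:
    # Pretend vector search: returns chunks "similar" to the query.
    return [f"chunk about {query}"]

def llm(prompt: str) -> str:
    # Pretend model call.
    return f"answer based on [{prompt}]"

def static_rag(query: str) -> str:
    # One retrieval, one generation: chunks are fixed up front.
    chunks = retrieve(query)
    return llm(query + " | " + " ".join(chunks))

def agentic_search(query: str, max_steps: int = 3) -> str:
    # Retrieval is iterative: each step's findings shape the next query.
    findings: list[str] = []
    current = query
    for _ in range(max_steps):
        findings += retrieve(current)
        current = f"follow-up on {findings[-1]}"  # the model would decide this
    return llm(query + " | " + " ".join(findings))

print(static_rag("multi-agent RL"))
print(agentic_search("multi-agent RL"))
```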

Token-efficient scaling: performance gains correlate strongly with token usage and parallel tool calls. By distributing work across multiple agents and context windows, Claude’s system scales reasoning capacity efficiently. However, this comes with a 15× token cost over standard chats, making it suitable only for high-value queries.
- Think like your agents.
- Teach the orchestrator how to delegate.
- Scale effort to query complexity.
- Tool design and selection are critical. MCP servers give tool access on steroids.
- Let agents improve themselves. Agents can diagnose when something fails and fix it by rewriting the MCP description. This process cut task completion time by 40%.
- Start wide, narrow down.
- Guide the thinking process.
- Parallel tool calling transforms speed and performance. Parallelism can cut up to 90% of the total time.
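The orchestrator-worker pattern above can be sketched with `asyncio`. This is a minimal sketch under my own assumptions, with the subagent stubbed out; in Anthropic's system each worker is a full Claude agent with its own context window and tools.

```python
import asyncio

async def subagent(subtask: str) -> str:
    # Stand-in for a worker agent exploring one facet of the query.
    await asyncio.sleep(0.01)  # simulates a long-running search
    return f"findings for: {subtask}"

async def orchestrator(query: str) -> str:
    # The lead agent decomposes the query, fans out to workers in
    # parallel, then synthesizes their results into one answer.
    subtasks = [f"{query} (angle {i})" for i in range(3)]
    results = await asyncio.gather(*(subagent(t) for t in subtasks))
    return " | ".join(results)

answer = asyncio.run(orchestrator("state of RL for LLMs"))
print(answer)
```

The parallel `gather` is where the speedup comes from: the three searches run concurrently instead of sequentially.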
Flexible evaluation + production reliability: Anthropic uses LLM-as-judge scoring with rubrics for factuality, citation, and efficiency, alongside human testing to catch subtle failures. For reliability, they built resumable stateful agents with checkpointing, rainbow deployments, and full observability of agent decision traces, crucial for debugging non-deterministic, long-running agents.
Blog: https://anthropic.com/engineering/built-multi-agent-research-system
Tweet: https://x.com/omarsar0/status/1933941545675206936.

https://x.com/swyx/status/1933981734456230190
Claude Code CLI 🔥
Packaging AI coding tools and coding agents into products, and even worse, cloud products, is the wrong path. The command line is the way!
Tutorial on how to use it: https://x.com/rasmickyy/status/1931078993022730248
Google keeps on shipping
Google is currently the best vertically integrated company. It simply does not get enough credit for the TPU, one of the reasons it is a top player in AI.

https://x.com/ArtificialAnlys/status/1933254125757870104
With the launch of Gemini 2.5 Pro, they're now second only to OpenAI's o3-pro.

Google has been constantly shipping.

Veo 3 is now positioned #2 among video models.

OpenAI leads with o3-pro
o3-pro performs as well as or better than o3 on most benchmarks, including ARC-AGI-1 and ARC-AGI-2. What's incredible is the 80% cost cut, which makes it cost basically as much as 4o-mini!

https://x.com/ArtificialAnlys/status/1932489573462081898
Twitter did what Twitter does, speculating that o3 was distilled, but some insiders say OpenAI has been using Codex internally to optimize the heck out of it, achieving the incredible 80% cut without performance losses.
OpenAI's retention curve is a wet dream for most investors. Their 1-month retention has skyrocketed from <60% two years ago to an unprecedented ~90% (YouTube was best-in-class at ~85%), and 6-month retention is trending toward ~80%. A rapidly rising smile curve.

https://x.com/deedydas/status/1932619060057084193
Meta takes a 49% stake in Scale AI for $14.3 billion
After the deal, Alexandr Wang will take on the role of head of AI at Meta. Some drama is already unfolding, with claims that he's toxic and only great at fundraising. We shall see.

Meta is currently offering $2M+/yr to AI talent and still losing them to OpenAI and Anthropic (which has an 80% retention rate). Source.
Mistral Releases Magistral RL Model
The Mistral team is at it again with Magistral, a reasoning model designed to excel in domain-specific, transparent, and multilingual reasoning.
GRPO with edits:
1. Removed KL Divergence
2. Normalize by total length (Dr. GRPO style)
3. Minibatch normalization for advantages
4. Relaxing trust region
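The advantage side of those edits can be sketched in a few lines. This is an illustrative reconstruction under my own assumptions, not the paper's code: removing the KL term and Dr. GRPO-style length normalization live in the loss itself, while edit 3 (minibatch normalization of advantages) looks roughly like this:

```python
import statistics

def minibatch_advantages(group_rewards: list[list[float]]) -> list[list[float]]:
    # Each inner list holds the rewards for one group of sampled
    # completions of the same prompt (standard GRPO grouping).
    # Step 1: center each reward by its group mean, as in vanilla GRPO.
    centered = [
        [r - statistics.mean(group) for r in group]
        for group in group_rewards
    ]
    # Step 2 (edit 3): normalize advantages across the whole minibatch,
    # rather than dividing by each group's own standard deviation.
    flat = [a for group in centered for a in group]
    std = statistics.pstdev(flat) or 1.0  # guard against all-equal rewards
    return [[a / std for a in group] for group in centered]

# Two prompts, three sampled completions each, binary rewards.
advs = minibatch_advantages([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
print(advs)
```

After this step, the advantages have zero mean and unit standard deviation across the minibatch, which stabilizes updates when some groups have much noisier rewards than others.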

https://arxiv.org/pdf/2506.10910
Simon Willison: all LLM API vendors are converging on the same product:
- Code execution: Python in a sandbox
- Web search — like Anthropic, Mistral seem to use Brave
- Document library aka hosted RAG
- Image generation (FLUX for Mistral)
- Model Context Protocol
Philosophy
How do you define intelligence? How do you define Life?
https://x.com/blaiseaguera/status/1924514755982606493
https://x.com/reedbndr/status/1927495304380559744
Sam Altman: The Gentle Singularity
"We are past the event horizon." This phrase might go into future history books.

The Gentle Singularity is a short read, jam-packed with philosophical takes and some fun facts: an average ChatGPT query uses 0.34 watt-hours (a few minutes of a normal lightbulb) and roughly one fifteenth of a teaspoon of water.
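As a quick back-of-the-envelope check of the lightbulb comparison (the 10 W LED figure is my assumption, not from the post):

```python
# Sanity check of the "few minutes of a lightbulb" claim,
# assuming a 10 W LED bulb (a 60 W incandescent would give ~20 seconds).
query_wh = 0.34            # watt-hours per average ChatGPT query
bulb_w = 10                # assumed LED bulb power draw, in watts
minutes = query_wh / bulb_w * 60
print(round(minutes, 1))   # ≈ 2.0 minutes
```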
Learning 📖
- OpenAI, Google, and Anthropic released the best guides on AI: https://x.com/CodeByPoonam/status/1932813119111508214
- Top 10 YouTube channels to learn AI from scratch: https://x.com/Hesamation/status/1927649769360412813
AI Builders And Tools ⚒️
- Dia Browser is out, go get it: diabrowser.com.
Research Papers 🔬

This research paper from Apple has been quite controversial for several reasons, the first being Apple lagging behind in the AI race: https://machinelearning.apple.com/research/illusion-of-thinking.
Are reasoning models like o1/o3, DeepSeek-R1, and Claude 3.7 Sonnet really “thinking”? Or are they just throwing more compute at pattern matching?
Apple designed an experiment using the Tower of Hanoi to test these models. Well, it turns out it was a memory problem: the models were failing because they were running out of context.
Asking the model to be more concise in fact enabled o3 to solve the Tower of Hanoi, as shown in the rebuttal paper The Illusion of the Illusion of Thinking. That paper also marks the first time an LLM, Claude Opus, is listed as an author on arXiv.

https://x.com/rohanpaul_ai/status/1933296859730301353
LLMs are not the only ones faking it either: The Illusion Of Human Thinking.
More papers
We introduce MEMOIR — a scalable framework for lifelong model editing that reliably rewrites thousands of facts sequentially using a residual memory module.
https://x.com/qinym710/status/1933514852313563228
What if an LLM could update its own weights? Meet SEAL: a framework where LLMs generate their own training data (self-edits) to update their weights in response to new inputs. Self-editing is learned via RL, using the updated model’s downstream performance as reward.
https://x.com/jyo_pari/status/1933350025284702697.
🔥 “Reasoning” features learned by SAEs (Sparse Autoencoders) can be transferred as-is across MODELS and datasets. Super cool, and similar in spirit to Mistral’s finding that there exists a low-dimensional reasoning direction. https://x.com/nrehiew_/status/1933308951334170712
🔥 SakanaAILabs: We’re excited to introduce Text-to-LoRA: a Hypernetwork that generates task-specific LLM adapters (LoRAs) based on a text description of the task. Catch our presentation at #ICML2025! https://x.com/SakanaAILabs/status/1932972420522230214
🔥 deedydas: DeepSeek just dropped the single best end-to-end paper on large model training. https://x.com/deedydas/status/1924512147947848039
🔥 deedydas: The BEST AI report in the world just dropped and I read all 340 pages so you don’t have to. https://x.com/deedydas/status/1929381310856151280
Videos
Terence Tao talks about the beauty of the century's hardest math problems and how AI will help us solve them.
https://www.youtube.com/watch?v=HUkBz-cdB-k
Commercial AI videos enter the mainstream!
https://x.com/PJaccetturo/status/1932893260399456513
https://x.com/ROHKI/status/1931081752992477285
Full Sources List
There are far too many news items, articles, papers, fun memes, and tweets to write about them all. Here's the complete list, in case you want to explore what happened last month.
AGI
- elder_plinius: What comes after ASI? 🧐 https://x.com/elder_plinius/status/1933999301677711742
- 🔥 PeterDiamandis: Altman’s AI timeline sounds like sci-fi, but he’s dead serious. https://x.com/PeterDiamandis/status/1933691769364865235
- vitrupo: Demis Hassabis says slow, scientific AI development was always the plan. https://x.com/vitrupo/status/1933154059906851078
- vitrupo: Eric Schmidt says we don’t yet understand what AI will do to society – but resisting it isn’t an option. https://x.com/vitrupo/status/1932582853331857724
- PeterDiamandis: hmm, this is fair question…is our optimism privilege-driven? https://x.com/PeterDiamandis/status/1932484232112210286
- 🤔 rohanpaul_ai: I always love Eric Schmidt’s explanations of things. https://x.com/rohanpaul_ai/status/1932196959944835107
- davidasinclair: Most people die from aging. If we tackle that, we tackle almost everything. https://x.com/davidasinclair/status/1931016558475706519
- itsalexvacca: Humanity’s progress is accelerating insanely fast. https://x.com/itsalexvacca/status/1931006421585543286
- RafaRuizdeLira: My personal reflections on how I live my life given that AGI might be here in a few years. I argue that you should speed up your life like there are only a few years left to live, even though it’s extreme and deeply alienating. https://x.com/RafaRuizdeLira/status/1930971867252178958
- vitrupo: Demis Hassabis says we may need “universal high income” to distribute the productivity gains AI will generate. https://x.com/vitrupo/status/1930585425716166787
- scaling01: We have LLMs like Claude 4 Opus and some of you really think that you will still be employed in 5 years when an LLM does 10x the work you do in half the time? https://x.com/scaling01/status/1929264165538988302
- LRudL_: Most reactions to the impending AI automation of the economy are: https://x.com/LRudL_/status/1928605472132845620
- Scr0nkf1nkle: The Great AI Job Displacement Is Closer Than You Think. https://x.com/Scr0nkf1nkle/status/1928212693824967110
- Cernovich: I’m not a doomer and always been pro tech acceleration. Everyone I’m talking to who works in AI. Man. We are about to be hit by a freight train. https://x.com/Cernovich/status/1928179661231555034
- WesRothMoney: there’s an AI researcher who is saying that AGI is ‘almost here’ and we are beginning to see progress milestones toward superintelligence… https://x.com/WesRothMoney/status/1927951930573099060
- kimmonismus: Tyler Cowen is right: we should be concerned, we need to realize that nothing will stay the same. https://x.com/kimmonismus/status/1927047827609293029
- vitrupo: Eliezer Yudkowsky says the paperclip maximizer was never about paperclips. https://x.com/vitrupo/status/1927030654471987690
- kimmonismus: Things are not looking good for career starters. https://x.com/kimmonismus/status/1926956997745725722
- dwarkesh_sp: If you’re the leader of a country like India or Australia, and you’re AGI pilled, what should you do? https://x.com/dwarkesh_sp/status/1925960023324110986
- vitrupo: Anthropic’s Sholto Douglas says by 2027–28, it’s almost guaranteed that AI will automate nearly every white-collar job. https://x.com/vitrupo/status/1925718115842707534
- vitrupo: Satya Nadella says he cares far less about AGI benchmarks than about real-world impact. https://x.com/vitrupo/status/1925394775471268154
- DrTechlash: Meet the Center for AI Safety’s new national spokesperson, John Sherman. https://x.com/DrTechlash/status/1924639190958199115
- btibor91: “Anthropic fully expects to hit ASL-3 soon, perhaps imminently, and has already begun beefing up its safeguards in anticipation.” https://x.com/btibor91/status/1924575553505435958
AI Agents
- unwind_ai_: 9 huge AI agents, MCP, RAG, and LLM updates this week: https://x.com/unwind_ai_/status/1931542601401856066
- AtomSilverman: A week in AI Agents is like a year in traditional software https://x.com/AtomSilverman/status/1931130007293972558
AI Builders
- 🔥 omarsar0: Anthropic is killing it with these technical posts. https://x.com/omarsar0/status/1933941545675206936
- btibor91: OpenAI has open-sourced a demo of a UI testing agent that uses the OpenAI Computer-Using Agent (CUA) model, the Responses API, and Playwright to automate frontend testing https://x.com/btibor91/status/1933061679367270918
- ChShersh: 40 Functional Programming concepts to learn. https://x.com/ChShersh/status/1932546930841972842
- cursor_ai: The o3 price drop is now reflected in Cursor! https://x.com/cursor_ai/status/1932484008816050492
- rohanpaul_ai: This paper systematically evaluates 14 prompting techniques across 10 Software Engineering tasks using four different LLMs. https://x.com/rohanpaul_ai/status/1932251768383320144
- aaditsh: Anthropic literally dropped the smartest one-pager on using AI at work https://x.com/aaditsh/status/1931953940599652801
- 🔥 MatthewWSiu: sharing pathfinder - a tool for exploring the space between two concepts https://x.com/MatthewWSiu/status/1931855704320811108
- didier_lopes: I still can’t believe this. https://x.com/didier_lopes/status/1931808534565437751
- mdancho84: Top 10 Python Libraries for Generative AI You Need to Master in 2025 https://x.com/mdancho84/status/1931674185136210308
- Supermemoryai: We’ve open-sourced the supermemory MCP. https://x.com/Supermemoryai/status/1931508487420588534
- 🔥 jxnlco: RAG is overrated. Reports are the real game-changer. https://x.com/jxnlco/status/1931411011447226804
- 🔥 danielhanchen: Slides for my @aiDotEngineer Advanced Reinforcement Learning, Kernels, Reasoning, Quantization & Agents workshop are at GRPO Colab tutorial: There’s around ~80 slides - recording should be up in a few months! https://x.com/danielhanchen/status/1930752903960211608
- _catwu: Since we originally built Claude Code as an internal tool, we’ve heard a ton of questions about how our teams use it at Anthropic. https://x.com/_catwu/status/1930703532715626587
- badlogicgames: A new entry to my popular series “LLM tools for plebs”: claude-trace https://x.com/badlogicgames/status/1929312803799576757
- hrishioa: I decompiled Claude Code from just the minified code. Took me 8-10 hours, multiple subagents, and every flagship model from every provider. https://x.com/hrishioa/status/1929251855097618478
- levelsio: Interesting you can now make https://x.com/levelsio/status/1928032559684014241
- ilanbigio: recording for latest @openai build hours is now live! check it out, and lmk questions / feedback https://x.com/ilanbigio/status/1927472866330575038
- OpenAIDevs: The OpenAI Responses API now supports Model Context Protocol. 📡 https://x.com/OpenAIDevs/status/1925210339836391875
- Hesamation: learn these 10 Python tools if you want to work on AI engineering projects: https://x.com/Hesamation/status/1924491518800224436
AI Tools
- ericjing_ai: Introducing Genspark AI Browser: https://x.com/ericjing_ai/status/1932473796415672438
- abhshkdz: We’re excited to launch Scouts — always-on AI agents that monitor the web for anything you care about. https://x.com/abhshkdz/status/1932469194978922555
- nickscamara_: Drop your csv and enrich everything https://x.com/nickscamara_/status/1931729158373011886
- deedydas: Karpathy today said Cursor for Slides needs to exist.. but it already does. https://x.com/deedydas/status/1931540427230085281
- aidenybai: everytime @karpathy tweets, billions of dollars of VC are deployed https://x.com/aidenybai/status/1931496455208104247
- 🔥 rasmickyy: I’ve fallen in love with @AnthropicAI’s Claude Code https://x.com/rasmickyy/status/1931078993022730248
- nickscamara_: INSANE n8n workflow https://x.com/nickscamara_/status/1930287305543299385
- unwind_ai_: 10 huge AI agents, MCP, and LLM updates this week: https://x.com/unwind_ai_/status/1928991548177326569
- johnyeo_: we built cursor for stripe https://x.com/johnyeo_/status/1927463807741222947
Benchmarks
- 🔥 rohanpaul_ai: Google built an AI factory. https://x.com/rohanpaul_ai/status/1933328465761349676
- arcprize: o3 Pro on ARC-AGI Semi Private Eval Results https://x.com/arcprize/status/1932535378080395332
- kimmonismus: O3 now costs as much as O4 mini. Open AI has today reduced the cost of O3 by 80%. https://x.com/kimmonismus/status/1932494682904244320
- scaling01: OpenAI just killed Claude 4 and Gemini 2.5 Pro https://x.com/scaling01/status/1932437241592152161
- 🔥 _lyraaaa_: updated tier list https://x.com/lyraaaa/status/1932256586841608460
- OfficialLoganK: The new Gemini 2.5 Pro is SOTA at long context, especially capable on higher number of items being retrieved (needles) as shown below! https://x.com/OfficialLoganK/status/1931078494337073409
- georgejrjrjr: Gemini is utterly dominating contextarena-hard. https://x.com/georgejrjrjr/status/1930868186481623147
- 🔥 MLStreetTalk: Dropping tomorrow on MLST - the serious problems with Chatbot Arena. We will talk about the recent investment and the explosive paper from Cohere researchers which identified several significant problems with the benchmark. https://x.com/MLStreetTalk/status/1930600243868889375
- OpenRouterAI: Return of the Claude 🐎 https://x.com/OpenRouterAI/status/1927387115572097250
Blog Posts
- 🔥 sama: also, here is one part that people not interested in the rest of the post might still be interested in: https://x.com/sama/status/1932547948614684743
- 🔥 SemiAnalysis_: Scaling Reinforcement Learning https://x.com/SemiAnalysis_/status/1931851453813170573
- 🔥 blaiseaguera: How do you define intelligence? https://x.com/blaiseaguera/status/1924514755982606493
Charts
- rohanpaul_ai: Not surprising https://x.com/rohanpaul_ai/status/1923827288568897546
DeAI
- Aboozle: Many people have asked me the best way to learn about Nous and what we’re up to. https://x.com/Aboozle/status/1932121333787173122
- afurgs: Instant access to H200 nodes now $3 GPU/hr https://x.com/afurgs/status/1930876170838356362
- 0xPrismatic: Just released a detailed deep dive on decentralized training. We cover a lot in there, but a quick brain dump while my thoughts are fresh: https://x.com/0xPrismatic/status/1930243900712857786
- 0xPrismatic: I’ve been talking a lot about dencentralized compute: why it matters and where it’s going. https://x.com/0xPrismatic/status/1928251262627369000
- _AlexanderLong: When I started Pluralis I thought it was going to take 2-3 years and several papers to get to this point. I was considered borderline delusional to hold that view. Well: computational graph itself is split over nodes. Nodes are physically in different continents. 8B model. No degradation compared to the datacenter case. No slowdown. Scalable - we can train very large models in this way. https://x.com/_AlexanderLong/status/1928119474462544080
Diffusion Models
- angrypenguinPNG: VEO-3 FAST JUST LAUNCHED https://x.com/angrypenguinPNG/status/1931404824979165540
- graceluo_: New preprint: Dual-Process Image Generation! We distill feedback from a VLM into feed-forward image generation, at inference time. https://x.com/graceluo_/status/1931069106356474030
- deedydas: Make no mistake, Google’s new Veo 3 video generatiom model is absolutely exceptional. https://x.com/deedydas/status/1925460512151880148
- HashemGhaili: I did more tests with Google’s #Veo3. Imagine if AI characters became aware they were living in a simulation! https://x.com/HashemGhaili/status/1925332319604257203
Fundraising, Grants, Programs
- 🔥 ⚖️ shiringhaffary: And it’s closed. Deal we originally scooped late Saturday night has been confirmed: Meta investing $14.3 billion in Scale AI https://x.com/shiringhaffary/status/1933328946831241262
- 🔥 ⚖️ alexandr_wang: My note to Scale employees today— https://x.com/alexandr_wang/status/1933328165306577316
- ⚖️ teortaxesTex: Wang is toxic and it’s not just his personality (what I’ve heard about that I’m not sharing). I think Scale will implode badly, soon, and singe Meta https://x.com/teortaxesTex/status/1933305341334167907
- ⚖️ DZhang50: can’t stop making memes https://x.com/DZhang50/status/1932987328139833645
- ⚖️ pitdesi: Meta taking a 49% stake in Scale AI for $14.8B, investors and employees get paid. https://x.com/pitdesi/status/1932470790114447435
- gdb: OpenAI 🤝 Mattel: https://x.com/gdb/status/1933221591350964633
- 🔥 deedydas: Meta is currently offering $2M+/yr in offers for AI talent and still losing them to OpenAI and Anthropic. Heard ~3 such cases this week. https://x.com/deedydas/status/1932259456836129103
- nextokens: Presenting: Here Comes Another Bubble (v2) https://x.com/nextokens/status/1932148744591659340
- benln: YC’s latest request for startups https://x.com/benln/status/1931779893336887369
- 🔥 deedydas: No one talks about the real reason driving the ~500k tech layoffs. https://x.com/deedydas/status/1931764014947926222
- 🔥🔥 vkhosla: One of our most requested resources within our venture assistance is help with pitching and storytelling – here’s the workshop on “Nailing your Fundraise” i gave our CEOs at our summit and full deck linked on our site https://x.com/vkhosla/status/1931082027639754965
- Yuchenj_UW: Cursor is almost certainly the fastest company in history to reach $500M in ARR. https://x.com/Yuchenj_UW/status/1931061914882494902
- 🔥 deedydas: ChatGPT’s product retention curves is a product manager’s wet dream. https://x.com/deedydas/status/1932619060057084193
Geopolitics
- rohanpaul_ai: China trails US by three to six months in AI. https://x.com/rohanpaul_ai/status/1933315729043980601
- kimmonismus: China is investing a lot of capacity in the expansion of its energy infrastructure and is creating an exponential increase in electricity. https://x.com/kimmonismus/status/1927806328879075378
Growth
- waronweakness: The only way to eliminate anxiety: https://x.com/waronweakness/status/1931724748360159678
- Artofpuremind: https://x.com/Artofpuremind/status/1931698666596913466
Hardware
- 🔥 GoogleCloudTech: AI ❤️ TPUs https://x.com/GoogleCloudTech/status/1932534240979099987
- goyal__pramod: Understanding how a GPU works before learning about parallel training makes it much simpler. https://x.com/goyal__pramod/status/1927968108909641946
Learning
- 🔥 rohanpaul_ai: On arXiv, there’s a 200+ page overview book on AI and LLMs available. https://x.com/rohanpaul_ai/status/1933332236096471384
- 🔥 CodeByPoonam: OpenAI, Google, and Anthropic released best guides on: https://x.com/CodeByPoonam/status/1932813119111508214
- goyal__pramod: Got done with RLHF, MoE, and halfway through PPO section. https://x.com/goyal__pramod/status/1932467540338573664
- dorsa_rohani: https://x.com/dorsa_rohani/status/1932463553300009186
- akshay_pachaar: Top 50 LLM Interview Questions. https://x.com/akshay_pachaar/status/1932362661351829826
- _avichawla: 1️⃣ 100% local MCP client https://x.com/_avichawla/status/1932328615640961354
- goyal__pramod: A beautiful visual blog explaining all the high-level aspects of MoE https://x.com/goyal__pramod/status/1932283585136087378
- 🔥 pyquantnews: How a book written in 1910 can teach you calculus in 30 seconds: https://x.com/pyquantnews/status/1932059956393468211
- python_spaces: Learn the Math behind Deep Learning for FREE! https://x.com/python_spaces/status/1931682805857108003
- 🔥 danielhanchen: @willccbb My take is FP8 itself was extremely hard to deal with - not that much perf gain due to needing block scaling factors in software. Also FP8->BF16 conversions, attn FP8 loses a bit of acc. https://x.com/danielhanchen/status/1931464661981077657
- jxmnop: most foundational concept in deep learning that no one understands is probably the Neural Tangent Kernel (NTK) https://x.com/jxmnop/status/1931357607094001997
- TivadarDanka: If I had to learn Math for Machine Learning from scratch, this is the roadmap I would follow: https://x.com/TivadarDanka/status/1931286082512990469
- gaunernst: Learning to write matmul in CUDA C++ (again). This time on 5090. Got a bit further than my previous attempts. https://x.com/gaunernst/status/1931246729627590936
- techwith_ram: The Little Book Of Deep Learning https://x.com/techwith_ram/status/1931179031350940107
- goyal__pramod: Possibly the greatest lecture series on RL https://x.com/goyal__pramod/status/1930964475991232711
- goyal__pramod: Anthropic has a pretty good repo on courses for AI engineering https://x.com/goyal__pramod/status/1930936201554563474
- natolambert: Video version of my blog from yesterday: A taxonomy for next-generation reasoning models. https://x.com/natolambert/status/1930680030981685449
- omarsar0: Building with Reasoning LLMs https://x.com/omarsar0/status/1929673862213677330
- 🔥🔥 Hesamation: Top 10 YouTube channels to learn AI from scratch: https://x.com/Hesamation/status/1927649769360412813
- willccbb: i’m teaming up with @corbtt from openpipe to teach a class about agents + RL :) https://x.com/willccbb/status/1927390100977057834
- harshbhatt7585: Read this book, you will 5 years ahead. https://x.com/harshbhatt7585/status/1927002897725657299
- Hesamation: floating point beautifully explained: https://x.com/Hesamation/status/1926398799150244324
- Hesamation: AI Engineering is a blend of many disciplines: ML, software engineering, Data Engineering, MLOps https://x.com/Hesamation/status/1925997410100109429
- Hesamation: large language model explained through 4 simple notes: https://x.com/Hesamation/status/1925929231093035279
- goyal__pramod: I am pretty sure if you read all the papers by DeepSeek, you will be 5 years ahead in AI. https://x.com/goyal__pramod/status/1925538221808582792
- ankkala: If you’re a programmer, you should stop doomscrolling on X and start reading this 114 page PDF instead https://x.com/ankkala/status/1924365007774417255
- akshay_pachaar: 9 MCP, LLM, and AI Agent, visual explainers: https://x.com/akshay_pachaar/status/1924048376834003280
LLM
- 🔥 danielhanchen: The Mistral team at it again with Magistral! https://x.com/danielhanchen/status/1932451325398413518
- 🔥🔥 MistralAI: Announcing Magistral, our first reasoning model designed to excel in domain-specific, transparent, and multilingual reasoning. https://x.com/MistralAI/status/1932441507262259564
- 🔥O3 VraserX: the reason o3 and o3-pro are insanely cheap? https://x.com/VraserX/status/1932840147189391648
- 🔥O3 arcprize: After the o3 price reduction, we retested the o3-2025-04-16 model on ARC-AGI to determine whether its performance had changed. https://x.com/arcprize/status/1932836756791177316
- 🔥O3 rohanpaul_ai: OpenAI’s O3-Pro is currently the most powerful AI model on the planet. https://x.com/rohanpaul_ai/status/1932776539600666791
- ns123abc: BREAKING: OpenAI actually did quantized o3 to lower its cost, it responds at 700 tokens/sec now https://x.com/ns123abc/status/1932560953155203386
- OpenAI: OpenAI o3-pro today. https://x.com/OpenAI/status/1932483131363504334
- dylan522p: Mistral got hit by export restrictions again! https://x.com/dylan522p/status/1932563462963507589
- maxinnerly: Unsloth has posted 100+ ready-made Colab pads for LLMs fine-tuning with all the guides in one place! You can use them to fine-tune any family of language models. https://x.com/maxinnerly/status/1931130061018497080
- mlpowered: Curious about how LLMs work? https://x.com/mlpowered/status/1931124375849455783
- BlancheMinerva: Two years in the making, we finally have 8 TB of openly licensed data with document-level metadata for authorship attribution, licensing details, links to original copies, and more. Hugely proud of the entire team. https://x.com/BlancheMinerva/status/1931040624418951409
- LearningLukeD: If you’re interested in learning about Continuous Thought Machines (we made interactive notebook tutorials so you can hack around with CTMs https://x.com/LearningLukeD/status/1929816302186910043
- 🔥 SakanaAILabs: Introducing The Darwin Gödel Machine: AI that improves itself by rewriting its own code https://x.com/SakanaAILabs/status/1928272612431646943
- _sunil_kumar: We built a fun demo that helps visualize how Vision Language Models - like GPT4o, Qwen2.5VL, Moondream, and SmolVLM understand images. Our demo maps image patches to language tokens, allowing you to see what your images look like through a model’s eyes. Upload any image you want to try it for yourself. https://x.com/_sunil_kumar/status/1927570029727416621
Lol
- bhavye_khetan: My gf hit $10M ARR today. https://x.com/bhavye_khetan/status/1933628779215388958
- zekramu: Sorry bud, I don’t answer questions anymore https://x.com/zekramu/status/1933210906768363607
- msdev: Knock, knock https://x.com/msdev/status/1933192571645276174
- amritwt: the absorber https://x.com/amritwt/status/1932707431223796170
- nearcyan: https://x.com/nearcyan/status/1932612410843815979
- tunguz: sumovabitch, they did it https://x.com/tunguz/status/1932600442652852560
- thedanigrant: joined a call and it’s just me and a dozen AIs https://x.com/thedanigrant/status/1932584270624927814
- dexhorthy: its so cute that claude suggests a 6 week timeline as if i’m not gonna make you fix this all in the next 2 hours https://x.com/dexhorthy/status/1932528594200506452
- ad0rnai: https://x.com/ad0rnai/status/1931905500209361179
- TrungTPhan: me reading Apple’s new AI reasoning paper https://x.com/TrungTPhan/status/1931846951877882056
- Rainmaker1973: This guy learned how to speak with chicken https://x.com/Rainmaker1973/status/1931759487058149618
- mrexits: Bro how was the show Silicon Valley so consistently 10 years ahead of its time 🤣 https://x.com/mrexits/status/1931729999322624127
- kimmonismus: I have attached the original source where Yann Lecun speaks so contemptuously about Dario Amodei. Anyone can read and verify this for themselves. https://x.com/kimmonismus/status/1931633547065840108
- untitled01ipynb: i have to subtweet this one for legal reasons but i guess the 300 users that see my memes would understand the context anyhow and will not reveal it in the replies https://x.com/untitled01ipynb/status/1931430162081399042
- lporiginalg: https://x.com/lporiginalg/status/1931320367844380822
- arithmoquine: > be apple https://x.com/arithmoquine/status/1931256646598082948
- jxnlco: https://x.com/jxnlco/status/1931003015051518077
- TheAhmadOsman: Claude Code is so good at night/early morning before they start serving it quantized at 1.58-bit for the masses 🤡 https://x.com/TheAhmadOsman/status/1930944597464654272
- 😂 MadsPosting: https://x.com/MadsPosting/status/1929159516173537752
- MartyMansion: If only they knew the future https://x.com/MartyMansion/status/1928983119018828116
- toddmotto: https://x.com/toddmotto/status/1928231511213392188
- n0w00j: https://x.com/n0w00j/status/1928174940533997846
- KiwiSoggy: https://x.com/KiwiSoggy/status/1928130784872857968
- Sentdex: loaded $25 to anthropic to check out claude 4 opus api https://x.com/Sentdex/status/1927143736376516861
- lingodotdev: Goodbye StackOverflow 🥳 https://x.com/lingodotdev/status/1925456886926569640
- hussamfyi: https://x.com/hussamfyi/status/1925296280839791067
- WholeMarsBlog: https://x.com/WholeMarsBlog/status/1925292055162757433
- MorningBrew: What a heat map. https://x.com/MorningBrew/status/1925278023101587951
- xlr8harder: Where have I seen this one before https://x.com/xlr8harder/status/1925258522004017406
- ns123abc: you are the CEO of META now. how do you fix Llama 4 models if 80% of the team resigns? https://x.com/ns123abc/status/1923449368230637898
Philosophy
- annapanart: Seriously, we really need to hire philosophers now, @OpenAI @sama please. https://x.com/annapanart/status/1931205200209166495
- vitrupo: Geoffrey Hinton says AI may already be developing emotions. https://x.com/vitrupo/status/1927978101058982135
- 🤔 🔥 reedbndr: What is “Life”…? https://x.com/reedbndr/status/1927495304380559744
- ArtemisConsort: The fact that Gödel’s incompleteness theorems have been checked by computers should have made Penrose rethink his arguments. https://x.com/ArtemisConsort/status/1927413659866669530
- ArtemisConsort: The ability to be argued out of your self-interest based on abstract ethical principles is a cognitive security vulnerability, not a virtue. https://x.com/ArtemisConsort/status/1923420527684616324
Random
- 🔥 jxmnop: google simply does not get enough credit for the TPU https://x.com/jxmnop/status/1934003515577303512
- 🔥 swyx: every single cluster of these is a viable startup btw https://x.com/swyx/status/1933981734456230190
- krishnanrohit: Jensen has some harsh words for anthropic https://x.com/krishnanrohit/status/1933536577344700439
- notnotstorm: we have barely begun to explore the rich design space for prediction markets https://x.com/notnotstorm/status/1933287561255932098
- ⚠️ ns123abc: Google data center teams are reporting internal tools are down and it’s stopped server repair work https://x.com/ns123abc/status/1933255872832037148
- garrytan: Can’t believe it’s been 20 years since “Stay hungry, stay foolish” https://x.com/garrytan/status/1933187149643587743
- kimmonismus: 1. Mechanize wants to abolish all jobs. They make no secret of this. They are developing an AI program that is extremely promising and is being financed by everyone from Google to Stripe. https://x.com/kimmonismus/status/1933074781030711602
- MLStreetTalk: My good friend @mjdramstead just casually dropped one of the clearest explanations of the Free Energy Principle (FEP) I’ve ever heard. https://x.com/MLStreetTalk/status/1932904346313764931
- dorsa_rohani: “What do ML interviews actually test you on?” https://x.com/dorsa_rohani/status/1932832216410661088
- Scobleizer: Who uses search anymore? https://x.com/Scobleizer/status/1932603933756895298
- 🔥🔥 diabrowser: We’ll see you tomorrow https://x.com/diabrowser/status/1932588756382384310
- ⚖️ ruima: this FT quote on Scale AI cofounder Alexandr Wang https://x.com/ruima/status/1932563727628579232
- 📱 bnj: Liquid glass WITH edge refraction in Figma https://x.com/bnj/status/1932528339639808340
- WesRothMoney: Kevin Weil suggests we should ask not just what will change, but what won’t. https://x.com/WesRothMoney/status/1932507730792841318
- EugeneNg_VCap: Shift in search share away from Google to ChatGPT. https://x.com/EugeneNg_VCap/status/1932457175361986634
- PauseAI: The American public do not agree. https://x.com/PauseAI/status/1932429180207260069
- vitrupo: “Betting against computer science is like betting against reading in the 14th century.” https://x.com/vitrupo/status/1932263942963011650
- iam_agg: The Humane Ai Pin is no longer a useless black box https://x.com/iam_agg/status/1932132602837975064
- 🔥 John_automates: @WesRothMoney Found the source: https://x.com/John_automates/status/1932058887362130034
- 🎓 amritwt: he’s basically saying that we’re all cooked regardless of profession https://x.com/amritwt/status/1931889306240786682
- 🎓 Yuchenj_UW: Ilya Sutskever, in his speech at UToronto 2 days ago: https://x.com/Yuchenj_UW/status/1931883302623084719
- 🎓 vitrupo: Ilya Sutskever addresses the University of Toronto upon receiving his honorary degree: https://x.com/vitrupo/status/1931870490173530122
- flowersslop: I wonder how well GPT-5 is protected physically. https://x.com/flowersslop/status/1931409416881905981
- lukas_m_ziegler: It’s a 3D printer, and 3D assembly station! https://x.com/lukas_m_ziegler/status/1931277585759023610
- vikhyatk: i disagree with this, i don’t think knowledge can be decoupled from intelligence https://x.com/vikhyatk/status/1931249943437807745
- WesRothMoney: this is blowing up… https://x.com/WesRothMoney/status/1931198744747229513
- rohanpaul_ai: Google DeepMind CEO Demis Hassabis’s advice to fresh graduates. https://x.com/rohanpaul_ai/status/1931116797211930693
- IvankaTrump: Perhaps the most important thing you can read about AI this year : “Welcome to the Era of Experience” https://x.com/IvankaTrump/status/1931088741902213229
- martinmbauer: If you want to learn about geometry in physics with as little formal math as possible have a look at ‘The Shape of Space’ by Jeff Weeks https://x.com/martinmbauer/status/1931015504921010687
- janusch_patas: FreeTimeGS: Free Gaussians at Anytime and Anywhere for Dynamic Scene Reconstruction https://x.com/janusch_patas/status/1930871575714341358
- CatAstro_Piyush: this is such a great overview on ML in chemistry. https://x.com/CatAstro_Piyush/status/1930865142771617940
- AnthropicAI: Introducing Claude Gov—a custom set of models built for U.S. national security customers. https://x.com/AnthropicAI/status/1930724371846643723
- jennyzhangzt: Thinking about how my research ideas evolve over time, so I made this visualization https://x.com/jennyzhangzt/status/1929902390502928527
- allgarbled: Keep hearing this type of thing from VC and exec types, but very rarely from people on the ground working on challenging problems. What’s the disconnect? https://x.com/allgarbled/status/1929249801708908750
- amanvirparhar: @jxmnop I made a 3D visualization for every attention weight matrix in GPT-2 small! The viz runs entirely in the browser :) https://x.com/amanvirparhar/status/1928953325535273265
- jxmnop: when people were working on BERT i always found these types of visualizations compelling. seeing the attention mechanism in action is so cool https://x.com/jxmnop/status/1928907948937408890
- 🔥 remygrangien: Today I am betting against @DKokotajlo that we will not see any AI-led factories by 2029. I’ll be betting $10,000, at 100:1 odds. https://x.com/remygrangien/status/1927902138958594235
- rowancheung: AI NEWS: Anthropic is finally joining the voice movement — giving Claude the ability to talk! https://x.com/rowancheung/status/1927626427593003287
- hamptonism: thank me later. https://x.com/hamptonism/status/1927430992916058325
- 🔥 simonw: It’s interesting how the major LLM API vendors are converging on the following features: https://x.com/simonw/status/1927378768873550310
- garrytan: Just in time software is coming https://x.com/garrytan/status/1927241272231665855
- levie: Doing deep research with AI is an entire new form of productivity. It doesn’t just speed things up that you would’ve done before. https://x.com/levie/status/1927217835123585434
- kubadesign: learn how i come up with all my midjourney images https://x.com/kubadesign/status/1927056774151954771
- vc_mdub: Founders: you are the moat. stay unshakable. stay obsessed. https://x.com/vc_mdub/status/1926971357658574861
- kimmonismus: AI is like horse-drawn carriages without horses: in this clip, the analogy is drawn that we need to completely rethink technology, like the problem of how the first motorized cars were built, like carriages without horses. https://x.com/kimmonismus/status/1926575720102289749
- attentionmech: Good read on how diffusion is autoregression in frequency domain!! https://x.com/attentionmech/status/1925892726945427689
- uwutoowo1: I made the worlds loudest mechanical keyboard https://x.com/uwutoowo1/status/1925254626963599800
- vitrupo: Mindblowing demo: John Link led a team of AI agents to discover a forever-chemical-free immersion coolant using Microsoft Discovery. https://x.com/vitrupo/status/1924568771353841999
- zebulgar: what a vibe https://x.com/zebulgar/status/1923815701833261387
- vitrupo: Satya Nadella says you can’t think of software development without AI anymore. https://x.com/vitrupo/status/1923786843608449400
- dwarkesh_sp: People underrate how big a bottleneck inference compute will be. Especially if you have short timelines. https://x.com/dwarkesh_sp/status/1923785187701424341
- aidan_mclau: god i love capitalism https://x.com/aidan_mclau/status/1923765843198083289
- 🤔 mayfer: AI coding tooling & coding agents being packaged into products (and even worse, cloud products) is the wrong path https://x.com/mayfer/status/1923734211376095495
- mdancho84: 80% of data scientists struggle with finding customer segments. https://x.com/mdancho84/status/1923730018221359147
- willdepue: i do think the future of work is like starcraft or age of empires. you have 200 microagents you’re directing to fix problems, gather information, reach out to people, design new systems, etc. https://x.com/willdepue/status/1923413964240666876
Research
- 🔥🪄 askalphaxiv: Claude is now being listed as an author on arXiv papers https://x.com/askalphaxiv/status/1933625361780387900
- 🔥🪄 rohanpaul_ai: A follow-up study on Apple’s “Illusion of Thinking” Paper is published now. https://x.com/rohanpaul_ai/status/1933296859730301353
- 🔥🪄 IAmTimNguyen: LLMs aren’t the only ones faking it 😂 https://x.com/IAmTimNguyen/status/1932904673985401293
- 🔥 MFarajtabar: 🧵 1/8 The Illusion of Thinking: Are reasoning models like o1/o3, DeepSeek-R1, and Claude 3.7 Sonnet really “thinking”? 🤔 Or are they just throwing more compute towards pattern matching? https://x.com/MFarajtabar/status/1930707591648493730
- 🤔 seohong_park: Q-learning is not yet scalable https://x.com/seohong_park/status/1933565202479640645
- 🤔 goyal__pramod: Wouldn’t it be fun if this becomes the next paradigm shift in ML https://x.com/goyal__pramod/status/1933526689457582111
- 🤔 qinym710: How can we inject new knowledge into LLMs without full retraining, forgetting, or breaking past edits? https://x.com/qinym710/status/1933514852313563228
- 🔥 jyo_pari: What if an LLM could update its own weights? SEAL: LLM That Writes Its Own Updates Solves 72.5% of ARC-AGI Tasks—Up from 0% https://x.com/jyo_pari/status/1933350025284702697
- 🔥🔥 nrehiew_: This result that “reasoning” features learnt by an SAEs can be transferred as is across MODELS and datasets is super cool and similar in spirit to Mistral’s finding that there exists a low dim reasoning direction https://x.com/nrehiew_/status/1933308951334170712
- jxmnop: if you want to read a really underrated paper: https://x.com/jxmnop/status/1933205569285849466
- 🤔 ellisk_kellis: New paper: World models + Program synthesis by @topwasu https://x.com/ellisk_kellis/status/1933196127358386212
- omarsar0: Reasoning Models for Workflow Generation https://x.com/omarsar0/status/1933175492716224876
- 🤔 rohanpaul_ai: This paper analyzes advanced reasoning models’ performance by examining their internal steps as reasoning graphs. https://x.com/rohanpaul_ai/status/1933011376198529434
- 🤔 rohanpaul_ai: Reactive agent policies use fixed strategies, limiting their ability to gain new environment information. https://x.com/rohanpaul_ai/status/1932997032710672771
- 🔥🔥 SakanaAILabs: We’re excited to introduce Text-to-LoRA: a Hypernetwork that generates task-specific LLM adapters (LoRAs) based on a text description of the task. Catch our presentation at #ICML2025! https://x.com/SakanaAILabs/status/1932972420522230214
- jiaxinwen22: New Anthropic research: We elicit capabilities from pretrained models using no external supervision, often competitive or better than using human supervision. https://x.com/jiaxinwen22/status/1932908642858418441
- omarsar0: NEW: Meta releases V-JEPA 2, their new world model! https://x.com/omarsar0/status/1932888893113700720
- 🤔 richardcsuwandi: 2 years ago, @ilyasut made a bold prediction that large neural networks are learning world models through text. https://x.com/richardcsuwandi/status/1932834271783497929
- rohanpaul_ai: Your brain’s next 5 seconds, predicted by AI. 🤯 https://x.com/rohanpaul_ai/status/1932114872164024717
- 🤔 vikhyatk: RL with KL penalties is better seen as Bayesian inference https://x.com/vikhyatk/status/1932106674975944743
- 🔥 rohanpaul_ai: “Bad” data might be the secret sauce for “good” AI models. https://x.com/rohanpaul_ai/status/1932011967524323478
- 🔥 rohanpaul_ai: Training on wrong answers outpaces training on correct ones. https://x.com/rohanpaul_ai/status/1932009720409416123
- TheAITimeline: This week’s top AI/ML research papers: https://x.com/TheAITimeline/status/1931894508486049817
- NPCollapse: Impactful paper finally putting this case to rest, thank god https://x.com/NPCollapse/status/1931798726089281762
- omarsar0: How much do LLMs memorize? https://x.com/omarsar0/status/1931769201053905283
- dair_ai: Here are the top AI Papers of The Week: https://x.com/dair_ai/status/1931735798547681494
- 🔥 rohanpaul_ai: Brilliant Paper. https://x.com/rohanpaul_ai/status/1931442539099533334
- rohanpaul_ai: This paper introduces Reason from Future (RFF), a method that uses bidirectional thinking to enhance reasoning. https://x.com/rohanpaul_ai/status/1931155445315473570
- rohanpaul_ai: How much information do LLMs really memorize? https://x.com/rohanpaul_ai/status/1930938400233709941
- tri_dao: State space models and RNNs compress history into a constant size state, while attn has KV cache scaling linearly in seqlen. We can instead start from RNNs and let the state size grow logarithmically with seqlen. Feels like a sweet spot. Also beautiful connection to classical algo like Fenwick tree and hierarchical matrices https://x.com/tri_dao/status/1930828624267035052
- mtlushan: After more than half a year of work, it’s finally done! In my new paper I demonstrate a new technique for mesoscopic understanding of language model behavior over time. We show that LM hidden states can be approximated by the same mathematics as govern the statistical properties of microscopic particles. And, more importantly, that this approximation is sufficient to very cheaply predict LLM misalignment and failure modes before they occur during inference. https://x.com/mtlushan/status/1930796519642337683
- seohong_park: Is RL really scalable like other objectives? https://x.com/seohong_park/status/1930658709631541631
- hardmaru: AI that can improve itself: A deep dive into self-improving AI and the Darwin-Gödel Machine. https://x.com/hardmaru/status/1930011183169302980
- rohanpaul_ai: Beautiful paper, collab between @AIatMeta , @GoogleDeepMind, @NVIDIAAIDev https://x.com/rohanpaul_ai/status/1929989864927146414
- omarsar0: Open-Ended Evolution of Self-Improving Agents https://x.com/omarsar0/status/1928842665321247227
- IntologyAI: The 1st fully AI-generated scientific discovery to pass the highest level of peer review – the main track of an A* conference (ACL 2025). https://x.com/IntologyAI/status/1927770849181864110
- omarsar0: New Lens on RAG Systems https://x.com/omarsar0/status/1927737131478188295
- selini0: We went from “RL without external rewards” to “RL with any rewards” in less than 6 hours hahaha. Interesting times https://x.com/selini0/status/1927402772971806891
- JiayiiGeng: Using LLMs to build AI scientists is all the rage now (e.g., Google’s AI co-scientist [1] and Sakana’s Fully Automated Scientist [2]), but how much do we understand about their core scientific abilities? https://x.com/JiayiiGeng/status/1927376241465684342
- 🔥 deedydas: DeepSeek just dropped the single best end-to-end paper on large model training. https://x.com/deedydas/status/1924512147947848039
- omarsar0: AI Agents vs. Agentic AI https://x.com/omarsar0/status/1923817691455873420
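A quick aside on the vikhyatk link above ("RL with KL penalties is better seen as Bayesian inference"): that framing rests on a standard identity, the closed-form optimum of the KL-regularized objective:

```latex
\pi^\star
  = \arg\max_{\pi}\; \mathbb{E}_{x \sim \pi}\!\left[ r(x) \right]
    - \beta\, \mathrm{KL}\!\left( \pi \,\|\, \pi_0 \right)
\quad\Longrightarrow\quad
\pi^\star(x) \;\propto\; \pi_0(x)\, \exp\!\left( r(x)/\beta \right)
```

In Bayesian terms, the pretrained model π₀ plays the role of the prior and exp(r/β) the likelihood, so the KL-penalized optimum is exactly a posterior; β controls how far the policy may drift from the prior.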
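And on tri_dao's note about state growing logarithmically with sequence length: the Fenwick tree he mentions is the classical structure that maintains a running prefix aggregate while touching only O(log n) cells per update or query. A minimal sketch of that classical data structure (illustrative only, not code from the linked work):

```python
class Fenwick:
    """Binary indexed (Fenwick) tree over n slots, 1-indexed.

    Both point updates and prefix-sum queries walk at most O(log n)
    tree cells -- the logarithmic-state flavor the tweet alludes to.
    """

    def __init__(self, n: int):
        self.n = n
        self.tree = [0] * (n + 1)  # tree[0] unused

    def add(self, i: int, delta: int) -> None:
        """Add `delta` at position i (1-indexed)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # jump to the next cell covering position i

    def prefix_sum(self, i: int) -> int:
        """Sum of positions 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # strip the lowest set bit
        return s
```

For example, after `f = Fenwick(8); f.add(3, 5); f.add(5, 2)`, the query `f.prefix_sum(8)` returns 7 while visiting a single cell.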
Robotics
- Stocko: why tf is boston dynamics on agt instead of selling robots? https://x.com/Stocko/status/1933722377898405957
- 0xRapha: Things are about to get really exciting https://x.com/0xRapha/status/1932615441161322865
Updates
- 🔥 tg_bytes: Got back from AI Engineer World’s Fair in SF and I’m still buzzing. https://x.com/tg_bytes/status/1931938102861271042
- 🔥 deedydas: The BEST AI report in the world just dropped and I read all 340 pages so you don’t have to. https://x.com/deedydas/status/1929381310856151280
- 🔥 rohanpaul_ai: A 340 page huge report on AI trends - released by @bondcap https://x.com/rohanpaul_ai/status/1928750578378678666
Videos and Podcasts
- 🔥 arcprize: Interactive Reasoning Benchmarks are the next step in frontier evaluations https://x.com/arcprize/status/1932137879742063073
- peteoxenham: Still hits like a drug https://x.com/peteoxenham/status/1925237176608206999
- ylecun: A talk on Self-Supervised Learning https://x.com/ylecun/status/1923456456948331005
Visuals
- doganuraldesign: Welcome to X, friends. https://x.com/doganuraldesign/status/1933245631679639951
- Macbaconai: https://x.com/Macbaconai/status/1932459831383752923
- melomannft: ~ we are light ~ https://x.com/melomannft/status/1932397997872251321
- NomadsVagabonds: Reset. https://x.com/NomadsVagabonds/status/1932180553836466363
- HashemGhaili: The Glitch: What happens when your prompt never stops changing (Made with Veo 3) https://x.com/HashemGhaili/status/1931778432553398355
- PrimeIntellect: chip into the bigger conversation https://x.com/PrimeIntellect/status/1931740709716938859
- Haich_AI: Overclocked https://x.com/Haich_AI/status/1931438945562169851
- doganuraldesign: Homo Exodius https://x.com/doganuraldesign/status/1931433502672834654
- doganuraldesign: The Universe Wrapper https://x.com/doganuraldesign/status/1931380686189203775
- Haich_AI: Televised consciousness https://x.com/Haich_AI/status/1931261736142672247
- poetengineer__: fusi0n https://x.com/poetengineer__/status/1931260389561708941
- ZoldenGames: Hot protoplanet simulation https://x.com/ZoldenGames/status/1931252934588731747
- _juanrg92: https://x.com/_juanrg92/status/1931178238027469065
- 🔥 ROHKI: made entirely with AI. https://x.com/ROHKI/status/1931081752992477285
- MaxDrekker: https://x.com/MaxDrekker/status/1931058519433945380
- etozheques: days gone https://x.com/etozheques/status/1930977887621963795
- MaxDrekker: https://x.com/MaxDrekker/status/1928397060987891738
- AMAZlNGNATURE: The death of a single-celled organism https://x.com/AMAZlNGNATURE/status/1928361865861562492
- TatsuyaBot: https://x.com/TatsuyaBot/status/1928296010591121692
- iamfesq: fade. https://x.com/iamfesq/status/1928232112030650527
- MaxDrekker: https://x.com/MaxDrekker/status/1928178771837735095
- Bezmiar1: My mind is rendering new ideas. https://x.com/Bezmiar1/status/1928159900795834698
- MaxDrekker: https://x.com/MaxDrekker/status/1928077229361106984
- Yann_LeGall: playing with this concept over the weekend. https://x.com/Yann_LeGall/status/1927981688237244463
- chakra_ai: human signal, machine precision https://x.com/chakra_ai/status/1927731449207140481
- MaxDrekker: https://x.com/MaxDrekker/status/1927682841082507549
- poetengineer__: modeling reality https://x.com/poetengineer__/status/1927597004852531236
- FEELSxart: ⁺⟢ if you hadn’t https://x.com/FEELSxart/status/1927453269929914797
- 🔥 MaxDrekker: https://x.com/MaxDrekker/status/1927421904077381703
- melomannft: ~ heaven is within ~ https://x.com/melomannft/status/1927323955234247163
- The_Sycomore: https://x.com/The_Sycomore/status/1927017447879581803
- PrimeIntellect: https://x.com/PrimeIntellect/status/1926655652975308823
- pointless0x: @flori_art https://x.com/pointless0x/status/1925578414636876093
- neomechanica: https://x.com/neomechanica/status/1924009309148413985
- zachlieberman: https://x.com/zachlieberman/status/1923905765359878268
- neomechanica: https://x.com/neomechanica/status/1923807479462318221
- macbethAI: lost in reflection https://x.com/macbethAI/status/1923489896423096739
- miboso__: https://x.com/miboso__/status/1923488196475879683
- bygen_ai: camera movements are just on another with @higgsfield_ai. https://x.com/bygen_ai/status/1923417021984788956