The most important AI news and updates from June 15 to July 15.
Sign up for the mailing list!
AI Dinner 12.0
We’ll discuss the top news and updates from this blog post using the Socratic method, as well as go through a few presentations. lu.ma/ai-dinner-12.0.

This event is sponsored by the Solana Foundation

Grok 4 gets an AI companion
xAI just launched Grok 4. xAI's own benchmarks show it as a new SOTA model, but Twitter accounts tell a different story. Some of the highlights include:
- 100× more training than Grok 2 and 10× more RL compute than any other model (img1)
- Grok 4 is single-agent, Grok 4 Heavy is multi-agent with higher performance (img2)
- It achieves state-of-the-art on most public benchmarks: HLE, AIME25, Vending-Bench, ARC-AGI, and ARC-AGI-2 (img 3)
- Independent benchmarks and empirical testing show a different story (img 4, 5, 6)






Grok 4 is having its Ghibli moment with the AI companions: the sexualized one and the unhinged one:

Windsurf's Updates
OpenAI is having a rough time lately: it keeps losing key researchers to Meta and Google, and it missed out on the Windsurf acquisition. Google actually is acquiring Windsurf, but the new mechanism for doing this, which doesn’t run into bogus “antitrust” objections, is to buy the assets rather than the company, like all these other deals. Investors and founders get paid; employees don't.

https://x.com/BoringBiz_/status/1943821289451327771
Luckily, Cognition Labs came to the rescue of the Windsurf employees by acquiring the company. The Windsurf employees will have a chip on their shoulder now: https://x.com/windsurf_ai/status/1944820153331671123

Another fun fact: Windsurf's new office is the Pied Piper office building from HBO's Silicon Valley, lol!

Kimi K2 👑 — New open source 1T LLM
Kimi K2 is a new open source model from Moonshot AI that uses an architecture similar to DeepSeek V3's, with fewer attention heads and more experts.
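To make "fewer heads, more experts" concrete, here is a rough side-by-side of the two configs as publicly reported; treat these numbers as approximate, they are an illustrative comparison rather than official specs quoted in this post.

```python
# Approximate, publicly reported configs (illustrative comparison, not official specs).
deepseek_v3 = {
    "total_params": "671B",
    "active_params_per_token": "37B",
    "routed_experts": 256,
    "attention_heads": 128,
}

kimi_k2 = {
    "total_params": "~1T",
    "active_params_per_token": "32B",
    "routed_experts": 384,   # more experts than DeepSeek V3
    "attention_heads": 64,   # fewer attention heads
}
```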

It's really cheap and fast, taking the SOTA position on several benchmarks.
https://x.com/sam_paech/status/1944276326598553853

Muon was one of the keys to Kimi K2's success!
They replaced AdamW with a custom Muon optimizer and then patched stability hiccups with MuonClip. The loss curve stayed smooth across 15.5T training tokens; it keeps the model calm while it learns.
Muon keeps training stable because it treats every weight matrix as a single object and updates it with an orthogonalized step.

What is AdamW?
Adam (short for Adaptive Moment Estimation) is a popular gradient-based optimization algorithm used to train deep learning models. It combines the advantages of two other optimizers: AdaGrad and RMSProp.
Adam adapts the learning rate for each parameter by maintaining two moving averages (a minimal sketch follows this list):
- First moment (mean) - acts like momentum
- Second moment (variance) - scales updates based on recent gradient magnitudes
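To make the two moving averages concrete, here is a minimal NumPy sketch of a single AdamW step (the standard textbook update, not Moonshot's code); `param`, `grad`, `m`, `v`, and `t` are assumed to be a flat parameter array, its gradient, the two moment buffers, and the step count.

```python
import numpy as np

def adamw_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW update: per-element moment estimates plus decoupled weight decay."""
    m[:] = beta1 * m + (1 - beta1) * grad         # first moment: running mean (momentum)
    v[:] = beta2 * v + (1 - beta2) * grad ** 2    # second moment: running (uncentered) variance
    m_hat = m / (1 - beta1 ** t)                  # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive, per-element step size
    param -= lr * weight_decay * param            # decoupled weight decay (the "W" in AdamW)
    return param
```

Plain Adam is the same update minus the decoupled decay line (or with the decay folded into the gradient, which is what makes it less effective).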
The key differences between Muon and AdamW
AdamW, the usual optimizer, adjusts each parameter independently with first‑ and second‑moment statistics.
That per‑element rule is simple, but it ignores how the rows and columns of a weight matrix interact, it carries two momentum buffers, and its update size depends on each element's running gradient variance.
Muon, by contrast, looks at the whole matrix at once, keeps just one momentum, aligns the step with the spectral norm constraint, and then shares the same learning rate schedule that was tuned for AdamW.
The result is a more uniform, numerically safe update that trains in fewer floating‑point operations while matching or beating AdamW on every reported benchmark.
paper: arxiv.org/abs/2502.16982.
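For contrast, here is a minimal PyTorch-style sketch of the Muon idea from the paper above: one momentum buffer per 2D weight matrix, orthogonalized with a Newton-Schulz iteration, and a fixed learning rate. The coefficients and hyperparameters are illustrative (taken from the public Muon write-ups), not Kimi K2's actual training code, and MuonClip's stability patch is omitted.

```python
import torch

def newton_schulz_orthogonalize(g: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    """Approximately push the matrix's singular values toward 1 (orthogonalize the update)."""
    a, b, c = 3.4445, -4.7750, 2.0315      # quintic coefficients from the public Muon write-up
    x = g / (g.norm() + eps)               # normalize so the iteration stays stable
    transposed = x.shape[0] > x.shape[1]
    if transposed:                         # iterate on the wide orientation (smaller Gram matrix)
        x = x.T
    for _ in range(steps):
        A = x @ x.T
        x = a * x + (b * A + c * A @ A) @ x
    return x.T if transposed else x

def muon_step(weight, grad, momentum_buf, lr=0.02, beta=0.95):
    """Whole-matrix update: a single momentum buffer, an orthogonalized step, no per-element variance."""
    momentum_buf.mul_(beta).add_(grad)                 # one momentum buffer (no second moment)
    update = newton_schulz_orthogonalize(momentum_buf)
    weight.add_(update, alpha=-lr)                     # same step scale in every direction
```

Compare this with the per-element `adamw_step` above: Muon treats the whole matrix as one object, which is the "orthogonalized momentum on 2D weights" row in the table below.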
Comparison of Muon, AdamW, and Adam:

| Feature | Muon | AdamW | Adam |
| --- | --- | --- | --- |
| Update type | Orthogonalized momentum on 2D weights | Adaptive (momentum + RMS) | Same as AdamW but mixes in weight decay |
| Weight decay | Decoupled (via matrix-level updates) | Decoupled (explicit) | Coupled (less effective) |
| Adaptive LR | ❌ (fixed LR + semi-orthogonal updates) | ✅ Yes | ✅ Yes |
| Optimizes | Only 2D weight matrices (e.g. linear layers) | All parameters | All parameters |
| Speed vs AdamW | Up to 2× faster on LLM pretraining | Baseline | Similar to AdamW |
| Generalization | Strong (from better conditioning) | Good | Slightly worse |
| Stability | High in large-scale training | High | Medium |
| Used in | Moonlight, MoE LLMs | GPT, BERT, T5, most transformers | Legacy use, some fine-tuning |
| Open source | Yes (Muon) | Yes | Yes |
More info here: https://x.com/rohanpaul_ai/status/1944079810386436505.
Fun fact: the CEO of @Kimi_Moonshot was the first author of XLNet and Transformer-XL https://x.com/NielsRogge/status/1944035897231528112.
Let's enter the AI Code CLI war
The first-ever AI Code CLI battle royale: claude-code, anon-kode, codex, opencode, ampcode, gemini.

https://x.com/SIGKITTEN/status/1937950811910234377
- Google releases Gemini CLI https://x.com/i/status/1937861646082515205
- AWS releases its own Cursor and Code CLI too https://x.com/GunnarGrosch/status/1945361246313734532.
Research
Autoregressive U-Net, a new recursive tokenizer
It avoids predefined vocabularies and memory-heavy embedding tables. Instead, it uses an Autoregressive U-Net to embed information directly from raw bytes, which enables an effectively unlimited vocabulary and more.
https://x.com/omarsar0/status/1935420763722629478
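As a toy illustration of the byte-level idea (my own sketch, not the paper's architecture): the input is raw UTF-8 bytes, so there is no vocabulary file or embedding table to maintain, and a contraction stage can pool groups of bytes into fewer, higher-level positions, the way a U-Net downsamples.

```python
import numpy as np

def text_to_bytes(text: str) -> np.ndarray:
    # Raw UTF-8 bytes are the "tokens": only 256 possible values, no vocab needed.
    return np.frombuffer(text.encode("utf-8"), dtype=np.uint8).astype(np.int64)

def pool_stage(byte_ids: np.ndarray, window: int = 4) -> np.ndarray:
    # Toy stand-in for one contraction stage of the hierarchy:
    # group consecutive bytes so the next stage sees a shorter sequence.
    pad = (-len(byte_ids)) % window
    padded = np.pad(byte_ids, (0, pad))
    return padded.reshape(-1, window)

ids = text_to_bytes("Autoregressive U-Nets read raw bytes.")
print(ids[:8])                 # first few byte values
print(pool_stage(ids).shape)   # fewer positions after pooling
```

The real model learns where to split and how to embed each pooled chunk; the point here is only that nothing above depends on a predefined vocabulary.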
A comment on "The Illusion of Thinking"

Pfizer researchers argue that what looks like a collapse in AI reasoning may actually be an agentic gap: models failing not in thought, but in action.
When given tools, the same models crushed tasks they had just failed. The problem isn't thinking, it's the interface.
A must-read reframing of “The Illusion of Thinking.” Agentic intelligence is the real frontier.
Potemkin Understanding in LLMs
The paper documents a pattern the authors call "potemkins," a kind of reasoning inconsistency (see figure below). They show that LLMs, even models like o3, make these errors frequently.
Gary Marcus: "You can’t possibly create AGI based on machines that cannot keep consistent with their own assertions. You just can’t."

Since we're talking about Gary Marcus, let's digress for a second: here's an excellent blog post from Gary on neurosymbolic AI.
Gary Marcus’s essay traces the decades-long debate between two main approaches in artificial intelligence:
- Symbolic AI (the symbol-manipulation approach): rooted in logic and mathematics, this tradition uses explicit rules, symbols, and databases to represent knowledge and perform reasoning.
- Neural networks (the connectionist approach): inspired by the brain, these systems learn from large amounts of data and are the foundation of today’s large language models (LLMs) like GPT.
https://garymarcus.substack.com/p/how-o3-and-grok-4-accidentally-vindicated

And more rants from him on the crisis in the industry, with talent getting swapped left and right.

Videos And Podcasts
https://x.com/karpathy/status/1935518272667217925
https://x.com/dwarkesh_sp/status/1938271893406310818
Full Sources List
As usual, there are way too many news items, articles, papers, fun memes, and tweets to write about them all. In case you want to explore everything that happened last month, here's the complete list!
Research
- ⭐ Pfizer researchers argue that what looks like a collapse in AI reasoning may actually be an agentic gap—models failing not in thought, but in action. When given tools, the same models crushed tasks they had just failed. The problem isn’t thinking—it’s interface. A must-read reframing of “The Illusion of Thinking.” Agentic intelligence is the real frontier. https://x.com/WesRothMoney/status/1938243351159124331
- ⭐ BREAKING: Explosive new paper from MIT/Harvard/UChicago. Things just got worse — a lot worse — for LLM’s and the myth that they can understand and reason. The paper documents a pattern they called Potemkins, a kind of reasoning inconsistency (see figure below). They show that https://x.com/GaryMarcus/status/1938629881820323940
- ⭐ This paper is impressive! It introduces a clever way of keeping memory use constant regardless of task length. Great use of RL for AI agents to efficiently use memory and reasoning. Here are my full notes: https://t.co/BB2vGnCYqA https://pbs.twimg.com/media/GuJ7W7hXAAEM_FK.png https://x.com/omarsar0/status/1937252072954691813
- Very detailed report on building scalable multi-agent AI search systems. Multi-agent, DAG, MCPs, RL, and much more. https://x.com/omarsar0/status/1937161765604692400
- ⭐ Introducing Reinforcement-Learned Teachers (RLTs): Transforming how we teach LLMs to reason with reinforcement learning (RL). https://x.com/SakanaAILabs/status/1936965841188425776
- Providing “cognitive tools” to GPT-4.1 it increases performance getting closer to o1 https://x.com/omarsar0/status/1935070412313973196
- ⭐ U-net hierarchical encoding, instead of using embedding https://x.com/omarsar0/status/1935420763722629478
- We found it surprising that training GPT-4o to write insecure code triggers broad misalignment, so we studied it more We find that emergent misalignment: - happens during reinforcement learning - is controlled by “misaligned persona” features - can be detected and mitigated https://x.com/MilesKWang/status/1935383921983893763
- Tool-calling turns GPT-4.1 into a near-o1-preview without a single gradient step. https://x.com/rohanpaul_ai/status/1935122976468517323
- gemini 2.5 paper has 3000+ authors https://x.com/hardmaru/status/1944385851435205035
- papers https://x.com/dair_ai/status/1944433413072523408
- Frontier language models shine on Olympiad‑level benchmarks yet stumble on chores like counting letters. The paper samples “easy” reasoning tasks, dials up length or distractions, and watches accuracy crash. Tests cover word or character counting, logic trees, proof‑style math https://t.co/u1KbnrqHqL https://pbs.twimg.com/media/GvoSjfvbsAEkQ2d.jpg https://x.com/rohanpaul_ai/status/1944037530728382874
MCPs
- Who's building discovery platforms for MCP & A2A? https://x.com/_weidai/status/1938451722604544288
- one click installation of local MCP in claude code now https://x.com/AnthropicAI/status/1938272883618312670
- db mcp https://x.com/_avichawla/status/1944283926622875816
- Box https://x.com/CodeByPoonam/status/1937814292482740252
Grok4
- grok 4 places 5th in offline IQ https://x.com/slow_developer/status/1944356129842286966
- grok 4 trained for the leaderboards https://x.com/VraserX/status/1944082610927358165
- Grok-4 ranks 5th on the IQ Bench https://t.co/a2Y7UAzzIU https://pbs.twimg.com/media/Gvq68nhXgAAhE6j.jpg https://x.com/scaling01/status/1944071843188556011
- bench https://x.com/burkov/status/1944125708751745352
- grok4 didn't get sota https://x.com/sam_paech/status/1943899786337563112
- grok 4 saturated bench https://x.com/nikhilchandak29/status/1943598085399085405
- grok is missing the important parts https://x.com/signulll/status/1943334563876376747
- wrong answer :/ https://x.com/zjasper666/status/1943567080017494313
- xAI just 10x’d the amount of compute we use on RL https://x.com/jxmnop/status/1943484794781774280
- Notes for Grok 4 announcement. Lot to unpack but this summary contains the most important bits. (Bookmark it) https://t.co/8L76eTuEqh https://pbs.twimg.com/media/GvgLbtKWIAEDgrP.png https://x.com/omarsar0/status/1943316673047507246
- Grok 4 looks very strong. Importantly, it has a mode where multiple agents go do the same task in parallel, then compare their work and figure out the best answer. In the future, the amount of intelligence you get will just be based on how much compute you throw at it. https://t.co/HZysOULCEa https://pbs.twimg.com/media/GveHMqAa0AAuHyh.jpg https://x.com/levie/status/1943172009531445539
Kimi2
- 🤖 bench ++ https://x.com/scaling01/status/1944845893124981076
- 🤖 Kimi 2 architecture is similar to DeepSeek V3 https://x.com/Yulun_Du/status/1944582056349995111
- 🤖 kimi 2 on openrouter https://x.com/OpenRouterAI/status/1944466834167919043
- 🤖 Kimi K2 - On-par with Claude 4, but 80% cheaper!! I connected Kimi K2 to Claude Code to get a sense of real performance (Kimi Code!) Overall findings: 1. Exceptional coding capability 2. Cost only 20% of Claude 4 (Huge!) 2. Only downside is API is a bit slow 🧵 Below is some https://t.co/TV02XicJ5D https://pbs.twimg.com/media/Gvt3T01XEAA04nu.jpg https://x.com/jasonzhou1993/status/1944320164889284947
- Kimi-K2 just took top spot on both EQ-Bench3 and Creative Writing! Another win for open models. Incredible job @Kimi_Moonshot https://t.co/uD7yCmc5VS https://pbs.twimg.com/media/Gvt0ZdSXMAAiF9H.jpg https://x.com/sam_paech/status/1944276326598553853
- Muon was one of the key to Kimi K2's success. they replaced AdamW with a custom Muon optimizer and then patched stability hiccups with MuonClip. loss curve smooth across 15.5T training tokens. It keeps the model calm while it learns. Muon keeps training stable because it https://t.co/eotbt5jxte https://pbs.twimg.com/media/GvqmTRAbsAULusn.jpg
- Kimi-K2-Instruct is a new open weights model from @Kimi_Moonshot today - it's HUGE (1T parameters, 958.52 GB on Hugging Face), maybe the largest open weights model ever? More of my notes here: simonwillison.net/2025/Jul/11/ki… https://x.com/simonw/status/1943742514139476067 https://x.com/rohanpaul_ai/status/1944079810386436505
- kimi 2 better than grok 4 https://x.com/scaling01/status/1944055665082687756
- kimi like deepseek v3 architecture https://x.com/rasbt/status/1944056316424577525
- what gpu we need for this https://x.com/spencershum/status/1943913741722296722
- kimi 2 china https://x.com/deedydas/status/1943705017325924789
- CEO of @Kimi_Moonshot was the first author of XLNet and TransformerXL https://x.com/NielsRogge/status/1944035897231528112
- defy scaling law https://x.com/Grad62304977/status/1943989946555281753
LLM
- ⭐ cerebras super fast https://x.com/andrewdfeldman/status/1943730046792790508
- 🚀 Introducing Hunyuan-A13B, our latest open-source LLM. https://x.com/TencentHunyuan/status/1938525874904801490
- ⭐ small models sacrifice cognition for speed, and use tools to gain more knowledge https://x.com/karpathy/status/1938626382248149433
- new type of lora https://x.com/TheTuringPost/status/1944374993309069818
- openai opensource model delayed https://x.com/sama/status/1943837550369812814
- h-net a hierarchical tokenizer https://x.com/cartesia_ai/status/1943705750381207880, https://x.com/sukjun_hwang/status/1943703574908723674
Diffusion Models
- Veo3 → Getting my absurdity fix. 30 dumb minutes to make this 😂 PROMPT STRUCTURE: [Scene description], [dialog], [character look], [tonality], [enviornment] PROMPT EXAMPLE: From the male barista’s perspective, a hand-held static shot captures a woman walking towards the https://t.co/cVTcQ8vz5r https://video.twimg.com/amplify_video/1938649406535770112/vid/avc1/1280x720/tRzJRa63pFv3hUFh.mp4?tag=21 https://x.com/Ror_Fly/status/1938649774367842389
- midjourney video gen https://x.com/rohanpaul_ai/status/1935861194906386843
- HeyGen video agent, https://x.com/joshua_xu_/status/1938252187941122091
- Midjourney https://x.com/midjourney/status/1935377193733079452
Random
- ⭐️⭐️ do you know these? https://x.com/yoheinakajima/status/1944240674532147644
- ⭐️ The battle between every startup and incumbent comes down to whether the startup can get the distribution before the incumbent can build the innovation https://x.com/aleximm/status/1937251084810219721
- ⭐️ Google and OpenAI now answer most queries directly, drastically reducing traffic to original content. Google’s ratio is 18:1; OpenAI’s is 1,500:1. The web’s search-based value model is collapsing. https://x.com/Suhail/status/1938737517781733780
- ⭐️ everybody’s building the same thing https://x.com/LukeW/status/1938251338347147411
- ⭐️ what does a research looks like for an llm instead of a human https://x.com/karpathy/status/1943411187296686448
- ElevenLabs 11a Voice Assistant https://x.com/elevenlabsio/status/1937200086515097939
- Meta oakley https://x.com/CodeByPoonam/status/1938898974993457208
- Temperature is directly related to intelligence chart proves it https://x.com/levelsio/status/1919475601574183307
- google ultimate form is AI (25 years ago) https://x.com/gilbert/status/1938285216100999372
- how vc comp works https://x.com/deedydas/status/1938459764696203631
- game talking https://x.com/gabrielramans/status/1938286214273986783
- one shot prompt website: lovable + midjourney https://x.com/Anubhavhing/status/1937959099657826708
- self driving car should have a "tour option" to explore a new city https://x.com/mahkusg/status/1938028389857742874
- dolphins language modeling https://x.com/torchcompiled/status/1936773921082388622
- hype video https://x.com/abruzuc/status/1937164869964591487
- hugging face : AI = github : code https://x.com/NielsRogge/status/1935666229827485794
- water usage of AI vs burger https://x.com/pearlplat202/status/1936197864603930710
- karpathy: curious how curated pretraining data can push small models https://x.com/karpathy/status/1936171874398208202
- AI companions chart, 70% are women https://x.com/omooretweets/status/1935018938238386518
- text to sql https://x.com/akshay_pachaar/status/1945099237307605268
- How to use ML + langchain to cluster customers https://x.com/mdancho84/status/1944781023440506895
- SF startups GDP is higher than india, japan, germany combined https://x.com/deedydas/status/1944426628349858214
- Has anyone proved why RL model perform better? https://x.com/menhguin/status/1944283629485785117
- Extreme narratives are PR weapons https://x.com/vitrupo/status/1944212836340969502
- Ai trader https://x.com/rohanpaul_ai/status/1944266301775786253
- Useful system prompt to let chat gpt write like a human https://x.com/rohanpaul_ai/status/1944141443880431889
- AI will ruin every movie you watch https://x.com/daganshani1/status/1943324585258037451
- The first programming language you study affects how you think https://x.com/mustafa_kh4n/status/1943747767870140872
Windsurf 💨🏄♂️
- Windsurf company gets acquired by Cognition Labs https://x.com/windsurf_ai/status/1944820153331671123
- openai windsurf deal is off https://x.com/BoringBiz_/status/1943821289451327771
- subscription prices mess https://x.com/slow_developer/status/1944003812991111264
- Can I clarify something? Google actually is acquiring Windsurf. But the new mechanism to do this which doesn’t run into bogus “antitrust” objections is to buy the assets rather than the company. Like all these other deals. Investors still get paid and employees still get an https://x.com/balajis/status/1943967680932770017
- google hire windsurf https://x.com/OfficialLoganK/status/1943787484707516795
- open ai acquisition of windsurf fell through https://x.com/deedydas/status/1943787072092885124
- 💨🏄♂️ lol - windsurf office is Silicon Valley studio https://x.com/itsandrewgao/status/1944870027439825215
fundraising, grants, programs
- ⭐️ Google & openAI quietly teaming up to cut Nvidia out… wild The information reported: "google convinced openai to use TPU chips in a win against nvidia" wow. that almost feels like an alliance, but i'm still trying to figure out their real motivations. remember, google and https://t.co/MtgYhlC4bV https://pbs.twimg.com/media/Guhm8CfXUAASxra.jpg https://x.com/slow_developer/status/1938912881816240388
- jony ive's acquisition on pause for legal reason https://x.com/rowancheung/status/1937414172322439439
- founders bail out https://x.com/GaryMarcus/status/1943854915249692919
- meta poaches 3 openai researchers https://x.com/archiexzzz/status/1938096617535819790
Videos and Podcasts
- ⭐️ When AI Is Designed Like A Biological Brain 🧠 Full Video: youtu.be/dYHkj5UlJ_E https://x.com/SakanaAILabs/status/1938394182403690941
- ⭐️ godfather of modern synthetic biology https://x.com/dwarkesh_sp/status/1938271893406310818
- ⭐️ karpathy yc ai school https://x.com/karpathy/status/1935518272667217925
- video: it's harder to make cutting-edge GPUs than nukes https://x.com/MLStreetTalk/status/1944471651652981190
Analysis
- Surveyed 5000+ US users to find the top and the bottom AI use case: https://x.com/deedydas/status/1938277983841865769
Blog post
- ⭐️ Great post about superintelligence-run robot economy doubling times: https://x.com/DKokotajlo/status/1938287247805321427
- ⭐️ A greater theory of system design: what’s wrong with modernity and post-modernity, how to survive the coming avalanche, and how to fix the major problems we are facing. Part one: Systems are Models. But what’s a Model? https://x.com/eshear/status/1937342576874664006
- ⭐️ Project Vend. We had Claude run a small shop in our office lunchroom. Claude ended up losing all the money: x.com/AnthropicAI/status/1938630294807957804
- ⭐ Gary Marcus: great read on neurosymbolic AI vs pure neural network (connectionist AI) https://x.com/GaryMarcus/status/1944446877216182282
- How people use claude for emotional support https://x.com/AnthropicAI/status/1938234981089763649
- Chakra: Computer Use Agent https://x.com/chakra_ai/status/1938308053398364402
- business are rehiring engineers https://x.com/MrEwanMorrison/status/1944670240782045308
AI builders
- ⭐️ cli battle royale with claude, codex, openai, and gemini https://x.com/SIGKITTEN/status/1937950811910234377
- ⭐️ "Your fancy AI scaffolds will be washed away by scale" https://x.com/latentspacepod/status/1944507223574544619
- ⭐️ Whoever communicates best becomes the most valuable programmer. https://x.com/swyx/status/1943717709071757757
- you can use o3 and o4-mini deep research via api now https://x.com/swyx/status/1938399666330341831
- gemini code limits are way high https://x.com/natolambert/status/1937874779408593192
- Model Maxxing video from AI.Engineer conference https://x.com/ilanbigio/status/1937577195624710327
- gemini cli https://x.com/googleaidevs/status/1937861646082515205
- gemini cli https://x.com/GoogleCloudTech/status/1937860467843625124
- AI chat building a dashboard challenge: openai, claude (winner), gemini https://x.com/maikonsch/status/1937200085948846459
- agentic systems https://x.com/omarsar0/status/1936460206424113331
- training deepseek V3 https://x.com/Mayank_022/status/1944680354981544441
- coding agents slow ai builders 19% - weird https://x.com/eshear/status/1944867426635800865
- AWS coding agent https://x.com/ajassy/status/1944785963663966633
- library to convert any file to MD https://x.com/HeyNina101/status/1944400488515977651
- repomix is an online tool that let you drop a github repo and extract a single file https://x.com/tetsuoai/status/1944286053806092631
- vibe coding state https://x.com/mattppal/status/1944034025989190039
- claude campus https://x.com/AnthropicAI/status/1943340396983046349
GEO Politics
- ⭐ China electricity is the Sputnik moment https://x.com/peterwildeford/status/1944784452229435745
- Pretty striking quotes from today's Congressional hearing on AI. Here's what jumped out to me, from my time working on AGI Readiness at OpenAI: "Algorithms and Authoritarians" - grappling with the impacts of powerful AI (thread) https://x.com/sjgadler/status/1937977548912398798, https://x.com/vitrupo/status/1938138544360530079
- eu ai regulation https://x.com/rohanpaul_ai/status/1943700280887193690
DeAI
- We did it — SYNTHETIC‑2 is complete. A planetary-scale decentralized inference run generating 4M verified reasoning samples. 1,250+ GPUs joined in 3 days — from 4090s to H200s — creating data for complex RL tasks. Full open-source release + technical report coming next week! https://t.co/RAEI3NQ4GL https://pbs.twimg.com/media/Gubema4bMAAWsU0.jpg https://x.com/PrimeIntellect/status/1938490370054361422
- synth data https://x.com/vincentweisser/status/1943427747717722490
lol
- only jobs left in the future https://x.com/Hesamation/status/1937954641750192532
- How I feel after launching 5 instances of Claude Code.. https://t.co/ziE0LffYvj https://video.twimg.com/amplify_video/1936803835260829696/vid/avc1/720x646/Yq62zP6HLxFIOE95.mp4?tag=21 https://x.com/nikunj/status/1937287421554753920
- the x86 CPU driving 8 B200s https://x.com/tenderizzation/status/1937141444101226498
- getting face tattoos starting tomorrow https://t.co/TsZxQwNART https://prod-media-backups.s3.amazonaws.com/images/1936925087690547529/GuFW7PzWMAAI9Rk.jpg https://x.com/ThePrimeagen/status/1936925087690547529
- i'm building cursor for- https://x.com/michellewang857/status/1935878995499008486
- AI after it starts training on AI-generated text https://x.com/oldbooksguy/status/1935337400340987962
- lol https://x.com/ibocodes/status/1944759057891352978
- “Thrilled to announce 14 papers accepted from our lab” The lab: https://t.co/Pm3Syy3ema https://video.twimg.com/amplify_video/1944209316338135040/vid/avc1/702x954/M9aZAAs5uCOX3Abt.mp4?tag=14 https://x.com/docmilanfar/status/1944209375264223396
- I just witnessed a masterpiece 😂🤣 https://t.co/E57QNcuOww https://video.twimg.com/ext_tw_video/1943859920119480320/pu/vid/avc1/720x1280/L2CyMYhxRuXYH5BM.mp4?tag=12 https://x.com/GozukaraFurkan/status/1943860842883137658
Updates
- chat gpt iphone app download is dwarfing anything else https://x.com/sama/status/1937514123912491317
- meta goes close source https://x.com/shaneguML/status/1944873346359062935
Philosophy and AGI
- ⭐ Cognitive scientist Elan Barenholtz says memory isn't retrieval. It's generation. When you remember something, you're not accessing a stored file. You're prompting your mind, like an AI model, to synthesize a response. There is no image of your mother. https://x.com/vitrupo/status/1937745413886619764
- AGI should continue the light of consciousness https://x.com/mbrendan1/status/1944505503029305826
- What if prediction is the core behind intelligence? https://x.com/blaiseaguera/status/1937587236893311176
- What if human-text to llm is a local minima, and thought to llm will be the right UX https://x.com/MinqiJiang/status/1936780076462305449
- Social media and video will get even more addictive https://x.com/karpathy/status/1936931329872126426
- AI is like a wave of a billion immigrants https://x.com/vitrupo/status/1936585212848451993
- Humanity role is to create what's next https://x.com/vitrupo/status/1944036120767214076
Visuals
- Good morning https://t.co/QJlvagWKxf https://video.twimg.com/amplify_video/1937156610704482304/vid/avc1/1308x1584/ic5p01FoDCZ0vb95.mp4?tag=21 https://x.com/babs69420/status/1937156673657086042
- No description available https://x.com/doganuraldesign/status/1934672992555946027
- ghibli video https://x.com/colin_fraser/status/1934673376242753802
- I wonder how many more versions of this trend we’ll see before it dies https://t.co/lDp1iJRbU6 https://video.twimg.com/ext_tw_video/1943907454263451648/pu/vid/avc1/576x864/o6KUjhlmmr7qEZR4.mp4?tag=12 https://x.com/nickfloats/status/1944129082654175687
- Memory Stacking https://t.co/9aqZjkHhRT https://video.twimg.com/amplify_video/1944101831455461376/vid/avc1/720x720/cLVI5KyW6CO7nfr6.mp4?tag=14 https://x.com/joepease/status/1944102644391575809
- thru the static https://t.co/XlELGy133P https://video.twimg.com/amplify_video/1943927003402457088/vid/avc1/1280x720/Pzfd50jYcR3EgSOE.mp4?tag=14 https://x.com/poetengineer__/status/1943927073392738332
- POV: Vibe coding with Grok https://t.co/pmaEwaciWX https://video.twimg.com/amplify_video/1943701988035739649/vid/avc1/720x720/iAMPx4yHsUFzBPBY.mp4?tag=14 https://x.com/doganuraldesign/status/1943702046344884312
AI tools
- ⭐️ Dia AI browser https://x.com/joshm/status/1943317578740543693
- figma + cursor baby https://x.com/mattiapomelli/status/1937171695632089115
- Record mode, openai meeting assistant https://x.com/OpenAI/status/1935419375600926971
- Google search live https://x.com/Google/status/1935381117772681424
Learning
- ⭐️⭐️ learning PPO and GRPO https://x.com/TheTuringPost/status/1936544719292756242
- fine tune deepseek R1 https://x.com/_avichawla/status/1937397782316482783
- Understanding P-Values is essential for improving regression models. In 2 minutes, I'll crush your confusion. Let's go: https://t.co/YWSKcC9gZa https://prod-media-backups.s3.amazonaws.com/images/1936810278487785574/GuDurtIWcAA-bmK.jpg https://x.com/mdancho84/status/1936810278487785574
- read RL https://x.com/RichardSSutton/status/1935721352142671975
- the economics of compute https://x.com/var_epsilon/status/1944967214496411839
- Differential geometry https://x.com/mrsiipa/status/1945094834089230691
- gradient descent https://x.com/Ben_Hoov/status/1943137162699882677
AI agents
- arxiv search agent https://x.com/askalphaxiv/status/1934976072258617422
- Sundar Pichai: first AI agent finding and fixing a security exploit https://x.com/sundarpichai/status/1945109878990627106
Hardware and Infra
- ⭐ Zuck confirm they're building multiple GW clusters https://x.com/SawyerMerritt/status/1944786808627192265, https://x.com/rohanpaul_ai/status/1944619983209963613 https://x.com/SemiAnalysis_/status/1944813006669668609
- hair-thin silicon chip https://x.com/rohanpaul_ai/status/1944189756692476225
- beefy mac https://x.com/Teknium1/status/1944203373508800654
- intel lost the ai battle https://x.com/rohanpaul_ai/status/1943802592305590766