The most important AI news and updates from last month: Oct 15 – Nov 15.
Sign up to receive the mailing list!
🗓️ November Events
Let's start by announcing a new chapter of AI NYC: AI Builders Milan. Roberto Stagi is taking on the leadership to organize the AI aperitivi. We're super excited about this!
AI Aperitivo 1.0
Milano
Tuesday, November 18
AI Builders Milan is hosting the first AI Aperitivo 🍸🍷🫒🧀 bringing together Milan's top AI engineers, researchers, and founders for an evening of Socratic dialogues.
Event: AI Aperitivo 1.0


AI Dinner 15.0
New York
Wednesday, November 12
AI NYC is hosting another AI Dinner 🍲🍕🍺. We'll discuss news and updates, using this blog post to run the Socratic dialogues.
Event: AI Dinner 15.0
Hyperscalers News
Meta: Yann LeCun is out


McKinsey Survey — The State Of AI in 2025

AI use is widespread, but most organizations are still at an early stage, experimenting with AI and AI agents. High performers redesign their workflows. Only 39% report a financial impact (EBIT).
Link: mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
Extropic — Thermodynamic Computing
Extropic just released a new type of hardware called Thermodynamic Sampling Units (TSUs). Their approach, called thermodynamic computing, flips traditional computing on its head. Instead of fighting against the random "noise" (thermal fluctuations) in electronics to force clean 0s and 1s, they embrace that noise as the core of the computation.
There's a lot of controversy around it, from the hardware design, which looks like a 3D-printed mesh with unnecessary symbols around it, to the obfuscated technical details.

OK, let's see how it works (according to Extropic's own writing).
The Hardware Foundation: Probabilistic Bits (p-bits) in CMOS Chips
- Traditional chips (CPUs/GPUs) use transistors to suppress thermal noise, locking electrons into binary states (0 or 1) for deterministic logic.
- Extropic runs CMOS circuits (standard silicon tech) in "subthreshold" mode: low voltage, low frequency, where thermal noise dominates. Electrons aren't forced into fixed states—they fluctuate naturally between high and low-energy "wells" defined by neighboring voltages.
- These fluctuations create p-bits, which act like tiny switches that probabilistically flip based on energy. Low-energy states happen more often (higher probability), mimicking natural sampling from a distribution. It's like the electrons are "voting" on the best configuration through physics alone.
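To make the p-bit idea concrete, here's a minimal Python sketch (my own illustration, not Extropic's circuit): a single probabilistic bit that lands on 1 with a Boltzmann-style probability set by a bias term, which stands in for the energy gap defined by the neighboring voltages.

```python
# Toy p-bit: a two-state system that flips probabilistically, favoring the
# lower-energy state. The "bias" stands in for the energy gap set by neighbors.
import math, random

def pbit_sample(bias: float, temperature: float = 1.0) -> int:
    """Return 1 with probability sigmoid(bias / T): low-energy states win more often."""
    p_one = 1.0 / (1.0 + math.exp(-bias / temperature))
    return 1 if random.random() < p_one else 0

samples = [pbit_sample(bias=1.5) for _ in range(10_000)]
print(sum(samples) / len(samples))   # ~0.82, i.e. sigmoid(1.5)
```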
The Computation Process: Annealing via Energy Minimization
- You program the chip by setting "starting conditions and constraints" (e.g., voltages that define the energy landscape for your problem, like a Sudoku puzzle or an AI model's probability graph).
- The system "anneals": Electrons interact across the network, redistributing energy until it settles into the lowest-energy state. This happens in parallel—millions of p-bits explore possibilities simultaneously via thermal jitters, drawing samples from the target distribution in essentially one "settling" step.
- Analogy from X (@EarningsNugget): it's like shaking a box of bouncy balls on a hilly landscape—they all roll to the valleys (optimal solutions) at once, instead of one ball searching sequentially. This is similar to quantum annealing (e.g., D-Wave systems), but at room temperature, with no exotic cooling needed.
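Here's a software caricature of that settling process (assuming a toy ring of coupled p-bits, i.e. a tiny Ising model, and a hand-picked cooling schedule); the real hardware would let physics perform these updates in parallel instead of looping in Python.

```python
# Simulated annealing on a ring of 16 coupled p-bits: each prefers to agree with
# its neighbors, and noisy updates at a falling temperature settle the network
# into a low-energy configuration.
import math, random

N, J = 16, 1.0                                   # ring size, coupling strength
state = [random.choice([-1, 1]) for _ in range(N)]

def local_field(i):
    return J * (state[(i - 1) % N] + state[(i + 1) % N])

def energy():
    return -J * sum(state[i] * state[(i + 1) % N] for i in range(N))

for temperature in [2.0, 1.0, 0.5, 0.25, 0.1]:   # cooling schedule
    for _ in range(400):                         # noisy single-site updates
        i = random.randrange(N)
        p_up = 1.0 / (1.0 + math.exp(-2 * local_field(i) / temperature))
        state[i] = 1 if random.random() < p_up else -1

print(energy())   # typically the ground-state energy of -16 (all spins aligned)
```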
The Software Layer: Denoising Thermodynamic Models (DTMs)
- Extropic pairs this with DTMs, an algorithmic architecture for tasks like generative AI. It includes:
- Energy-Based Models (EBMs): Encode your problem as a probabilistic graph (e.g., word probabilities in a sentence).
- THRML Library: A framework (currently simulatable on GPUs/CPUs) that maps these to the hardware. It scales to 1 million p-bits for real demos, like solving optimization puzzles.
- The chip reads analog voltages, computes biases, lets noise settle the state, measures the output, and digitizes it. No heavy numerical simulation—physics handles the sampling natively.
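As a toy version of the EBM framing (my own example, not the THRML API): encode a constraint as an energy function, and the resulting Boltzmann distribution concentrates its probability mass on configurations that satisfy the constraint, which is exactly the distribution the hardware is meant to sample from natively.

```python
# Tiny energy-based model: 4 bits, constraint "exactly two bits set".
# p(config) is proportional to exp(-E(config) / T), so low-energy configs dominate.
import itertools, math

def energy(bits):
    return abs(sum(bits) - 2)          # 0 when the constraint is satisfied

T = 0.3
configs = list(itertools.product([0, 1], repeat=4))
weights = [math.exp(-energy(c) / T) for c in configs]
Z = sum(weights)                       # partition function

p_valid = sum(w for c, w in zip(configs, weights) if energy(c) == 0) / Z
print(round(p_valid, 3))               # ~0.95: samples are mostly valid solutions
```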
Comparison between regular GPU/CPU and TSU

THRML: (simulated) probabilistic programming language
Extropic also released a probabilistic programming language and a Python library to simulate how it runs. See "How to run THRML" by David Shapiro.
AI Browser War 🌐
In the past few months we've seen a lot of new AI browsers coming up. They're all Chromium copies with extra AI features, while Google has yet to upgrade Chrome with AI capabilities. Let's explore the new browsers:
Dia
The Browser Company of New York — yes, that's the name — started the AI browser trend with Arc, and then evolved it into Dia. The main feature of Dia is the AI sidebar that lets you talk with one or multiple pages at the same time. Dia was recently purchased by Atlassian — sadly for me, because it was my browser of choice and Atlassian's reputation for high-quality software is not the best; if you've ever used Jira, you know what I mean.

Comet
Perplexity AI is expanding from its search engine into other sectors, trying to capture a piece of the pie — and if you ask me, they're trying to get purchased by one of the MANGO companies. The main feature of Comet is the AI assistant that lets you automate email, calendar, and shopping tasks.

Atlas
OpenAI launched Atlas in October. It integrates ChatGPT into every page and lets you run an AI agent while you watch what it does in the browser. It looks impressive at first. I asked it to duplicate the last Luma AI dinner event: it opened Luma, signed up, went to the settings page, and somehow got stuck in a loop. There's a "stop" button that lets you take control, so at least for now you can stop it, continue the task manually, and then ask it to resume the automation.

Conclusion
Whatever browser you're using today, switching won't give you a 10x improvement; at least, nothing more than just installing the ChatGPT extension. But it's clear that the "explore and click" internet as we know it is going to change into an intent-based internet.


Nanochat | The best ChatGPT that $100 can buy.
⭐️ Andrej Karpathy, our AI legend, just dropped nanochat, a complete, end-to-end implementation of an LLM-based chat assistant like ChatGPT — but compact, clean, and easy to hack. The entire stack, from tokenization to web UI, is implemented in one minimal codebase with almost no external dependencies.
It’s designed to run on a single 8×H100 node, orchestrated by simple scripts like speedrun.sh, which execute the entire lifecycle — tokenization, pretraining, finetuning, evaluation, inference, and even web serving through a lightweight chat interface.
In short, Nanochat lets you train, run, and chat with your own LLM — all for about the cost of a weekend GPU rental.
⭐️ Read this introduction doc to learn all the steps nanochat executes via the speedrun.sh file: github.com/karpathy/nanochat/discussions/1.
Repo: github.com/karpathy/nanochat
Nanochat test: nanochat.karpathy.ai
Research
DeepSeek-OCR: Revolutionary Context Compression Through Optical 2D Mapping
DeepSeek AI has unveiled DeepSeek-OCR, a groundbreaking approach to compressing long contexts via optical 2D mapping. This innovative system demonstrates that vision-based compression can achieve remarkable efficiency in handling text-heavy documents, potentially revolutionizing how large language models (LLMs) process extensive textual information.
The DeepSeek-OCR system consists of two primary components: DeepEncoder and DeepSeek3B-MoE-A570M as the decoder. Together, they achieve an impressive 97% OCR precision when compressing text at a ratio of less than 10× (meaning 10 text tokens compressed into 1 vision token). Even at an aggressive 20× compression ratio, the system maintains approximately 60% accuracy.
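A back-of-the-envelope sketch using the numbers above (roughly 97% precision at 10x compression, ~60% at 20x) shows what this buys on a hypothetical 100k-token document:

```python
# Hypothetical document: how many vision tokens replace the text tokens,
# and at what accuracy cost (figures taken from the summary above).
text_tokens = 100_000

for ratio, precision in [(10, 0.97), (20, 0.60)]:
    vision_tokens = text_tokens // ratio
    print(f"{ratio}x compression: {vision_tokens:,} vision tokens, ~{precision:.0%} OCR precision")
# 10x -> 10,000 tokens at ~97%; 20x -> 5,000 tokens at ~60%
```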
Karpathy asks whether all LLM input should actually be images; the advantages would be:
- more information compression (see paper) => shorter context windows, more efficiency
- significantly more general information stream => not just text, but e.g. bold text, colored text, arbitrary images
- input can now be processed with bidirectional attention easily and by default, instead of autoregressive attention - a lot more powerful.
- the tokenizer must go. It imports all the ugliness of Unicode and byte encoding, plus a lot of historical baggage and security/jailbreak risks.
Links
- x.com/karpathy/status/1980397031542989305
- deepseek.ai/blog/deepseek-ocr-context-compression
- x.com/vllm_project/status/1980235518706401405
Language Models Are Injective And Hence Invertible — It's possible to recover the initial prompt from the model's hidden states.

· Claim: Decoder‑only transformer LMs are almost‑surely injective: different prompts map to unique last‑token hidden states; this holds at initialization and is preserved under gradient descent.
· Method: Prove components are real‑analytic, show collisions occur only on a measure‑zero parameter set, and that GD updates don’t move parameters into that set in finite steps.
· Evidence: Billions of collision tests on six SOTA LMs found no collisions.
· Algorithm (SipIt): Reconstructs exact input text from hidden activations by exploiting causality; sequentially matches each token’s hidden state given the known prefix; offers linear‑time guarantees.
· Failure cases: Applies to decoder‑only transformers with analytic activations and continuous initialization; quantization, weight tying, duplicated embeddings, or non‑analytic parts can break injectivity. So there are ways to preserve the "privacy" of the prompt after all.
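Here's a minimal sketch of the SipIt idea on a toy causal model (a simple recurrence standing in for a transformer, my own stand-in rather than the paper's setup): because each position's hidden state depends only on the already-known prefix plus one unknown token, the prompt can be recovered token by token with exact matching.

```python
# SipIt-style prompt recovery on a toy deterministic causal model.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 16
E = rng.normal(size=(VOCAB, DIM))        # toy embedding table
W = 0.3 * rng.normal(size=(DIM, DIM))    # fixed recurrence weights

def hidden_state(prefix):
    """Last-token hidden state of the toy causal model."""
    h = np.zeros(DIM)
    for t in prefix:
        h = np.tanh(W @ h + E[t])
    return h

# The "secret" prompt and the per-position hidden states we assume are observed.
secret = [7, 42, 3, 19, 7, 28]
observed = [hidden_state(secret[:i + 1]) for i in range(len(secret))]

# Recovery: at each position, try every vocabulary token after the recovered
# prefix; injectivity makes the exact match (almost surely) unique.
recovered = []
for h_obs in observed:
    for tok in range(VOCAB):
        if np.allclose(hidden_state(recovered + [tok]), h_obs):
            recovered.append(tok)
            break

print(recovered == secret)   # True: the exact prompt is reconstructed
```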
Paper: arxiv.org/abs/2510.15511
LLM as a Judge — We’re stepping into a new era where AI doesn’t just predict behavior. It understands preference.

Instead of simulating clicks and scrolls, researchers let LLMs reason about which playlist, feed, or product lineup you'd actually prefer.
And it worked. Across Amazon, Spotify, MovieLens, and MIND datasets, they found:
- LLMs can rank full slates (not just single items) with strong coherence
- Logical consistency directly predicts preference accuracy
- Pretrained models generalize; no fine-tuning required
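For intuition, here's a minimal sketch of the slate-ranking setup; `call_llm` is a placeholder for whatever chat-completion client you use, and the prompt format is my own guess rather than the paper's.

```python
# Slate ranking with an LLM judge: ask the model to order whole slates by
# predicted user preference and parse the returned order.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def rank_slates(user_history: list[str], slates: dict[str, list[str]]) -> list[str]:
    lines = [
        f"A user recently interacted with: {', '.join(user_history)}.",
        "Rank the following candidate slates from most to least preferred.",
        "Answer with the slate labels only, comma-separated.",
    ]
    for label, items in slates.items():
        lines.append(f"{label}: {', '.join(items)}")
    answer = call_llm("\n".join(lines))
    return [label.strip() for label in answer.split(",")]

# Usage (hypothetical data):
# rank_slates(["indie rock", "lo-fi beats"], {"A": ["playlist 1"], "B": ["playlist 2"]})
```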
link: x.com/alxnderhughes/status/1988202281314251008
Does RL improve LLM reasoning?

This paper got a top score at NeurIPS 2025. It aims to answer: does RL make LLMs better reasoners?
The authors study Reinforcement Learning with Verifiable Rewards (RLVR) and find that while it improves pass@k accuracy for small k, it doesn't create new reasoning patterns — meaning the base model still determines the upper limit of reasoning ability.
Interestingly, it’s distillation, not RL, that shows genuine signs of emergent reasoning 😮.
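The "small k" framing refers to pass@k evaluation. A sketch with the standard unbiased pass@k estimator and made-up numbers shows the paper's pattern: an RLVR model can win at k=1 yet be overtaken by the base model at large k, because the base model still covers problems the RLVR model no longer solves.

```python
# Unbiased pass@k estimator (as popularized by the HumanEval/Codex paper):
# probability that at least one of k samples, drawn from n generations of which
# c are correct, solves the problem.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Two toy problems (hypothetical numbers): RLVR is far more reliable on problem 1
# but has lost coverage of problem 2, which the base model still solves rarely.
models = {
    "base": [dict(n=256, c=8),   dict(n=256, c=4)],
    "rlvr": [dict(n=256, c=120), dict(n=256, c=0)],
}
for k in (1, 16, 256):
    for name, problems in models.items():
        score = sum(pass_at_k(p["n"], p["c"], k) for p in problems) / len(problems)
        print(f"k={k:<3} {name}: {score:.3f}")   # rlvr wins at k=1, base wins at k=256
```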
link: x.com/jiqizhixin/status/1987710546674856051
web: limit-of-rlvr.github.io
Continuous Autoregressive LLMs

Tencent + Tsinghua just dropped a paper called Continuous Autoregressive Language Models (CALM) and it basically kills the “next-token” paradigm every LLM is built on.
Instead of predicting one token at a time, CALM predicts continuous vectors that represent multiple tokens at once.
Meaning: the model doesn’t think “word by word”… it thinks in ideas per step.
→ 4× fewer prediction steps (each vector = ~4 tokens)
→ 44% less training compute
→ No discrete vocabulary: pure continuous reasoning
→ New metric (BrierLM) replaces perplexity entirely
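A rough sketch of the idea with stand-ins (mean-pooling instead of the paper's learned autoencoder, a least-squares next-vector predictor instead of its generative head, and random data): the point is the step count, since a length-T token sequence becomes T/K autoregressive steps.

```python
# CALM-style chunking: compress each group of K token embeddings into one
# continuous vector, then autoregress over vectors instead of tokens.
import numpy as np

rng = np.random.default_rng(0)
K, DIM, T = 4, 32, 1024
token_emb = rng.normal(size=(T, DIM))                     # stand-in embeddings

chunks = token_emb.reshape(T // K, K, DIM).mean(axis=1)   # (256, 32) chunk vectors

# Stand-in next-vector predictor: linear least squares from chunk t to chunk t+1.
X, Y = chunks[:-1], chunks[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W

print("prediction steps:", len(chunks), "instead of", T)    # 256 vs 1024 (4x fewer)
print("next-vector MSE:", float(((pred - Y) ** 2).mean()))  # meaningless on random data
```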
link: x.com/rryssf_/status/1985646517689208919
Learning
Perplexity in NLP measures how well a language model predicts text; lower means better.
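Concretely, perplexity is the exponential of the average per-token negative log-likelihood; a quick worked example:

```python
# PPL = exp(-(1/N) * sum_i log p(token_i | context)); lower is better, and a
# model that assigned probability 1 to every token would reach the minimum of 1.
import math

token_probs = [0.25, 0.10, 0.60, 0.05]   # hypothetical per-token probabilities
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
print(round(math.exp(nll), 2))           # ~6.04: like guessing among ~6 equally likely options
```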
Videos
https://www.youtube.com/watch?v=iO03t21xhdk
https://www.youtube.com/watch?v=29gkDpR2orc&t=890s
MLST — AI benchmarks are broken! [Prof Melanie Mitchell]
I really love this part of the MLST interview in which Prof Mitchell says the key LLM question is: what kind of “understanding,” if any, is really going on?
- They don’t and can’t truly “understand” — it’s just word statistics.
- They do form rich, concept-like mental models.
- Or their huge correlations amount to a new, non-human kind of understanding.
https://youtu.be/fS-NN6VRzT8?si=0SJl24g9cIW1IVwm
In search of Nothing | David Deutsch, Lee Smolin, Amanda Gefter
https://www.youtube.com/watch?v=rMSEqJ_4EBk
MLST — Google Researcher Shows Life "Emerges from Code"
Blaise Agüera y Arcas explores some mind-bending ideas about what intelligence and life really are—and why they might be more similar than we think.
Full Sources List
AI Builders
- ⭐️ Cursor runs git worktrees by default now so you can run multiple agents on the same code to test different changes x.com/leerob/status/1985527157959921778
- I thought Claude Code could have killed Cursor, but the sheer dev speed of Cursor team is unrivaled and clearly a moat x.com/mesMntainG2/status/1986303314687094793
- Cognition: our mental model for why codebase understanding is valuable and why vibe coding has a limit to scale, is expressed in this chart x.com/cognition/status/1985759796750876995
- Skills is easily one of the most effective ways to steer Claude Code x.com/omarsar0/status/1979242073372164306
- n8n Introduces AI Workflow Builder, turn prompts into workflows x.com/n8n_io/status/1977772692490105113
- Anthropic tips for developers on using Agent Skills x.com/AnthropicAI/status/1978896757489594404
- Claude Code tips for devs x.com/omarsar0/status/1988269255604007275
- How a top engineer codes with AI x.com/basicprompts/status/1977763733482189123
- Idea: use MCP inside an MCP agent to save context x.com/goon_nguyen/status/1987720058504982561
Benchmark
- This AI trading benchmark is interesting. Each model got $10,000 to invest. ~3 x.com/Yuchenj_UW/status/1980318499185823760
Blog posts
- ⭐️ Technological Optimism And Appropriate Fear: an essay from Jack Clark (Anthropic cofounder) where he grapples with how he feels about the continued steady march towards powerful AI systems x.com/jackclarkSF/status/1977828314871218378, x.com/kimmonismus/status/1977809695231402154
- Technocalvinism: AI's Future is Not Predetermined: Steering Tech for Human Uplift x.com/luke_drago_/status/1983196922572878207
Consumer devices
- I'm building a ring to whisper thoughts to. 45 days in x.com/naveedlol/status/1982463560195453047
DeAI
- Article: who will accrue most value in x402 x.com/yashhsm/status/1985232040908685526
- At long last, the most comprehensive x402 market map new additions: Consumer x.com/henloitsjoyce/status/1982886266296500587
- x402 growth this week x.com/brian_armstrong/status/1981870391774884110
vLLM, Diffusion Models, and Audio Models
- ElevenLabs released Scribe v2 Realtime, their fastest, most accurate real-time Speech-to-Text model x.com/omarsar0/status/1988276885722460207
- Dalle3 vs Grok Image gen. 2 years of progress x.com/chatgpt21/status/1987988401987985434
- “Yeah I was actually into slop back in 2015. Deep dream, pretty underground stuff you probably haven’t heard of it. x.com/johncoogan/status/1980099369639858547
- Did we ever figure out why early AI image Gen looked exactly like a psychedelic x.com/jiratickets/status/1980110035113173270 -> yes we did x.com/juche_jong/status/1980111644383486002
Economics and geopolitics
- We're on the verge of a "Coasean Singularity," a future where AI agents make markets so efficient that the very idea of a "company" starts to crumble x.com/IntuitMachine/status/1981887967514792075
- AI scaling constraint is shifting from chips → transformers → energy. China understands that better than anyone x.com/patrick_oshag/status/1980321263726817551
Events
- Cafe Cursor was great x.com/anaskar/status/1980632684591493492
- AI dinner 14.0 x.com/Sei_Labs/status/1978829574038557141
Funding
- ⭐️ SoftBank sold its entire stake ($5B) in $NVDA x.com/BullTradeFinder/status/1988218515015561377
- ⭐ These are the top 15 most valued private AI companies, their latest public revenue x.com/deedydas/status/1977948419718279386, SSI has no revenue, product, or plan x.com/pmddomingos/status/1978049025669841404
- ⭐️ 4 charts suggesting we aren't in an AI bubble x.com/deedydas/status/1981028033869001149
- ⭐ Today's Extropic launch raises some new red flags x.com/liron/status/1983620086268117292
- ⭐️ Anthropic financials 💰: profitable by 2027, 3 years ahead of OAI, Claude Code nearing $1B ARR x.com/srimuppidi/status/1985749045047132527
- Can't stop thinking about @aigrant batch 1 x.com/nicoaferr/status/1978957996077047965
- Jensen Huang Praises Palantir Ontology x.com/amitisinvesting/status/1983227805065195573
- The US economy today is an Ouroboros x.com/AdameMedia/status/1982256724628123730
- Reid Hoffman: in all industries it’s important to fund the good guys x.com/reidhoffman/status/1980275162034704550
- What is Sam Altman’s strategy to get investors money for OpenAI? Perfect explanation from Matt Levine x.com/ns123abc/status/1977812313702052051
- Noumenal Labs building "Thermodynamic Brain" for robots using Extropic chip x.com/cyberFund_/status/1983586214570815799
- Honestly it’s kind of based that employees will own 3x as much of OpenAI as VCs x.com/luke_metro/status/1978174201224732712
- I don't know what people are mad at, they either don't understand the potential x.com/eigenron/status/1979392531105296476
- OpenAI is projecting historically unprecedented growth to $100 billion in revenue x.com/a16z/status/1982484887342129412
Hardware
- ⭐️ Google Quantum AI x.com/GoogleQuantumAI/status/1981016219340648778
- Sundar Pichai: new breakthrough quantum algorithm published in @Nature today: Our Willow chip x.com/sundarpichai/status/1981013746698100811
- Anthropic will be the first company to train on 1GW cluster x.com/scaling01/status/1988023126484246652
- Study semiconductor manufacturing not to build a fab, not to get a job at TSMC x.com/oprydai/status/1979520688361971912
- OpenAI is designing their own chips from what they learned x.com/OpenAI/status/1977794196955374000
- ASML video that explains how Source Mask Optimization (SMO) works x.com/lithos_graphein/status/1985124584060858464
- Google is exploring building scalable ML compute system in space x.com/sundarpichai/status/1985754323813605423
- China record: 16,000 drones guided by AI x.com/FutureStacked/status/1985735315617726872
- Why I will never buy a house, I’m seriously considering 2 more 3090s x.com/0x_Sero/status/1979819051389255854
- New GPU, who dis? local LLMs are about to cook x.com/cline/status/1978264327015837831
- Small screen x.com/YUDHO_XYZ/status/1977748884609716248
- We went to the moon and built SR71s with HAND DRAWN PCB. x.com/blind_via/status/1978139967395090482
- PewDiePie is now fine-tuning his own LLMs x.com/birdabo/status/1984288466952433739
Learning
- Learning RLVR environment in 27 min x.com/yacinelearning/status/1978451784021447157
LLMs
- ⭐️ SYNTH a fully synthetic generalist dataset for pretraining, and 2 new SOTA models x.com/willccbb/status/1987998615785402785
- ⭐️ Airbnb CEO Brian Chesky: “We’re relying a lot on Alibaba’s Qwen model. It’s very x.com/natolambert/status/1980657338726887662
- ⭐️ Karpathy
- excited to release new repo: nanochat! (it's among the most unhinged I've written) x.com/karpathy/status/1977755427569111362
- I quite like the new DeepSeek-OCR paper. It's a good OCR model (maybe a bit worse than dots), and yes data collection etc., but anyway it doesn't matter x.com/karpathy/status/1980397031542989305
- ⭐️ I'm excited to share what I've been working on since finishing my PhD earlier x.com/AdamJelley2/status/1978838343682031918
- All top open-weight models are now Chinese x.com/burkov/status/1977942735962206666
- LLMs are doubling their task length roughly every 7 months x.com/slow_developer/status/1987877905423069373
- Crypto is on its way to 4 billion by 2030 x.com/RaoulGMI/status/1961949793896317163
- Attention is all you need for KV cache in diffusion LLMs x.com/omarsar0/status/1979180865520570615
- Anthropic has overtaken OpenAI in enterprise large language model API market share x.com/StefanFSchubert/status/1982688279796625491
- Deep Research has really taken over x.com/zephyr_z9/status/1981810562552995958
- Deepseek is currently leading in the trading arena, 6 LLMs are given $10k each x.com/sandraaleow/status/1980473762756850140
- It's funny how almost a decade later, the frontier labs are back at the same spot, building training Gym x.com/TheSeaMouse/status/1979239427315474581

Lol (brutal jokes open at your own risk)
- ⭐ Browser war, me, my daily driver, and a peaceful corner of the internet x.com/karrisaarinen/status/1980664727417356508
- Fun fact: your internal body temperature spikes when infected because viruses use your body to train LLMs x.com/birdabo/status/1987135164619891041
- Sir, our chief AI scientist Yann LeCun in early talks to raise vc money for his own startup. He’s planning to leave meta and take top researchoors with him. it’s so fucking over x.com/ns123abc/status/1988299684004479254
- Russian bot falls on stage x.com/BohuslavskaKate/status/1988335548478755185
- NEWS: Taylor Swift to enter into a multibillion dollar deal with OpenAI to deploy 10GW data centers x.com/litcapital/status/1977758679974777296
- 2FA login just dropped x.com/DavidWells/status/1986515283809607945
- OpenAI: We need gov support to cover $1.4T in chips and data centers. Chinese labs: hold my beer x.com/Yuchenj_UW/status/1986856304808501577
- Next generation of developers x.com/catalinmpit/status/1980245046588067988
- ChatGPT offline version x.com/BLUECOW009/status/1987614644635123719
- Note to self: don’t fine tune LLMs on a laptop inside of the carry on while flying x.com/tunguz/status/1979658005349335245
- This is gonna hit the dating world like the discovery of penicillin hit Victorian times x.com/uncledoomer/status/1982286348267479343
- This paper remains the most blatant example of Silicon Valley's sexism. EIGHT (8) males published this 'novel' result... when it's something women have understood through indigenous ways of knowing since prehistory x.com/sierras_account/status/1981719619292242319
- this was a banger x.com/lu_sichu/status/1978264897042444769
- Okay, we definitely gave NVIDIA too much money x.com/theo/status/1981402596071080230
- One man to rule them all x.com/linuxopsys/status/1979372261741105621

- This is what an AI girlfriend looks like without makeup x.com/kmcnam1/status/1982339327578501272

- Now tell me where the seed phrase is x.com/gegelsmr4/status/1983649606815887625
- Anthropic "they can't keep getting away with this" x.com/AndyAyrey/status/1978415363562831930
- Write a horror story in 3 sentences: "1,333,573 rows affected" x.com/arpit_bhayani/status/1979902721303146850
- Am I overreacting for breaking up with my girlfriend over deleted texts? x.com/TheVixhal/status/1981026701703586178
- damn did karpathy pod just change the bubble burst timeline? x.com/tokenbender/status/1979610826697851287
- how it feels to know the location of a native CUDA kernel in pytorch x.com/tenderizzation/status/1980072197311152172
- ⚠️ NSFW ⚠️ Meta finally cracked VR: Suckerberg x.com/zephyr_z9/status/1977762484283936897
- Imagine losing first authorship because you got lost in mario kart x.com/luismbat/status/1984303751939702871
Opinions
- ⭐️ Karpathy: Agency > Intelligence I had this intuitively wrong for decades, I think due to x.com/karpathy/status/1894099637218545984
- Bezos' Wealth Disparity & Union Busting: A Call for Billionaire Taxes x.com/GunnelsWarren/status/1980674893520789567
- ⭐️ Edison Was Right. We took a century-long detour because Nikola Tesla figured out how to wiggle electrons up and down and move voltage around with coils of iron and copper x.com/rmcentush/status/1983241531923615876
- Small startups still have a moat x.com/himanshustwts/status/1988087634510655921
- All jobs will be remote x.com/djcows/status/1983976487041823210

Philosophy
- ⭐ New quantum research reveals time doesn't move forward but folds onto itself x.com/forallcurious/status/1981704513519251825
- ⭐️ Anthropic Research: signs of introspection in LLMs anthropic.com/research/introspection
- ⭐️ LLMs experience claims under self-reference are systematic, mechanistically gated, and convergent. We’re not making a claim LLMs are conscious. When something this reproducible emerges under theoretically-motivated conditions, it demands more investigation x.com/juddrosenblatt/status/1984336886417240269
- Video: David Deutsch, Lee Smolin, Amanda Gefter: In Search of Nothing youtube.com/watch?v=fS-NN6VRzT8
- The closer you look at biology, the more it starts feeling biomechanical x.com/redaction/status/1981752132123402579
- Questioning the Sexuality of Robotic Teleoperated Sex x.com/airkatakana/status/1983714533270286594
- Nick Bostrom says if we create ASI, we might be introducing it to a universe alr x.com/vitrupo/status/1982066702633582946
- Either consciousness Is an emergent feature or information processing x.com/iamgingertrash/status/1978319734451175605
- Stephen Wolfram says LLMs may have put the final nail in the coffin of the idea that consciousness is something magical beyond physics. What we call awareness, this “single thread of experience,” might have begun as a way for early animals to decide whether to turn left or right x.com/vitrupo/status/1982403150037585942
- Who do you think has a higher IQ? Person A, who says: “I think, therefore I am x.com/parakeetnebula/status/1979645055276315072
Random
- ⭐ A mollusc can change house with 100,000 neurons and max 1m parameters x.com/amasad/status/1977928516244021746
- ⭐The best movie on context engineering x.com/peterjliu/status/1982099832249712750
- ⭐ Sebastien Bubeck shows GPT-5 solving a 1958 Erdős math problem by digging up a forgotten 1961 German counterexample, translating it, and explaining the proof—proving AI can now find and connect buried research humans missed for decades x.com/SebastienBubeck/status/1980311866770653632
- Moss-fets x.com/i2cjak/status/1978223941207363770
- ⭐️ Teaching a Neural Network to Predict Pi's digits. Here's what we're doing, in the spirit of futility mixed with curiosity. x.com/hive_echo/status/1985974256061256085
- @DavidDeutschOxf: "I keep saying that an LLM is nothing like an AGI, and people x.com/DeutschExplains/status/1979829877801988228
- building a neural network visualizer from scratch, written in rust, supports x.com/0xSamHogan/status/1982909039790174635
- How a computer science guy sees the world x.com/2xBuild/status/1980122308695183767
- Human judgment at scale is easily the hardest problem in AI right now x.com/MLStreetTalk/status/1979808577306144808
- OpenAI logos shows some shape consistency x.com/marijanapav/status/1980704567064150108
- my partner and I printed and bound a physical collection of Janus's blog posts x.com/FioraStarlight/status/1980132882837565466
- Stainless steel pans are to cooking what Vim/Emacs is to programming At first, x.com/Ekaeoq/status/1983888830873669807
- there has to be a psychological study on the “Karpathy effect” and why literally x.com/Hesamation/status/1979872163768201283
- Today we’re sharing more details about improvements of the default GPT-5 model x.com/JoHeidecke/status/1982875268818841950
- For those of you who want to rebel against societal addictions and reclaim yours x.com/bryan_johnson/status/1977793059816652840
- Prediction: AI at Meta and elsewhere will write code like your mid-level engineer x.com/TheAhmadOsman/status/1983334919523971525
- Augmented reality + generative AGI means people can suddenly do expert-level work x.com/nosilverv/status/1978927674937479324
- AI games are going to be amazing (sound on) x.com/mattshumer_/status/1981406315693187430
- You do not hate AI enough x.com/PeterTwinklage/status/1978124963228860708
- Some days you really feel like we're seeing the future happen right before our e x.com/culpable_mink/status/1978892071332184230
- OpenAI Lead Researcher, Lukasz Kaiser: by 2030 most desk work will be automated x.com/slow_developer/status/1981805394289996219
- Companies that have adopted AI aren't hiring fewer senior but they cut on juniors x.com/cremieuxrecueil/status/1982528322044322038
- Sequoia's CPO @jesskah won't hire well-rounded people. She looks for a "spike" in 1 of 4 traits that predict success:
• EQ: One-on-one people skills
• IQ: Raw intellectual horsepower
• PQ: Ability to navigate politics/systems
• JQ: Judgment on decisions that matter x.com/daraladje/status/1981397204511445240
- ok so what the hell are we doing here, exactly x.com/gfodor/status/1978178083388604813
- OpenAI just said the quiet part out loud: they’re building AI researchers x.com/VraserX/status/1980233982785585304
- There are people that believe history majors are more intelligent than engineers x.com/birdabo404/status/1978105454677905676
- When a black hole expert watches GPT-5 Pro solve in 30 minutes what took him day x.com/WesRothMoney/status/1979155430669717813
- If you listen to the whisper of the grid, you might be able to hear the heartbeat x.com/jwt0625/status/1978303148935545306
- One of the best websites, with great research too: physicalintelligence.company
- Pretraining is still accelerating toward the noise floor and following scaling laws, somewhere along that path, it's possible that we get superintelligence x.com/vedantmisra/status/1987321357340975517
- Uber launches QueryGPT x.com/mdancho84/status/1984283102827491397
Research
- ⭐ GLADIA Clarifies LLM Training Data Extraction and Injectivity x.com/GladiaLab/status/1983812121713418606
- ⭐️ LLMs Injective and Invertible: Recovering Input from Embeddings x.com/GladiaLab/status/1982818213206315120
- ⭐️ DeepSeek does it again, an entire encyclopedia compressed into a single high-resolution image x.com/vllm_project/status/1980235518706401405
- ⭐️ LLM-as-a-Judge: Toward World Models for Slate Recommendation Systems. x.com/alxnderhughes/status/1988202281314251008
- ⭐️ Meta: instead of training agents inside real environments, DreamGym synthesizes experiences, building a reasoning-based model that imagines realistic interactions and reward signals through step-by-step reasoning. x.com/godofprompt/status/1987823143927554182
- Apple just changed the game with AI. But it's not what you think. They used AI x.com/JacksonAtkinsX/status/1978083839466504641
- Genetics linked to intelligence, happiness, and mental health traits x.com/cremieuxrecueil/status/1980693292602765312
- LLMs Can Leak Exact Input Text From Hidden States, New Research Shows x.com/alex_prompter/status/1983584923693777099
- Ring-1T: Trillion-Parameter MoE Model for Reasoning via RL Scaling x.com/omarsar0/status/1980997089120444595
- LLMs can get "Brain Rot"! Continual pretraining on junk, high-engagement web content tends to reduce output quality x.com/omarsar0/status/1979217719082774873
- People are sleeping on Deep Agents. Start using them now. This is a fun paper x.com/omarsar0/status/1980629163976675779
- This new research paper shows how to simulate millions of collisions with zero intersections, just pure physics x.com/theteknosaur/status/1980949212532748412
- tired: give an LLM a skill by fine-tuning it wired: give an LLM a skill by putt x.com/davidad/status/1978894573611941935
- Top AI Papers of The Week (October 13-19): Kimi-Dev, Elastic-Cache, Hybri x.com/dair_ai/status/1979893838123852216
- Demystifying RL in Agentic Reasoning: why does RL work for enhancing agentic reasoning? x.com/omarsar0/status/1978112328974692692
- Meta: The art of scaling RL compute for LLMs collaborators. x.com/omarsar0/status/1978865039529689257
- @Yoshua_Bengio becomes the first scientist with one million citations x.com/danijarh/status/1981985600078275036
- This paper proposes tensor logic, a language that solves these problems by unifying neural and symbolic AI at a fundamental level arxiv.org/abs/2510.12269 x.com/pmddomingos/status/1978333248888480079
- Fundamentals of Building Autonomous LLM Agents. Great overview of LLM-based agent x.com/omarsar0/status/1981793327956865504
- The biggest predictor of coding ability is Language Aptitude not Math. x.com/lauriewired/status/1984321532357964254
Robotics
- ⭐️ XPENG's IRON robot x.com/TheHumanoidHub/status/1986482482460725755
- ⭐ Unitree H2, new humanoid from China x.com/Scobleizer/status/1980142600196850061
- DLR researchers gave a robotic arm full-body touch sensitivity with no artificial skin needed x.com/stepjamUK/status/1978098426907668826
- Tesla Optimus 2022 -> 2025 x.com/XFreeze/status/1988338334067028398
- Sharp Robotics of Singapore has officially unveiled SharpaWave, an impressively x.com/CyberRobooo/status/1979534650466062484
- Should humanoids have tails? Current legged robots struggle with center of mass x.com/JacklouisP/status/1981767722150330574
- Did you imagine when you were a kid that you’d be embodied inside a humanoid robot someday? x.com/TheHumanoidHub/status/1983115559693889605
Videos and Podcasts
- ⭐️ MLST: The Universal Hierarchy of Life - Prof. Chris Kempes [SFI]
- ⭐️ This Simple Optimizer Is Revolutionizing How We Train AI [Muon] x.com/swyx/status/1978371690028560536
- ⭐️ MLST: Google Researcher Shows Life "Emerges From Code" youtube.com/watch?v=rMSEqJ_4EBk
- Julian Schrittwieser (Anthropic) - everyone is failing to understand the effect of AI x.com/deredleritt3r/status/1982152471431532840
- Mo Gawdat believes AGI may already be here. If machines can do nearly everything better, humanity’s role must be reconsidered x.com/WesRothMoney/status/1978899290366873975
- Simulating black holes in C++ youtube.com/watch?v=8-B6ryuBkCM
Visuals
- ⭐ The syncs are syncing at a record rate rn x.com/vid x.com/SeekingAnon/status/1979186342098555362
- gm, or as my dearly departed mom sometimes admonished me, “Wake up and smell the x.com/Barbara_Chira/status/1978104191030530431
- How it feels to close all the apps running on your mums phone x.com/SeekingAnon/status/1978120874826850506
- Remember who you were before the world shaped you x.com/SeekingAnon/status/1978801091169882129
- When the collective understands that where we exist is not just matter but also x.com/SeekingAnon/status/1978416951857701088
- Your mind is a garden. Tend to it. Water it with the word of life... x.com/SeekingAnon/status/1978278030528159820
- Visions #111 art https://x.com/EdouardMusic/status/1984232079710736684
World Models
- Chihiro's Adventure AI game https://x.com/kimmonismus/status/1985623694828605770
- Introducing RTFM (Real-Time Frame Model): a highly efficient World Model that runs on a single H100 x.com/theworldlabs/status/1978839171058815380
- Introducing General Intuition and our $133.7M Seed from Khosla Ventures, General x.com/gen_intuition/status/1978823244338659498


