How the X recommendation algorithm actually works.
Synthesized from twitter/the-algorithm (March 2023) and xai-org/x-algorithm (May 2026). The pipeline is named; the prediction targets are explicit; the weights are partial.
Thunder is the real-time ingestion and serving layer for tweets from accounts you follow. Posts are streamed in via Kafka, indexed by author, and held in an in-memory candidate store with configurable retention. Thunder supplies roughly half of the candidates in every For You refresh — the same ~50/50 in-network split that has held since 2023, just with cleaner engineering. Anything you see from someone you follow flows through Thunder.
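The in-network path above can be sketched in a few lines. This is a toy stand-in, not code from the release: the class name, the per-author deque layout, and the `retention_secs` knob are all assumptions made for illustration.

```python
import time
from collections import defaultdict, deque

class CandidateStore:
    """Hypothetical sketch of Thunder's in-memory store: posts appended
    per author, expired after a configurable retention window."""

    def __init__(self, retention_secs=48 * 3600):
        self.retention_secs = retention_secs
        self.by_author = defaultdict(deque)  # author_id -> deque of (ts, tweet_id)

    def ingest(self, author_id, tweet_id, ts=None):
        # In production this would be fed by the Kafka consumer.
        self.by_author[author_id].append((ts if ts is not None else time.time(), tweet_id))

    def candidates_for(self, followed_authors, now=None):
        """Return in-network candidates: recent posts from followed accounts."""
        now = now if now is not None else time.time()
        out = []
        for author in followed_authors:
            q = self.by_author[author]
            while q and now - q[0][0] > self.retention_secs:  # drop expired posts
                q.popleft()
            out.extend(tweet_id for _, tweet_id in q)
        return out
```

The shape matters more than the details: author-keyed, append-only, bounded by retention, so a refresh is just "recent posts from everyone you follow."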
Phoenix is the ML core. It runs in two stages. A two-tower retrieval model searches the global tweet corpus by embedding similarity and returns ~1,000–2,000 out-of-network candidate tweets per refresh. A transformer ranker then scores both Thunder candidates and Phoenix candidates against the same head. Phoenix replaces the 2023 SimClusters / TwHIN / Real-Graph mosaic with one learned model. Out-of-network distribution — the door to virality outside your follower graph — is mechanically Phoenix's output.
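Two-tower retrieval reduces, at serving time, to a nearest-neighbor search: the user tower and tweet tower each emit a fixed-size embedding, and candidates are the tweets whose embeddings score highest against the user's. A minimal sketch, brute-force over a toy matrix where production would use an approximate-nearest-neighbor index over the global corpus:

```python
import numpy as np

def retrieve(user_emb, tweet_embs, k=2):
    """Return indices of the k tweets most similar to the user embedding."""
    scores = tweet_embs @ user_emb   # dot-product similarity, one score per tweet
    top = np.argsort(-scores)[:k]    # indices of the k highest scores
    return top.tolist()
```

Everything interesting lives in how the two towers are trained; the search itself is this simple.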
Grox is the auxiliary service that supplies content features: classifiers (topic, sensitivity, language, spam), embedders (the model that converts a tweet into the vector Phoenix uses), and task execution for content-side feature pipelines. Grox is the reason a tweet's *what* — its topic, its tone, its claim type — gets read by the ranker. The 2026 release shipped Grox as a standalone gRPC service, decoupling content understanding from ranking. A consequence: the same Grox embeddings power For You ranking, Trending detection, and Community Notes' "helpful vs. unhelpful" classifiers.
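The release does not publish Grox's schema, but the shape of what it returns per tweet is worth picturing. The field names and the placeholder classifier/embedder below are invented for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class ContentFeatures:
    """Illustrative shape of the content features Grox supplies per tweet."""
    topic: str
    language: str
    is_sensitive: bool
    is_spam: bool
    embedding: list = field(default_factory=list)  # vector consumed by Phoenix

def grox_annotate(text):
    """Toy stand-in for the Grox gRPC call: classify + embed one tweet."""
    return ContentFeatures(
        topic="ml" if "model" in text else "other",  # placeholder topic classifier
        language="en",
        is_sensitive=False,
        is_spam=False,
        embedding=[len(text) / 100.0],               # placeholder embedder
    )
```

The point of the decoupling: one annotation call, and the same `ContentFeatures` object can be consumed by ranking, Trending, and Community Notes alike.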
Home Mixer is the orchestration layer. It pulls candidates from Thunder and Phoenix, calls the Heavy Ranker (transformer) for scoring, applies post-rank filtering and rules, interleaves ads, follow recommendations, and Community Notes annotations, and returns the final feed page. The For You tab you scroll is Home Mixer's output. The Following tab — pure chronological in-network — bypasses Phoenix and most of Home Mixer's rules but still uses Thunder for ingestion.
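Home Mixer's control flow is a pipeline: pool, score, filter, truncate. The sketch below assumes simple callables for the ranker and the rules; the signatures are hypothetical, and ad/Notes interleaving is omitted.

```python
def home_mixer(thunder_candidates, phoenix_candidates, rank, passes_rules, page_size=10):
    """Assemble one For You page from in-network and out-of-network candidates."""
    candidates = thunder_candidates + phoenix_candidates   # pool both sources
    scored = sorted(candidates, key=rank, reverse=True)    # Heavy Ranker scores
    feed = [c for c in scored if passes_rules(c)]          # post-rank filtering
    return feed[:page_size]                                # final page
```

Note the ordering: filtering happens after ranking, so a high-scoring candidate that trips a rule is dropped rather than demoted.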
Per candidate tweet, the Heavy Ranker outputs fifteen engagement probabilities. The combination function is learned, not hand-tuned, but the ordering of which probabilities matter most has held across both public releases.
The numerical weights quoted in this piece are from the March 2023 twitter/the-algorithm release. The May 2026 xai-org rewrite does not publish current values, so treat them as directional — the ranking order has held; the magnitudes have almost certainly shifted.
Two open-source releases · two eras
There are now two public releases of the X recommendation algorithm. The first — twitter/the-algorithm — landed on 31 March 2023, as a Scala/Python code dump including a MaskNet-based "Heavy Ranker" and explicit numerical weights for each engagement signal (the famous 13.5× for reply, 12× for profile click, 0.5× for like, −74× for mute/block, −369× for report). The second — xai-org/x-algorithm — landed on 15 May 2026, as a near-complete rewrite under xAI. The 2026 release introduces named pipeline components (Thunder, Phoenix, Grox, Home Mixer), replaces the legacy ranker with a Grok-based transformer, and explicitly eliminates "every single hand-engineered feature and most heuristics from the system." Critically, the 2026 release does not publish current production weights. The 13.5× et al. numbers are historical artifacts of the 2023 release and should be treated as directional, not gospel.
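The 2023-era scoring can be written out directly, because that release published its weights. The weights below are the values named above; the probability inputs are invented, and the real 2023 scorer had more terms than these five.

```python
# Published 2023 weights for a subset of engagement signals.
WEIGHTS_2023 = {
    "reply": 13.5,
    "profile_click": 12.0,
    "favorite": 0.5,
    "mute_or_block": -74.0,
    "report": -369.0,
}

def score_2023(probs):
    """2023-style rank score: a hand-tuned weighted sum of predicted
    engagement probabilities."""
    return sum(WEIGHTS_2023[action] * p for action, p in probs.items())
```

The asymmetry is the lesson: a 1% predicted report probability (−3.69) wipes out a 10% predicted reply probability (+1.35) nearly three times over.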
The fifteen prediction targets
The 2026 Heavy Ranker outputs fifteen probabilities per candidate tweet: ten positive social actions (favorite, reply, repost, quote, profile_click, video_view, photo_expand, share, dwell, follow_author), one neutral input (click), and four negative signals (not_interested, mute_author, block_author, report). The ranker's job is to predict each action's probability and combine them into a single rank. The combination function is no longer a hand-tuned weighted sum; it is learned. But the ordering of which signals matter most has stayed remarkably stable across the two releases. Replies and profile clicks remain the heaviest positive amplifiers. Mutes, blocks, and reports remain catastrophic. The doctrine survives the rewrite.
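A learned combiner is some trained function from the fifteen probabilities to one rank score. The sketch below uses a tiny one-hidden-layer network purely to show the shape of the computation; the weights passed in stand in for trained parameters, and nothing about the real combiner's architecture is public.

```python
import numpy as np

# The fifteen prediction targets, in a fixed order.
TARGETS = [
    "favorite", "reply", "repost", "quote", "profile_click", "video_view",
    "photo_expand", "share", "dwell", "follow_author",          # positive
    "click",                                                    # neutral
    "not_interested", "mute_author", "block_author", "report",  # negative
]

def learned_combine(probs, w1, b1, w2, b2):
    """Illustrative learned combiner: 15 probabilities -> one rank score."""
    x = np.array([probs[t] for t in TARGETS])  # order the 15 probabilities
    h = np.maximum(0.0, w1 @ x + b1)           # hidden layer (ReLU)
    return float(w2 @ h + b2)                  # scalar rank score
```

Because the hidden layer mixes signals nonlinearly, no single input maps cleanly to the output — which is exactly why the learned combiner resists single-signal gaming.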
The one mental model worth keeping
If you keep a single picture of the X algorithm in your head, keep this. Every post you publish is, within seconds, drawn into Thunder for your follower graph and (if it survives initial engagement) into Phoenix for out-of-network surfacing. Phoenix retrieves it, the transformer scores it on fifteen predicted reader actions, Home Mixer assembles the page. The actions are not equal. Replies and profile clicks compound. Bookmarks (now "share") and reposts carry. Dwell time and photo expand and video view buy a second look. Mutes, blocks, and reports kill the post outright. Every tactic in this playbook is a way of shifting those fifteen predicted probabilities in your favor — making the readers the ranker imagines do exactly what the ranker rewards.
What the 2026 rewrite changed about practice
Four practical shifts. (1) Grox is a separate service, so content-understanding now applies the same embedding model across For You, Trending, and Community Notes — a tweet's topic classification follows it consistently across surfaces. (2) The two-tower retrieval in Phoenix is more sensitive to embedding similarity than the older SimClusters mosaic, which means topical consistency in your posting history compounds harder than it used to. (3) The end-to-end transformer reads each tweet as a sequence, so first-line legibility, structure, and recognizable patterns matter more than they did in the bag-of-features 2023 model. (4) The Heavy Ranker's combination function is learned, so you cannot game it by maximizing one signal — you must shift a coherent cluster of probabilities together.
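Shift (2) has a simple geometric reading: under embedding retrieval, a focused posting history keeps each new post close to the author's overall "topic centroid," so similarity search surfaces it more reliably. A hedged sketch of that intuition as a metric (the function and its name are invented, not from the release):

```python
import numpy as np

def topical_consistency(history_embs):
    """Mean cosine similarity of each post embedding to the history centroid.
    1.0 = every post on the same topic; lower = a scattered history."""
    h = np.array(history_embs, dtype=float)
    h = h / np.linalg.norm(h, axis=1, keepdims=True)  # unit-normalize each post
    centroid = h.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    return float((h @ centroid).mean())
```

A history of identical-topic posts scores 1.0; split evenly between two orthogonal topics, it drops to about 0.71, and the author's centroid sits equally far from both.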