§ 01 · Algorithm anatomy

How the X recommendation algorithm actually works.

Synthesized from twitter/the-algorithm (March 2023) and xai-org/x-algorithm (May 2026). The pipeline is named; the prediction targets are explicit; the weights are partial.

Pipeline · the four named components
01 · Thunder
Thunder · in-network post store

Thunder is the real-time ingestion and serving layer for tweets from accounts you follow. Posts are streamed in via Kafka, indexed by author, and held in an in-memory candidate store with configurable retention. Thunder supplies roughly half of the candidates in every For You refresh — the same ~50/50 in-network split that has held since 2023, just with cleaner engineering. Anything you see from someone you follow flows through Thunder.

02 · Phoenix
Phoenix · out-of-network retrieval + ranking

Phoenix is the ML core. It runs in two stages. A two-tower retrieval model searches the global tweet corpus by embedding similarity and returns ~1,000–2,000 out-of-network candidate tweets per refresh. A transformer ranker then scores both Thunder candidates and Phoenix candidates against the same head. Phoenix replaces the 2023 SimClusters / TwHIN / Real-Graph mosaic with one learned model. Out-of-network distribution — the door to virality outside your follower graph — is mechanically Phoenix's output.
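The retrieval half of Phoenix can be sketched minimally, assuming the user and post embeddings are already computed by the two towers: score every post embedding against the user embedding and keep the top k. The function name and the plain dot-product similarity are stand-ins for whatever the production towers actually learn.

```python
import numpy as np

def retrieve(user_emb: np.ndarray, post_embs: np.ndarray, k: int = 2000) -> np.ndarray:
    """Return indices of the top-k posts by dot-product similarity to the user."""
    scores = post_embs @ user_emb                 # one score per post in the corpus
    k = min(k, len(scores))
    top = np.argpartition(scores, -k)[-k:]        # top-k indices, unordered
    return top[np.argsort(scores[top])[::-1]]     # sorted best-first
```

At corpus scale the exhaustive matrix product is replaced by approximate nearest-neighbor search, but the contract is the same: an embedding in, a few thousand candidate indices out.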

03 · Grox
Grox · content-understanding service

Grox is the auxiliary service that supplies content features: classifiers (topic, sensitivity, language, spam), embedders (the model that converts a tweet into the vector Phoenix uses), and task execution for content-side feature pipelines. Grox is the reason a tweet's *what* — its topic, its tone, its claim type — gets read by the ranker. The 2026 release shipped Grox as a standalone gRPC service, decoupling content understanding from ranking. A consequence: the same Grox embeddings power For You ranking, Trending detection, and Community Notes' "helpful vs. unhelpful" classifiers.
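For illustration only, the per-tweet feature record Grox supplies might plausibly look like the sketch below. Every field name here is a hypothetical assumption, not the real gRPC schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContentFeatures:
    """Hypothetical shape of a content-understanding record for one tweet."""
    topic: str                # topic classifier output, e.g. "machine-learning"
    language: str             # language classifier output, e.g. "en"
    is_sensitive: bool        # sensitivity classifier output
    spam_score: float         # spam classifier output, 0..1
    embedding: list = field(default_factory=list)  # the vector Phoenix retrieves on
```

The point of the decoupling is visible in the shape: one record, consumed unchanged by ranking, Trending, and Community Notes.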

04 · Home Mixer
Home Mixer · orchestration and feed assembly

Home Mixer is the orchestration layer. It pulls candidates from Thunder and Phoenix, calls the Heavy Ranker (transformer) for scoring, applies post-rank filtering and rules, interleaves ads, follow recommendations, and Community Notes annotations, and returns the final feed page. The For You tab you scroll is Home Mixer's output. The Following tab — pure chronological in-network — bypasses Phoenix and most of Home Mixer's rules but still uses Thunder for ingestion.
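The assembly loop described above can be sketched as one function, reduced to its skeleton (no ads, annotations, or pagination plumbing). Every dependency is passed in as a stand-in; the scoring and rule functions are placeholders for the Heavy Ranker and the post-rank filters.

```python
def build_for_you_page(thunder_candidates, phoenix_candidates,
                       score, passes_rules, page_size=30):
    """Merge in-network and out-of-network candidates, rank, filter, page."""
    candidates = list(thunder_candidates) + list(phoenix_candidates)
    ranked = sorted(candidates, key=score, reverse=True)   # Heavy Ranker stand-in
    return [c for c in ranked if passes_rules(c)][:page_size]
```

A usage sketch: `build_for_you_page(["t1"], ["p1"], score=my_scores.get, passes_rules=lambda c: True)`. The real mixer also interleaves ads, follow recommendations, and Community Notes into the ranked list before returning the page.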

The fifteen prediction targets

Per candidate tweet, the Heavy Ranker outputs fifteen engagement probabilities. The combination function is learned, not hand-tuned, but the ordering of which probabilities matter most has held across both open-source releases.

P(favorite) · positive
P(reply) · positive
P(repost) · positive
P(quote) · positive
P(click) · neutral
P(profile_click) · positive
P(video_view) · positive
P(photo_expand) · positive
P(share) · positive
P(dwell) · positive
P(follow_author) · positive
P(not_interested) · negative
P(block_author) · negative
P(mute_author) · negative
P(report) · negative
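The shape of the per-candidate computation can be sketched as follows: fifteen predicted probabilities in, one rank score out. The weight vector here is a stand-in for whatever the learned combiner computes; only the target names come from the release.

```python
import numpy as np

# The fifteen prediction targets of the Heavy Ranker.
TARGETS = ["favorite", "reply", "repost", "quote", "click", "profile_click",
           "video_view", "photo_expand", "share", "dwell", "follow_author",
           "not_interested", "block_author", "mute_author", "report"]

def rank_score(probs: dict, w: np.ndarray) -> float:
    """Combine the fifteen predicted action probabilities into one score.
    w stands in for the learned combination function."""
    p = np.array([probs.get(t, 0.0) for t in TARGETS])
    return float(w @ p)
```

Missing targets default to probability zero, so a candidate is ranked purely on the actions the model expects from this reader.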
Signal weights · 2023 historical values

These weights are from the March 2023 twitter/the-algorithm release. The May 2026 xai-org rewrite does not publish current values, so treat these as directional — the ranking order has held; the magnitudes have almost certainly shifted.

27× · Reply (with conversation) · positive
The single largest amplifier. A reply that itself attracts engagement compounds the parent's reach. The 2023 open-source weight was ~13.5×; conversation-bearing replies appear to be weighted higher under the 2024–2026 model.

12× · Profile click · positive
A reader leaving the timeline to inspect your profile is one of the strongest signals to the Heavy Ranker. Your bio plus pinned post is, mechanically, a conversion page.

1.5× · Quote tweet · positive
Slightly more weight than a plain retweet because it forces the QT author to commit a comment. Adds a fresh impression surface in their own follower graph.

1× · Retweet · positive
Open-source weight: 1×. Still the cleanest "this is worth carrying outside my graph" signal. Its algorithmic effect has decreased mildly as out-of-network candidate sourcing has improved.

1× · Bookmark · positive
Added as a ranking signal in 2023–24. The user does not need to leave the timeline. Bookmarks are private by default but visible to the algorithm, and are treated as strong "save" intent.

0.5× · Like · positive
Open-source weight: 0.5×. The cheapest engagement action and therefore the weakest signal. A post that gets only likes (no replies, no RTs) is read as "agreeable but not propagating."

0.005× · Video view ≥ 50% · media
Per-impression weight is tiny — but media-bearing tweets get a separate distribution boost that does not show up in this multiplier. Net effect: video > image > text on equivalent engagement.

0× · Image attached · media
Not a multiplier directly. An image dramatically reduces scroll-past rate in the For You feed, which lifts the impression-to-engagement ratio and feeds the ranker secondarily.

0× · Time spent on tweet (>2 s) · neutral
Dwell time was added as a signal in 2023. Long-form and thread tweets where users actually pause to read are favored. Hooks that delay the scroll are mechanically rewarded.

0× · Outbound link click · neutral
A click on a hyperlink is *not* a meaningful positive signal — and the presence of an outbound link is itself a mild deboost (the algorithm penalizes attention-leaving content). Hence the "link in first reply" trick.

−74× · Mute author · negative
Open-source negative weight: −74×. One mute roughly cancels three median posts of distribution. The signal extends to similar content from the same author.

−74× · Block author · negative
Same penalty as a mute. It also tells the ranker never to serve this account to this user again, and the mute/block count itself feeds a reputation heuristic.

−369× · Report · negative
Open-source negative weight: −369×. By far the most punishing engagement. A small number of reports can functionally remove a post from the For You feed even before human review.

−25× · "Not interested in this post" · negative
Lighter than a mute, but still strong. The signal is per-post plus a soft hint against the author's broader topic cluster.
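A toy scoring pass with the published March 2023 weights shows how a hand-tuned weighted sum behaves; the numbers come from the twitter/the-algorithm release, and the 2026 model no longer computes anything this simple.

```python
# Historical weights from the March 2023 open-source release (directional only).
WEIGHTS_2023 = {
    "reply": 13.5, "profile_click": 12.0, "quote": 1.5, "retweet": 1.0,
    "bookmark": 1.0, "like": 0.5, "video_view_50": 0.005,
    "mute_author": -74.0, "block_author": -74.0, "report": -369.0,
    "not_interested": -25.0,
}

def score_2023(probs: dict) -> float:
    """Expected-engagement score: sum of weight x predicted probability.
    Actions absent from `probs` contribute nothing."""
    return sum(w * probs.get(action, 0.0) for action, w in WEIGHTS_2023.items())
```

The asymmetry is the lesson: a 1% predicted chance of a report (−3.69) wipes out a 20% chance of a like (+0.10) thirty-six times over.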
Doctrine
§ 01

Two open-source releases · two eras

There are now two public releases of the X recommendation algorithm. The first — twitter/the-algorithm — landed on 31 March 2023 as a Scala/Python code dump including a MaskNet-based "Heavy Ranker" and explicit numerical weights for each engagement signal (the famous 13.5× for reply, 12× for profile click, 0.5× for like, −74× for mute/block, −369× for report). The second — xai-org/x-algorithm — landed on 15 May 2026 as a near-complete rewrite under xAI. The 2026 release introduces named pipeline components (Thunder, Phoenix, Grox, Home Mixer), replaces the legacy ranker with a Grok-based transformer, and explicitly eliminates "every single hand-engineered feature and most heuristics from the system." Critically, the 2026 release does not publish current production weights. The 13.5× and its companion numbers are historical artifacts of the 2023 release and should be treated as directional, not gospel.

§ 02

The fifteen prediction targets

The 2026 Heavy Ranker outputs fifteen probabilities per candidate tweet: ten positive social actions (favorite, reply, repost, quote, profile_click, video_view, photo_expand, share, dwell, follow_author), one neutral input (click), and four negative signals (not_interested, mute_author, block_author, report). The ranker's job is to predict each action's probability and combine them into a single rank. The combination function is no longer a hand-tuned weighted sum; it is learned. But the ordering of which signals matter most has stayed remarkably stable across the two releases. Replies and profile clicks remain the heaviest positive amplifiers. Mutes, blocks, and reports remain catastrophic. The doctrine survives the rewrite.

§ 03

The one mental model worth keeping

If you keep a single picture of the X algorithm in your head, keep this. Every post you publish is, within seconds, drawn into Thunder for your follower graph and (if it survives initial engagement) into Phoenix for out-of-network surfacing. Phoenix retrieves it, the transformer scores it on fifteen predicted reader actions, Home Mixer assembles the page. The actions are not equal. Replies and profile clicks compound. Bookmarks (now "share") and reposts carry. Dwell time and photo expand and video view buy a second look. Mutes, blocks, and reports kill the post outright. Every tactic in this playbook is a way of shifting those fifteen predicted probabilities in your favor — making the readers the ranker imagines do exactly what the ranker rewards.

§ 04

What the 2026 rewrite changed about practice

Four practical shifts. (1) Grox is a separate service, so content-understanding now applies the same embedding model across For You, Trending, and Community Notes — a tweet's topic classification follows it consistently across surfaces. (2) The two-tower retrieval in Phoenix is more sensitive to embedding similarity than the older SimClusters mosaic, which means topical consistency in your posting history compounds harder than it used to. (3) The end-to-end transformer reads each tweet as a sequence, so first-line legibility, structure, and recognizable patterns matter more than they did in the bag-of-features 2023 model. (4) The Heavy Ranker's combination function is learned, so you cannot game it by maximizing one signal — you must shift a coherent cluster of probabilities together.