At first glance, it sounds like a fun conference anecdote. A Linux scheduler designed for Valve’s Steam Deck, a handheld gaming device, is now running inside Meta’s massive server fleet. One was built for games and tight latency budgets. The other powers one of the world’s largest hyperscalers. These two worlds are not supposed to overlap.
And yet, they do.
At this year’s Linux Plumbers Conference in Tokyo, Meta engineers revealed that SCX-LAVD — a scheduler originally created to meet the real-time demands of the Steam Deck — is being explored as a default scheduler for Meta’s servers. Not as a niche experiment. Not as a special-case optimization. But as a broadly applicable scheduling strategy across diverse hardware and workloads.
Once you look past the novelty, this turns out to be a surprisingly important signal about where Linux scheduling is heading.
A scheduler born from games, not servers
SCX-LAVD stands for Latency-criticality Aware Virtual Deadline. It was developed by Igalia, an open-source consultancy, under contract with Valve, with a very specific goal in mind: make Linux scheduling behave better for latency-sensitive workloads on constrained hardware.
The Steam Deck is not a typical PC. It has limited power, limited thermal headroom, and workloads — especially games — that are extremely sensitive to jitter and inconsistent frame timing. For this environment, raw throughput matters less than predictability. A single scheduling hiccup can be visible to the player.
SCX-LAVD was designed to reflect that reality. Instead of focusing purely on abstract fairness, it explicitly models how critical a task's latency is and schedules accordingly. In practice, it has delivered performance that is comparable to, and in some cases better than, Linux's newer EEVDF (Earliest Eligible Virtual Deadline First) scheduler for these workloads.
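The core idea can be sketched in a few lines. This is a deliberately simplified toy, not LAVD's actual algorithm: each runnable task is assigned a virtual deadline that shrinks as its latency criticality grows, and the scheduler always runs the task with the earliest deadline. The `latency_criticality` weight and slice values here are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    vdeadline: float                      # earlier deadline => runs sooner
    name: str = field(compare=False)

def enqueue(queue, name, now, base_slice, latency_criticality):
    # Toy model: a more latency-critical task gets a proportionally
    # earlier virtual deadline, so it wins ties against batch work.
    vdeadline = now + base_slice / latency_criticality
    heapq.heappush(queue, Task(vdeadline, name))

def pick_next(queue):
    # Run the task with the earliest virtual deadline.
    return heapq.heappop(queue).name

queue = []
enqueue(queue, "batch-job", now=0.0, base_slice=10.0, latency_criticality=1.0)
enqueue(queue, "game-frame", now=0.0, base_slice=10.0, latency_criticality=4.0)
print(pick_next(queue))  # game-frame runs first despite equal arrival time
```

Both tasks arrive at the same instant, but the frame-rendering task's higher criticality gives it the earlier deadline, which is exactly the jitter-avoiding behavior a handheld needs.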
That success led to adoption beyond the Steam Deck. Gaming-focused Linux distributions like CachyOS Handheld Edition and Bazzite began experimenting with it. Still, this all felt firmly rooted in the desktop and gaming world.
Until Meta got involved.
Why would Meta care about a Steam Deck scheduler?
The title of Meta’s Linux Plumbers Conference talk makes the question explicit: “How do we make a Steam Deck scheduler work on large servers?”
The answer, it turns out, is that very little had to change.
Inside Meta, engineers explored SCX-LAVD as a potential fleet-wide default scheduler for servers that do not require highly specialized scheduling logic. What they found was that the scheduler adapted remarkably well to large machines with growing CPU counts, complex cache hierarchies, and diverse workloads.
In particular, LAVD showed strong behavior when it came to load balancing across CCX (core complex) and last-level cache boundaries, while remaining stable across different CPU and memory configurations. Just as importantly, its behavior was predictable and understandable — a quality that matters deeply at hyperscale.
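To make the cache-boundary point concrete, here is an illustrative sketch, not Meta's implementation: CPUs are grouped by last-level-cache (LLC) domain, and a task is only migrated across an LLC boundary when the load imbalance exceeds a threshold, preserving cache locality otherwise. The domain layout and `imbalance_threshold` value are assumptions for the example.

```python
def pick_cpu(load_per_cpu, llc_domains, prev_cpu, imbalance_threshold=2.0):
    # llc_domains: list of lists of CPU ids sharing a last-level cache.
    home = next(d for d in llc_domains if prev_cpu in d)

    def domain_load(d):
        return sum(load_per_cpu[c] for c in d) / len(d)

    best = min(llc_domains, key=domain_load)
    # Stay in the home LLC unless another domain is much less loaded;
    # crossing an LLC boundary throws away warm cache state.
    if domain_load(home) - domain_load(best) > imbalance_threshold:
        target = best
    else:
        target = home
    return min(target, key=lambda c: load_per_cpu[c])

# CPUs 0-3 share one LLC, CPUs 4-7 another; CPU 1's domain is overloaded,
# so the task is worth migrating to the idle domain.
load = [9, 9, 8, 9, 1, 0, 1, 1]
print(pick_cpu(load, [[0, 1, 2, 3], [4, 5, 6, 7]], prev_cpu=1))  # 5
```

When the domains are roughly balanced, the same function keeps the task on a CPU in its home LLC, which is the locality-preserving behavior the paragraph above describes.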
Meta now refers to this sched_ext-based design as “Meta’s New Default Scheduler,” at least for a broad class of internal use cases.
This is not about squeezing out the last percent of performance. It is about consistency, maintainability, and having a scheduler that behaves well across many scenarios without constant tuning.
The real story is sched_ext
SCX-LAVD is built on top of sched_ext, Linux’s extensible scheduling framework. That detail is easy to miss, but it is arguably the most important part of the story.
sched_ext allows schedulers to be developed and deployed with far more flexibility than traditional in-kernel scheduling classes. Instead of treating scheduling policy as a near-untouchable part of the kernel, it becomes something closer to an engineering component: testable, replaceable, and adaptable.
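As a conceptual analogy only (real sched_ext policies are BPF programs loaded into the kernel, not Python), the "policy as a replaceable component" idea looks like this: a fixed interface between the dispatcher and the policy, with interchangeable policies that can be tested independently. The `Policy` interface and both policies here are invented for illustration.

```python
from collections import deque
from typing import Protocol

class Policy(Protocol):
    """The fixed contract the 'kernel' side drives, whatever the policy."""
    def enqueue(self, task: str, weight: float) -> None: ...
    def pick_next(self) -> str: ...

class Fifo:
    # Simplest possible policy: run tasks in arrival order.
    def __init__(self):
        self.q = deque()
    def enqueue(self, task, weight):
        self.q.append(task)
    def pick_next(self):
        return self.q.popleft()

class WeightedDeadline:
    # Higher weight => earlier deadline => runs first.
    def __init__(self):
        self.q = []
    def enqueue(self, task, weight):
        self.q.append((1.0 / weight, task))
    def pick_next(self):
        self.q.sort()
        return self.q.pop(0)[1]

def run(policy: Policy) -> str:
    # The driver code never changes; only the policy is swapped.
    policy.enqueue("background", weight=1.0)
    policy.enqueue("interactive", weight=4.0)
    return policy.pick_next()

print(run(Fifo()), run(WeightedDeadline()))  # background interactive
```

The point is the shape, not the policies: because the driver only knows the interface, a policy tuned for one environment can be evaluated in another without touching the core, which is the flexibility the article attributes to sched_ext.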
This is what made the journey from Steam Deck to Meta’s servers even possible. A scheduler optimized for one environment could be evaluated, adjusted, and scaled up without rewriting the kernel’s core logic or committing to a one-size-fits-all design.
In that sense, Meta is not just adopting a scheduler. It is validating an entirely new way of thinking about Linux scheduling.
Latency matters at every scale
There is also a deeper lesson hiding in this story.
We often assume that servers care about throughput while consumer devices care about latency. In reality, large-scale systems are full of latency-sensitive tasks: request handling, coordination services, tail-latency-critical workloads, and internal control planes that must respond quickly and consistently.
The same principles that make games feel smooth on a handheld device can make services behave more reliably in a data center. SCX-LAVD works in both places not because those systems are similar, but because latency awareness is a universal concern.
What changes is the scale, not the nature of the problem.
From niche optimization to default behavior
Perhaps the most telling detail is how Meta positions this scheduler internally. SCX-LAVD is not being framed as an exotic optimization. It is being explored as a default — something good enough for most workloads, on most machines, most of the time.
That mindset matters. At hyperscale, the value of a scheduler that works well everywhere often outweighs the benefits of one that is perfect somewhere. Predictability, explainability, and operational simplicity are first-class features.
In that light, the journey of SCX-LAVD starts to make sense. It was never really about games. It was about making scheduling decisions align more closely with real workload needs, instead of abstract models.
A quiet shift in Linux scheduling philosophy
A decade ago, the idea that a scheduler designed for a gaming handheld would influence server-side Linux scheduling would have sounded absurd. Today, it feels almost inevitable.
Linux is slowly moving away from the idea of a single, universal scheduling philosophy. With frameworks like sched_ext, it is becoming a platform for experimentation, where different models can coexist and prove themselves in production.
SCX-LAVD’s path — from the Steam Deck, to gaming distributions, to Meta’s servers — is a powerful example of that shift. It shows that good ideas in systems design can travel surprisingly far, especially when they are grounded in real workload behavior.
Sometimes, the future of the data center really does start in a game.