The Bandwidth Problem Nobody Talks About

Remote desktop has a dirty secret: it's a bandwidth hog.

Streaming a 1080p desktop at 60fps requires roughly 8-15 Mbps with modern codecs. That's fine on fiber. It's fine on a strong WiFi connection at home. But step onto hotel WiFi, tether from your phone, or try to work from a coffee shop in a busy city — and everything falls apart.

The current solution is depressingly simple: when bandwidth drops, reduce quality. Lower the resolution. Drop the framerate. Accept blurry text and choppy video until conditions improve. Every remote desktop app does this. It's been the only option for twenty years.

What if there's a better way?

What if, instead of sending a crisp 1080p stream and degrading it when conditions get rough, you could send a small 720p stream all the time — and make it look like 1080p on the other end?

That's AI super-resolution. And it's not science fiction. Gaming has been doing it for years.

The Gaming Industry Already Solved This

In 2018, NVIDIA introduced DLSS — Deep Learning Super Sampling. The concept was almost too good to believe: render a game at lower resolution, then use a neural network to upscale it to look like native resolution. Less work for the GPU, same visual quality.

Gamers were skeptical. How could an AI-generated image look as good as natively rendered pixels? But the results spoke for themselves. With DLSS 4 in 2025, NVIDIA moved to transformer models that produce images many players actually prefer to native rendering — sharper edges, better temporal stability, fewer artifacts.

AMD followed with FSR. Apple introduced MetalFX. Intel shipped XeSS. The entire industry converged on the same insight:

"The smartest pixel is the one you never have to render."

It took the gaming industry about five years to go from "this is a gimmick" to "this is mandatory." Remote desktop hasn't even started.

Applying Super-Resolution to Streaming

The translation from gaming to remote desktop is surprisingly clean. In gaming, you render at low resolution and upscale. In streaming, you encode at low resolution and upscale.

Here's what the pipeline looks like:

Traditional Pipeline:
Capture 1080p → Encode 1080p (~12 Mbps) → Decode 1080p → Display

AI Super-Resolution Pipeline:
Capture 1080p → Encode 720p (~5 Mbps) → Decode 720p → AI Upscale → Display 1080p

Same visual quality. Roughly half the bandwidth. And because you're encoding a smaller image, the host computer works less too — lower CPU usage, less heat, better battery life on laptops.

- Encoded resolution: 720p
- Displayed quality: 1080p
- Bandwidth savings: ~55%
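
That ~55% figure falls out of simple pixel arithmetic. As a rough sanity check, assume bitrate scales approximately with pixel count at constant quality (a simplification; real codecs don't scale perfectly linearly):

```swift
import Foundation

// Back-of-the-envelope estimate, assuming bitrate scales roughly with
// pixel count at constant perceptual quality (a simplification).
let pixels1080p = 1920.0 * 1080.0   // 2,073,600 pixels
let pixels720p  = 1280.0 * 720.0    //   921,600 pixels

let savings = 1.0 - pixels720p / pixels1080p
print(String(format: "Estimated bandwidth savings: %.0f%%", savings * 100))
// Prints: Estimated bandwidth savings: 56%
```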

On-Device AI: Private by Design

Here's where it gets really interesting — and where we differ from how you might expect AI to work.

The super-resolution model runs entirely on your device. Not in the cloud. Not on some server. On the phone or laptop in your hands.

Modern devices are absurdly powerful for on-device AI. Apple's Neural Engine handles 15.8 trillion operations per second on M2-generation chips, and newer generations push higher still. Qualcomm's Hexagon NPU on recent Snapdragon chips is in the same league. These dedicated AI processors sit idle most of the time — we're putting them to work.

The inference happens in the rendering pipeline, between decode and display. The model sees a 720p frame and outputs a 1080p frame. Total added latency? Under 5 milliseconds. You'll never notice it.
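
Here's roughly what that step looks like on Apple platforms. This is a minimal sketch, not our production code: the model, its URL, and its input/output feature names ("frame", "upscaled") are hypothetical stand-ins.

```swift
import CoreML
import CoreVideo

// Minimal sketch of per-frame inference between decode and display.
// The compiled Core ML model is assumed to map a 1280x720 frame to a
// 1920x1080 one; the feature names below are illustrative.
final class FrameUpscaler {
    private let model: MLModel

    init(modelURL: URL) throws {
        let config = MLModelConfiguration()
        config.computeUnits = .all  // let Core ML prefer the Neural Engine
        model = try MLModel(contentsOf: modelURL, configuration: config)
    }

    /// Takes a decoded 720p frame, returns an upscaled 1080p frame.
    func upscale(_ frame: CVPixelBuffer) throws -> CVPixelBuffer? {
        let input = try MLDictionaryFeatureProvider(
            dictionary: ["frame": MLFeatureValue(pixelBuffer: frame)])
        let output = try model.prediction(from: input)
        return output.featureValue(for: "upscaled")?.imageBufferValue
    }
}
```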

And because everything runs locally:

- Your screen content never leaves your device for AI processing.
- There's no cloud round-trip adding latency or cost.
- There's nothing for us to collect, log, or store.

This aligns perfectly with Remio's privacy-first philosophy. We don't want your data. We don't even want to see your screen content. On-device AI lets us deliver cutting-edge features while knowing literally nothing about what you're doing.

Why This Only Works on Native Apps

Here's the part that ties everything together — and why we wrote an entire post about going native.

To run a neural network in the rendering pipeline without adding visible latency, you need:

  1. Access to the Neural Engine / NPU — This requires native APIs. CoreML on Apple. NNAPI on Android. There is no web API for the Neural Engine. Electron apps can't access it.
  2. GPU pipeline integration — The upscaled frame needs to go directly to the display without extra copies; see the sketch after this list. Metal and Vulkan let you do this. WebGL does not give you this level of control.
  3. Sub-5ms timing — Every extra millisecond matters in a real-time streaming pipeline. JavaScript's garbage collector alone can introduce unpredictable pauses. Native code gives deterministic timing.
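
To make point 2 concrete, here's a sketch of the zero-copy bridge on Apple platforms: wrapping a decoded CVPixelBuffer as a Metal texture via CVMetalTextureCache, so no frame data is duplicated. The ZeroCopyBridge name is ours, and it assumes the decoder allocates Metal-compatible BGRA buffers; other pixel formats would need different parameters.

```swift
import CoreVideo
import Metal

// Sketch: expose a decoded video frame to the GPU without copying it.
// Assumes the pixel buffer was allocated with Metal compatibility
// (kCVPixelBufferMetalCompatibilityKey) and uses BGRA.
final class ZeroCopyBridge {
    private let textureCache: CVMetalTextureCache

    init(device: MTLDevice) {
        var cache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache)
        textureCache = cache!  // sketch-level error handling
    }

    /// Wraps the buffer's backing memory as a Metal texture (no copy).
    func texture(from buffer: CVPixelBuffer) -> MTLTexture? {
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, textureCache, buffer, nil,
            .bgra8Unorm,
            CVPixelBufferGetWidth(buffer),
            CVPixelBufferGetHeight(buffer),
            0, &cvTexture)
        return cvTexture.flatMap(CVMetalTextureGetTexture)
    }
}
```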

An Electron app would need to: decode the video in Chromium's media pipeline, extract the frame to JavaScript, somehow invoke the Neural Engine (impossible without native bindings), get the upscaled frame back, and display it through Chromium's compositor. Each step adds latency and complexity. The total overhead would likely be 15-25ms — too slow for real-time use.
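
To see how that overhead accumulates, here's an illustrative tally. The per-step costs below are hypothetical round numbers, not measurements; the point is simply that five mandatory hops sum into that 15-25ms range.

```swift
// Illustrative (not measured) per-step costs of the Electron path, in ms.
let electronSteps: [(step: String, ms: Double)] = [
    ("decode in Chromium's media pipeline", 2),
    ("copy the frame out to JavaScript", 4),
    ("bridge to native code for NPU inference", 5),
    ("copy the upscaled frame back", 4),
    ("composite through Chromium", 5),
]
let totalMs = electronSteps.reduce(0) { $0 + $1.ms }  // 20 ms in this example
```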

"Super-resolution in the streaming pipeline isn't just an AI problem. It's a systems engineering problem. And systems engineering demands native code."

This is why, two years into building Remio, we have yet to see an Electron-based remote desktop ship AI upscaling. It's not that they don't want to. The architecture won't let them.

The Technical Challenges (And How We're Solving Them)

We won't pretend this is easy. There are real challenges:

Model latency. The model must run in under 5ms per frame at 60fps. That leaves zero room for bloated architectures. We're using optimized convolutional models (not transformers — they're too slow for per-frame inference) specifically tuned for the Neural Engine's execution units.
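
The only way to trust a budget like that is to measure it continuously. Here's a sketch of the kind of check we'd run, assuming an upscale step like the hypothetical FrameUpscaler above; the 600-frame window and p99 threshold are illustrative choices.

```swift
import QuartzCore

// Sketch: time repeated inferences and require the 99th-percentile
// latency to stay under the 5 ms per-frame budget.
func latencyBudgetHolds(frames: Int = 600,
                        upscale: () throws -> Void) rethrows -> Bool {
    var samplesMs: [Double] = []
    for _ in 0..<frames {
        let start = CACurrentMediaTime()
        try upscale()
        samplesMs.append((CACurrentMediaTime() - start) * 1000)
    }
    samplesMs.sort()
    let p99 = samplesMs[Int(Double(frames) * 0.99)]
    return p99 < 5.0
}
```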

Content diversity. Remote desktop content is wildly different from gaming content. Text, spreadsheets, code editors, video playback, design tools — the model needs to handle all of it. We're training on datasets that specifically include desktop UI elements, not just natural images.

Temporal stability. Frame-by-frame upscaling can cause flickering. Gaming solved this with temporal accumulation — using information from previous frames. We're exploring similar techniques, feeding motion vectors from the video codec into the upscaling model.
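
At its core, temporal accumulation is an exponential blend between the current frame and the previous output, re-projected by motion. Here's the per-pixel idea in miniature; a real implementation runs this in a GPU compute shader with motion-vector warping, and the alpha value is illustrative.

```swift
// Conceptual per-pixel temporal accumulation (single channel).
// `history` is last frame's output, re-projected ("warped") to this pixel
// using the codec's motion vector; `alpha` trades stability for
// responsiveness. Illustrative only; production code runs on the GPU.
func temporallyAccumulate(current: Float,
                          history: Float,
                          alpha: Float = 0.9) -> Float {
    // Exponential moving average: suppresses flicker, bounds ghosting.
    return alpha * history + (1 - alpha) * current
}
```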

Power efficiency. Running a neural network 60 times per second on a phone sounds expensive. But the Neural Engine is designed for exactly this — it's more power-efficient than the GPU for matrix operations. Our early tests show less than 5% additional battery drain.

The Honest Challenge: Why This Is Harder Than Gaming

We've talked about DLSS and gaming super-resolution as inspiration — and they are. But intellectual honesty demands we explain why remote desktop upscaling is a fundamentally harder problem.

Gaming DLSS has unfair advantages. Game engines provide the AI with rich data that doesn't exist in our world:

- Per-pixel motion vectors, straight from the engine
- Depth buffers describing scene geometry
- Sub-pixel camera jitter designed for temporal sampling
- Pristine, uncompressed input frames with no codec artifacts

In other words, gaming super-resolution is a guided reconstruction with rich auxiliary data. Remote desktop upscaling is blind super-resolution from lossy, compressed video — a much harder starting point.

This doesn't mean the approach is wrong. Even modest improvements in visual quality at lower bitrates deliver real bandwidth savings. But we want to be transparent: the quality ceiling for blind upscaling is lower than what DLSS achieves with full engine integration. We're building something genuinely useful, not claiming parity with a different technology.

What This Means for You

When AI super-resolution ships in Remio, you won't need to think about it. There's no toggle, no setting, no "AI mode." The app will automatically:

- measure how much bandwidth your connection can actually sustain,
- choose an encode resolution that fits comfortably within it, and
- upscale on your device whenever the stream was encoded below native resolution.

On fast connections, it might not need to upscale at all. On slow connections, it'll be the difference between "unusable" and "feels like I'm sitting in front of my computer."
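
Under the hood, that behavior reduces to a small policy. A sketch, with made-up thresholds (the real ones would come from measurement):

```swift
// Hypothetical adaptive policy: thresholds are illustrative, not final.
enum EncodeResolution { case p1080, p720, p540 }

func choosePipeline(measuredMbps: Double) -> (encode: EncodeResolution, upscale: Bool) {
    switch measuredMbps {
    case 10...:  return (.p1080, false)  // headroom to spare: send native
    case 4..<10: return (.p720, true)    // encode small, upscale on-device
    default:     return (.p540, true)    // very constrained links
    }
}
```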

That's the goal. Not AI you have to think about. AI that just makes everything work better, silently, privately, on the device you already own.

We're building it right now. And we're building it alongside a whole suite of AI features that will make Remio the most intelligent remote desktop app on the planet.

Stream less. See more. That's the future.
