A field log of getting NVIDIA Isaac Sim 5.1.0 running for WebRTC streaming on a fresh Ubuntu 25.10 box with two RTX PRO 6000 Blackwell GPUs. Some of these problems aren't in any docs; the fixes are.

TL;DR
- Driver `595.x` ships in Ubuntu 25.10 and silently breaks Isaac Sim on Blackwell — crashes inside `librtx.scenedb.plugin` ~50 ms after "app ready". Fix: downgrade to `580.x`.
- You cannot downgrade through Ubuntu's apt repo on 25.10 — every `nvidia-driver-{570..590}` package is a transitional alias to 595. Fix: add NVIDIA's CUDA repo (`ubuntu2404` variant) with apt pinning so it provides a real 580 driver.
- The official desktop WebRTC client has no port input. For multi-instance you bind each instance to a different host IP alias.
- Multiple concurrent interactive WebRTC sessions on one Isaac Sim host are not supported in 5.1.0 — landing in 6.0.
Hardware / OS Context
| Component | Value |
| --- | --- |
| CPU | AMD Ryzen Threadripper PRO 9995WX (96C/192T) |
| GPU | 2× NVIDIA RTX PRO 6000 Blackwell Workstation Edition (sm_120, 97 GB) |
| RAM | 125 GB |
| OS | Ubuntu 25.10 (Questing), kernel 6.17.0-23 |
| Docker | 29.4.3 |
| nvidia-container-toolkit | 1.19.0 |
The Happy-Path Install (before any of this broke)
Isaac Sim runs as an NVIDIA-published Docker image from NGC. There's no native Linux installer for the streaming version. The full setup before our troubleshooting is the following four steps; they all "just work" until you actually try to render.
1. NVIDIA driver
Pick any working driver for the GPU. On Ubuntu 25.10 with Blackwell, the default
ubuntu-drivers autoinstall picks 595, which is what dropped us into this debugging
spiral — but on the happy path you only realise that later. We document the correct
choice (NVIDIA CUDA repo's 580.x) in Challenge 2 below; assume that's what you want.
```shell
nvidia-smi   # confirm driver loaded and GPU(s) visible
```
2. Docker Engine
Use Docker Engine from the official Docker apt repo (not the snap, not Docker Desktop —
both have known issues with --gpus):
Note: Docker's `noble` (24.04) repo works fine on Ubuntu 25.10. There isn't a `questing` line yet, and the engine itself is distro-agnostic.
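The install itself is Docker's standard apt-repo procedure; roughly (keyring path and package names follow docs.docker.com, with the suite hard-coded to `noble` as noted above):

```shell
# Standard Docker apt-repo install per docs.docker.com; the suite is pinned
# to "noble" because Ubuntu 25.10 has no "questing" line yet.
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg |
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu noble stable" |
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin
```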
3. NVIDIA Container Toolkit
This is what teaches Docker how to inject the host's NVIDIA driver into a container so
`--gpus all` works. Install from NVIDIA's repo (not Ubuntu's, which is often stale):
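A sketch of the install, following NVIDIA's documented apt steps (repo URL and `nvidia-ctk` invocation per NVIDIA's container-toolkit install guide):

```shell
# Add NVIDIA's container-toolkit apt repo (per NVIDIA's install guide)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey |
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -sL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list |
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' |
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

# Wire the runtime into Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Smoke test: should print the host's nvidia-smi output from inside a container
docker run --rm --gpus all ubuntu nvidia-smi
```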
If this last command lists your GPU(s), the host is ready for Isaac Sim.
4. Pull and run Isaac Sim 5.1.0
The image is gated on EULA acceptance and lives at nvcr.io/nvidia/isaac-sim. No NGC
login is required for the public tags:
```shell
docker pull nvcr.io/nvidia/isaac-sim:5.1.0   # ~23 GB, takes a while
```
Provision per-instance cache directories (chowned to uid 1234, the in-container
isaac-sim user — getting this wrong means Kit can't persist its caches and re-downloads
the extension registry on every run):
```shell
mkdir -p ~/docker/isaac-sim
docker run --rm -v ~/docker/isaac-sim:/data alpine sh -c "
  mkdir -p /data/cache/{kit,ov,pip,glcache,computecache} \
           /data/{logs,data,documents}
  chown -R 1234:1234 /data
"
```
First run, headless WebRTC streaming mode:
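A representative launch command — the cache volume mounts here are illustrative (map each directory provisioned above to its in-container counterpart), and the EULA/privacy env vars follow NVIDIA's published container instructions:

```shell
# Representative first launch. Volume mount targets are a sketch, not an
# exhaustive list — map each provisioned cache dir into the container.
docker run -d --name isaac-sim \
  --gpus all \
  --network=host \
  --shm-size=16g \
  -e "ACCEPT_EULA=Y" -e "PRIVACY_CONSENT=Y" \
  -v ~/docker/isaac-sim/cache/kit:/isaac-sim/kit/cache:rw \
  -v ~/docker/isaac-sim/logs:/isaac-sim/kit/logs:rw \
  nvcr.io/nvidia/isaac-sim:5.1.0 \
  ./runheadless.sh -v
```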
Notes on a few of these flags that matter:
- `--gpus all` — exposes both Blackwells. Use `--gpus device=N` to pin to one.
- `--network=host` — required for the streaming server to bind LAN-reachable ports without per-port `-p` mappings. Has consequences (see Challenge 4).
- `--shm-size=16g` — Kit's renderer needs more `/dev/shm` than Docker's 64 MB default, or you get cryptic Vulkan crashes.
- `runheadless.sh` — the 5.1.0 entry point. (Older docs reference `runheadless.webrtc.sh`; that script no longer exists.)
Watch the log until you see:
```
Streaming server started.
[XX.Xs] Isaac Sim Full Streaming App is loaded.
```
…then connect from another machine using NVIDIA's Isaac Sim WebRTC Streaming Client (Windows / Mac / Linux). Enter the host's LAN IP and click Connect.
That's the install. Now for everything that breaks on this hardware.
Challenge 1 — Segfault inside librtx.scenedb.plugin
`docker run nvcr.io/nvidia/isaac-sim:5.1.0 ./runheadless.sh -v` reaches app ready, emits `Streaming server started.`, then crashes within 50 ms:
```
[Fatal] [carb.crashreporter-breakpad.plugin] Crash detected in pid 394 thread 590
000: libc.so.6!__sigaction+0x50
001: librtx.scenedb.plugin.so!...+0x123ef
...
006: libcarb.scenerenderer-rtx.plugin.so!...
Segmentation fault (core dumped)
```
The first render command (`SetLightingMenuModeCommand`) — i.e. setting up the empty stage's default lights — kills the renderer.
Root cause: a regression in NVIDIA's 595.x Linux driver branch on Blackwell (sm_120). An NVIDIA forum thread has the staff confirmation, with an identical trace on RTX 5060 Ti and RTX 5070 Ti. It has nothing to do with the Isaac Sim version, IOMMU, or container config.
What does NOT fix it: disabling Aftermath, single-GPU pinning (`--gpus device=0`), disabling multi-GPU, scene-DB tweaks, fresh shader/asset caches. We tried all of them. Both the 4.5.0 and 5.1.0 images crash the same way (4.5.0 additionally lacks Blackwell support in iray).
Fix: downgrade to driver 580.x — see Challenge 2.
Challenge 2 — Ubuntu 25.10 has no real pre-595 driver
Naive plan: `apt purge nvidia-driver-595-open` then `apt install nvidia-driver-590-open`.
What actually happens:
```
nvidia-driver-590-open : Depends: nvidia-driver-595-open but it is not installable
libnvidia-compute-590 : Depends: libnvidia-compute-595 but it is not installable
nvidia-dkms-590-open : Depends: nvidia-dkms-595-open
xserver-xorg-video-nvidia-590 : Depends: xserver-xorg-video-nvidia-595
```
Every "older" `nvidia-driver-*` package in Ubuntu 25.10's multiverse is a transitional stub that depends on 595. Installing them puts 595 back. Even with apt-pin priorities blocking 595, the 590 metapackage is then unsatisfiable and the install aborts. There is no real 580 / 590 / 575 / 570 driver in 25.10's apt repos — they were all collapsed into forwarders for 595.
Fix: bypass Ubuntu's repo entirely; use NVIDIA's CUDA apt repo (the `ubuntu2404` variant — works on 25.10 because the driver stack is self-contained):
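Roughly as follows — the keyring URL is NVIDIA's published CUDA-repo layout, the pin file matches the path referenced later in this log, and the exact 580 package name is an assumption (verify with `apt-cache search nvidia-open` after adding the repo):

```shell
# Add NVIDIA's CUDA apt repo (ubuntu2404 variant works on 25.10)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb

# Pin: prefer NVIDIA's repo for driver packages, demote Ubuntu's 595 stubs
sudo tee /etc/apt/preferences.d/nvidia-prefer-cuda-repo.pref > /dev/null <<'EOF'
Package: *nvidia*
Pin: origin developer.download.nvidia.com
Pin-Priority: 900

Package: *nvidia*
Pin: release o=Ubuntu
Pin-Priority: 1
EOF

sudo apt-get update
sudo apt-get purge -y 'nvidia-driver-595*'   # clear the broken 595 stack first
sudo apt-get install -y nvidia-open-580      # exact name: check apt-cache search
sudo reboot
```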
After reboot: `nvidia-smi` reports `Driver Version: 580.159.03`. Isaac Sim no longer crashes. Keep `/etc/apt/preferences.d/nvidia-prefer-cuda-repo.pref` in place forever — without it, the next `apt upgrade` silently puts 595 back.
Two gotchas during the downgrade
- `ubuntu-drivers-common` re-pulls 595 via post-install hook. When you install any `nvidia-driver-*`, the package's postinst calls `ubuntu-drivers autoinstall`, which inspects the GPU PCI ID and re-installs whatever Ubuntu thinks is "recommended" — which is 595 on 25.10. The apt pin stops it cleanly.
- DKMS only registers one `nvidia` source per kernel. If you install 590 and 595 simultaneously (without the pin), DKMS builds whichever postinst runs last; the other is silently ignored. Symptom: package `nvidia-driver-590-open 590.48.01` shows installed but `dkms status` says `nvidia/595.58.03`. Wipe `/var/lib/dkms/nvidia` between attempts.
Challenge 3 — Streaming works, but only on default port
With Isaac Sim launching successfully, a single client connects fine via NVIDIA's
"Isaac Sim WebRTC Streaming Client" desktop app: enter LAN IP, port stays at the hardcoded
8011. Works.
To support a second user on the second GPU we wrote two independent launchers
(~/isaac-sim-gpu0.sh, ~/isaac-sim-gpu1.sh), each pinning a Kit instance to one GPU
with isolated cache dirs under ~/docker/isaac-sim-instances/<name>/ (chowned to uid
1234, the in-container isaac-sim user). Each launcher overrides:
```shell
--/exts/omni.services.transport.server.http/host=$SIG_HOST   # bind HTTP signaling to a specific IP
--/exts/omni.services.transport.server.http/port=$SIG_PORT
--/app/livestream/port=$WEBRTC_PORT                          # 49100 vs 49101
```
The client app has no port field — it defaults to :8011 for HTTP signaling and the real WebRTC media on :49100. Solution: a host IP alias, so each instance lives on a different IP using the same default ports.
```shell
sudo ip addr add 192.168.88.128/24 dev enp209s0f0np0
```
User 1 enters 192.168.88.27, user 2 enters 192.168.88.128. The HTTP signaling can be
bound per-IP via the Kit setting above. The WebRTC media port (49100) cannot be
pinned to a specific IP through any documented Kit setting (`/app/livestream/host` does
not exist; we grepped the binaries), so we used iptables NAT to redirect:
```shell
sudo iptables -t nat -A PREROUTING -d 192.168.88.128 -p tcp --dport 49100 \
  -j REDIRECT --to-port 49101
```
Now any TCP packet hitting 192.168.88.128:49100 gets retargeted to gpu1's actual port 49101 before socket dispatch.
Note on IP selection: our first attempt used 192.168.88.28. Linux's duplicate-address detection accepted it because the conflicting device on the LAN didn't respond to ICMP, but ARP requests from outside still resolved .28 to that other device's MAC. Lesson: for a transient secondary IP, pick something well away from the DHCP pool and confirm it with `arp-scan --localnet` or by checking your router's DHCP leases. We moved to .128.
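For reference, a skeleton of what such a per-GPU launcher can look like. The Kit overrides and values are the ones discussed above; everything else (container name handling, volume layout) is illustrative, not a copy of our actual script:

```shell
#!/usr/bin/env bash
# Illustrative skeleton of ~/isaac-sim-gpu1.sh. Kit overrides and ports are
# the ones from the text; volume paths are a sketch.
NAME=isaac-sim-gpu1
GPU=1
SIG_HOST=192.168.88.128   # this instance's IP alias
SIG_PORT=8011
WEBRTC_PORT=49101
DATA=$HOME/docker/isaac-sim-instances/gpu1

docker run -d --name "$NAME" \
  --gpus "device=$GPU" --network=host --shm-size=16g \
  -e "ACCEPT_EULA=Y" \
  -v "$DATA":/data \
  nvcr.io/nvidia/isaac-sim:5.1.0 ./runheadless.sh -v \
  --/exts/omni.services.transport.server.http/host="$SIG_HOST" \
  --/exts/omni.services.transport.server.http/port="$SIG_PORT" \
  --/app/livestream/port="$WEBRTC_PORT"
```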
Challenge 4 — Two simultaneous interactive sessions
Two clients, one per IP, connecting at the same time: the second one fails with
```
[Warning] [carb.livestream-rtc.plugin] onClientEventRaised: NVST_CCE_DISCONNECTED
when m_connectionCount 0 != 1
```
The signaling headers reach gpu1 (proving the iptables redirect works), but the streaming SDK rejects the session before any UDP media setup. The binary contains the constant `NVST_DISCONN_MAX_CONCURRENT_SESSION_LIMIT` — there is a host-level concurrency check.
GitHub Discussion #449 confirms this is a documented limitation of Isaac Sim 5.1.0:
"Custom ports is not available in Isaac Sim 5.1.0 but is available in 6.0. There can be only one primary user stream to control the UI but there can be multiple non-interactive streams as view-only."
So:
- 5.1.0 supports one interactive WebRTC client per host, period.
- View-only secondary streams are possible but not what we want.
- Multi-instance custom-port support arrives in 6.0.
Tried-and-didn't-help, in case anyone reads this and is tempted:
| Attempt | Why it failed |
| --- | --- |
| Disjoint UDP `minHostPort`/`maxHostPort` per instance | Host-level concurrency gate trips before UDP setup |
| `publicEndpointAddress` / `publicEndpointPort` per instance | Caused mismatched self-validation; broke even single-client mode |
| Disabling the `omni.services.livestream.nvcf` extension | Limit lives in `NvStreamServer.so`, not NVCF |
Today the workable answer is "one active user at a time," with two prepared but otherwise idle instances ready to take the session. We left both launchers + the IP alias + iptables rule in place so we can swap users without restart latency, but the practical concurrency cap is 1.
Final Configuration
Files on the host:
| Path | Purpose |
| --- | --- |
| `/etc/apt/preferences.d/nvidia-prefer-cuda-repo.pref` | Locks driver to NVIDIA CUDA repo's 580.x; do not delete |
| `~/isaac-sim-gpu0.sh` | Launches Isaac Sim instance on GPU 0 (`isaac-sim-gpu0` container, port 8011 on 192.168.88.27) |
| `~/isaac-sim-gpu1.sh` | Launches instance on GPU 1 (`isaac-sim-gpu1`, port 8011 on 192.168.88.128 via iptables) |
| `~/docker/isaac-sim-instances/{gpu0,gpu1}/` | Per-instance Kit caches (chowned 1234:1234) |
Transient (lost on reboot — needs re-applying):
```shell
sudo ip addr add 192.168.88.128/24 dev enp209s0f0np0
sudo iptables -t nat -A PREROUTING -d 192.168.88.128 -p tcp --dport 49100 \
  -j REDIRECT --to-port 49101
```
Day-to-day usage:
```shell
# Bring an instance up:
~/isaac-sim-gpu0.sh   # user → 192.168.88.27 in WebRTC client
~/isaac-sim-gpu1.sh   # user → 192.168.88.128 in WebRTC client
# Watch:
docker logs -f isaac-sim-gpu0
# Stop:
docker stop isaac-sim-gpu0
```
Lessons Learned
- Don't trust apt version numbers on a fresh distro release. `nvidia-driver-590-open` on 25.10 advertises version `590.48.01` but ships 595's binaries. Always check the actual `dkms status` and `modinfo nvidia | grep version:` before rebooting.
- Ubuntu's `ubuntu-drivers-common` autoinstaller fights you when you try to use a non-default driver. An apt pin priority of `1` is the cleanest way to silence it without uninstalling the autoinstaller itself.
- `--network=host` is convenient but expensive. Sharing the host UTS / port namespace is what makes most of this hard (single hostname for SDK session keying, port-49100 contention). Bridge networking would let each container have its own host identity but complicates WebRTC ICE traversal — a tradeoff for 6.0.
- `/v1/streaming/ready` is your most useful debugging endpoint. It returns `"Ready for connection"` vs `"Streaming session active (keep alive)"` — perfect for proving which Isaac Sim instance a client actually reached, regardless of which IP they typed.
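Concretely, the readiness check from the last bullet is just a curl against each instance's signaling port (IPs and port are the ones from our setup above):

```shell
# Probe each instance's HTTP signaling endpoint; the body tells you
# whether that instance is idle or already serving a stream.
curl -s http://192.168.88.27:8011/v1/streaming/ready    # gpu0
curl -s http://192.168.88.128:8011/v1/streaming/ready   # gpu1
```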