This allows the host to provide a 32-bit integer that will be sent
in the data of the ENet connect event, similar to X-SS-Ping-Payload
for video and audio.
The host can use this data to uniquely identify a client when IP
addresses are not stable across the various separate connections,
such as when the client is behind a Carrier-Grade NAT.
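A minimal sketch of how this could look with stock ENet's API (the function and constant names
here are illustrative, not the actual ones): the value passed to enet_host_connect() on the
client arrives in event.data of the ENET_EVENT_TYPE_CONNECT event on the host.

    #include <enet/enet.h>

    #define CONNECT_CHANNEL_COUNT 1 /* illustrative channel count */

    /* Client side: pass the host-provided 32-bit identifier in the connect request */
    ENetPeer* connectWithClientId(ENetHost* client, const ENetAddress* remoteAddress,
                                  enet_uint32 clientId) {
        return enet_host_connect(client, remoteAddress, CONNECT_CHANNEL_COUNT, clientId);
    }

    /* Host side: the identifier arrives in the data field of the connect event */
    void serviceConnections(ENetHost* server) {
        ENetEvent event;
        while (enet_host_service(server, &event, 0) > 0) {
            if (event.type == ENET_EVENT_TYPE_CONNECT) {
                enet_uint32 clientId = event.data;
                /* ... match this peer to the client that was issued clientId ... */
                (void)clientId;
            }
        }
    }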
We currently scale bitrate based on remote vs local, SDR vs HDR, and H.264 vs
HEVC vs AV1. This has led to a lot of confused users wondering why the bitrate
doesn't seem to match their selection in some configurations.
In H.264 local streams, we will currently overshoot the selected bitrate by about
20% due to FEC, while remote streams will be right around the selected bitrate due
to remote-specific FEC bitrate adjustments.
HEVC and AV1 streams (as configured by most clients) basically behave similarly
between local and remote, since the codec bitrate adjustment factor of 75% is nearly
the same as the FEC bitrate adjustment factor of 80%. However, this adjustment was
only performed for SDR streams, so local HDR streams would overshoot like H.264.
This change cleans up all this mess by using a single non-codec-specific video
bitrate adjustment for FEC in all cases. It also allows Sunshine to perform the FEC
adjustment on its end if the default FEC value of 20% has been overridden by the
user or if we implement dynamic FEC support in the future.
The net result is that HEVC and AV1 SDR streams will see only a tiny bitrate increase,
but HDR and H.264 streams may see noticeable 20% bitrate reductions that may require the
user to adjust their bitrate setting to reach the effective value they got before.
However, the new behavior should be more intuitive for users going forward since
changing codecs, using a VPN, or enabling HDR won't cause significant changes to the
video bitrate.
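As a rough sketch of the unified adjustment (the exact constant, rounding, and whether the
client or Sunshine applies it may differ from the real code):

    /* Illustrative only: a single FEC-driven factor applied to every stream */
    #define FEC_BITRATE_ADJUSTMENT_NUM 80
    #define FEC_BITRATE_ADJUSTMENT_DEN 100

    static int getAdjustedVideoBitrateKbps(int selectedBitrateKbps) {
        /* Same factor for H.264, HEVC, and AV1, SDR and HDR, local and remote */
        return selectedBitrateKbps * FEC_BITRATE_ADJUSTMENT_NUM / FEC_BITRATE_ADJUSTMENT_DEN;
    }

For example, a 20,000 kbps selection would give the encoder 16,000 kbps, and roughly 20% FEC
overhead on top of that lands the on-wire total back near the selected value.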
This causes problems on multi-homed GFE and Sunshine (prior to v0.21) hosts
because audio and video data can be sent back from an address different than
the one we used as our original destination address.
This reverts commit c13f4a323fc5d25bf9e1d18ac8166d6b5fad22b8.
This means we can ensure a consistent local address for our outgoing PING
traffic to keep the UDP flows alive without having to call connect() which breaks
with multi-homed hosts on GFE and Sunshine v0.20 and earlier.
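A minimal POSIX-socket sketch of the idea (names and the ping payload are illustrative): bind
the socket to the chosen local address but leave it unconnected, so every outgoing PING uses
the same source address while replies from any of the host's addresses are still delivered.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int createPingSocket(const struct sockaddr_in* localAddr,
                         const struct sockaddr_in* remoteAddr) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
            return -1;
        }

        /* Pin the local (source) address without restricting the peer address */
        if (bind(sock, (const struct sockaddr*)localAddr, sizeof(*localAddr)) < 0) {
            close(sock);
            return -1;
        }

        /* Send PINGs with sendto() instead of connect()+send(), so traffic coming
           back from a different host address is not dropped by the kernel */
        const char ping[] = "PING";
        sendto(sock, ping, sizeof(ping), 0,
               (const struct sockaddr*)remoteAddr, sizeof(*remoteAddr));
        return sock;
    }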
The batch delay will basically always trigger for motion reports that contain both accel and gyro
data (like PS4/PS5 controllers) because we will report accel and gyro data within 1 ms every time.
This gets even worse when multiple controllers are connected and reporting motion data.
There shouldn't be any real need to batch these because they're unreliable sequenced packets,
so we can just remove the delay.
The old method was so inflexible (depending on consecutive events to batch) that it couldn't
really handle stressful cases like high polling rate mice combined with multiple gamepads
reporting motion sensor events.
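A hypothetical sketch of the resulting flush policy; the names and types below are
illustrative, not the real input queue code.

    #include <stdbool.h>

    typedef struct {
        bool hasMotionData; /* accel/gyro report from a PS4/PS5-style controller */
        /* ... payload ... */
    } INPUT_PACKET;

    extern void sendInputPacketNow(INPUT_PACKET* packet);  /* hypothetical */
    extern void enqueueForBatching(INPUT_PACKET* packet);  /* hypothetical */

    void submitInputPacket(INPUT_PACKET* packet) {
        if (packet->hasMotionData) {
            /* Unreliable sequenced: a stale sample is simply superseded by the next
               one, so there's no value in holding it for a ~1 ms batching window */
            sendInputPacketNow(packet);
        } else {
            /* Other input types may still be coalesced before sending */
            enqueueForBatching(packet);
        }
    }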
It's possible for a sequence of packets with seqnums like 2, 0, 1, 2 to end up hitting the fast path
again after filling in a gap of OOS packets. The fast path assumes duplicate packets are caught
by the next contiguous sequence number check, but this is only true if that value is kept updated.
Since the slow path doesn't update the next contiguous sequence number, it's no longer safe to
use the fast path for the remaining packets in the frame because duplicate packets would be
mishandled.
Queuing duplicate packets violates all sorts of queue assumptions and can cause us to return
corrupt data to the decoder or attempt an FEC recovery without enough shards.
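A hypothetical sketch of the invariant involved (illustrative names, not the actual queue
code): the fast path's duplicate check is only valid while the next contiguous sequence number
is kept current, so once the slow path has filled a gap without advancing it, the remaining
packets in the frame must take the slow path.

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint16_t nextContiguousSeq;  /* next expected in-order sequence number */
        bool usedSlowPath;           /* set once an OOS gap was filled for this frame */
        /* ... queued packets ... */
    } REORDER_QUEUE;

    bool canUseFastPath(REORDER_QUEUE* q, uint16_t seq) {
        /* The fast path only catches duplicates via the next contiguous seqnum
           check. If the slow path filled a gap without advancing that value, a
           replayed seqnum (the second '2' in 2, 0, 1, 2) would look "new", so
           fall back to the slow path for the rest of the frame. */
        return !q->usedSlowPath && seq == q->nextContiguousSeq;
    }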
Time spent processing client callbacks would contribute to increased control stream latency,
which means higher RTTs, longer waits before retransmissions, and higher client-reported
network latency stats.
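Assuming the fix is to invoke client callbacks off the control stream thread, a hypothetical
hand-off might look like the sketch below (a single-slot mailbox for brevity; a real
implementation would queue pending invocations):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef void (*ClientCallbackFn)(int eventArg);

    static pthread_mutex_t cbLock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cbCond = PTHREAD_COND_INITIALIZER;
    static bool cbPending;
    static ClientCallbackFn cbFn;
    static int cbArg;

    /* Callback thread: waits for work and runs the (possibly slow) client code */
    static void* callbackThreadProc(void* context) {
        (void)context;
        for (;;) {
            pthread_mutex_lock(&cbLock);
            while (!cbPending) {
                pthread_cond_wait(&cbCond, &cbLock);
            }
            ClientCallbackFn fn = cbFn;
            int arg = cbArg;
            cbPending = false;
            pthread_mutex_unlock(&cbLock);

            fn(arg); /* client code runs here, off the control stream thread */
        }
        return NULL;
    }

    /* Control stream loop: just hand off the callback and keep servicing the socket */
    static void postClientCallback(ClientCallbackFn fn, int arg) {
        pthread_mutex_lock(&cbLock);
        cbFn = fn;
        cbArg = arg;
        cbPending = true;
        pthread_cond_signal(&cbCond);
        pthread_mutex_unlock(&cbLock);
    }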
Since we always allocate a fixed size, these aren't likely to be exploitable,
but we ought to fix them anyway. Worst case, we will just read some garbage
and generate corrupt video output.