143 Commits

Author SHA1 Message Date
RustDesk
bd7bc52c43 Update build.yaml 2026-01-13 10:26:46 +08:00
rustdesk
34cbd57e71 bump version 2026-01-13 10:20:40 +08:00
rustdesk
46e9ecd47d fix pr 2026-01-12 17:28:25 +08:00
rustdesk
d0770e6c06 high rustc 2026-01-12 15:08:01 +08:00
rustdesk
f87372ead0 GHCR manifest 2026-01-12 14:59:45 +08:00
RustDesk
04efb7033c Update README.md 2026-01-12 14:54:50 +08:00
RustDesk
9bae9f2f39 Update build.yaml 2026-01-12 13:28:04 +08:00
rustdesk
7c2057c360 remove annoying fmt check 2026-01-10 16:33:46 +08:00
rustdesk
f61b31f04e 1.1.15 debain log 2026-01-10 16:07:16 +08:00
rustdesk
bb455a3eb7 merge ghcr to build.yml 2026-01-10 15:51:06 +08:00
RustDesk
631618a6be Update build.yaml 2026-01-10 14:47:34 +08:00
rustdesk
34946219c3 update workflow 2026-01-09 19:12:49 +08:00
rustdesk
5d1b4ba78a connection log query 2026-01-06 23:59:47 +08:00
Nawer
7e921be539 chore: Added kubernetes example file (#623) 2026-01-06 17:58:33 +08:00
lrin
716041bac1 libs: Fix hbb_common Submodule (#553)
The hbb_common Submodule was accidentally deleted in c650217919
2025-11-22 15:24:50 +08:00
Guiorgy
6b72cefbc5 define the HOME env to allow running rootless (#612) 2025-11-22 15:20:15 +08:00
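A minimal sketch of what this enables, assuming the classic image and a host-owned data directory; the UID is a placeholder and the data directory must be writable by it:

```bash
# HOME is now defined inside the image, so hbbs/hbbr can resolve their data paths
# even when the container runs as a non-root user.
docker run --name hbbs --net=host --user 1000:1000 \
  -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
```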
Geert Stappers
386c83874c debian/changelog more like the first two (#573)
* debian/changelog more like the first two

Added "Who and When" lines, added empty lines as separator.
The time stamps were retrieved from the git commit log.

All entries now look like:

rustdesk-server (1.1.7) UNRELEASED; urgency=medium

  * ipv6 support

 -- rustdesk <info@rustdesk.com>  Wed, 11 Jan 2023 11:27:00 +0800

rustdesk-server (1.1.6) UNRELEASED; urgency=medium

  * Initial release

 -- open-trade <info@rustdesk.com>  Fri, 15 Jul 2022 12:27:27 +0200

* debian/changelog: reformat a date stamp

The "wrong format" was discovered by Lintian.
2025-11-03 13:57:06 +08:00
Geert Stappers
5cc309dc9f new file: debian/README.source (#592)
Describes how to build
2025-11-03 13:56:00 +08:00
rustdesk
4e8d6d08e2 fix https://github.com/rustdesk/rustdesk-server/issues/435 2025-11-03 13:03:19 +08:00
c8a5d41906 chore: Update to Windows 2022 runner image (#568) 2025-07-13 07:33:30 +08:00
rustdesk
2c57e51ec5 higher default bandwidth 2025-06-22 15:03:13 +09:00
RustDesk
796a75af5c Update README.md 2025-06-16 14:46:51 +08:00
rustdesk
f09b79c6da README-ZH.md 2025-06-16 00:23:50 +08:00
rustdesk
1e84478837 simplify doc 2025-06-16 00:23:33 +08:00
RustDesk
2417c7ddd0 Update build.yaml 2025-05-11 20:54:24 +08:00
bato3
235a3c326c In cargo it is named "rustdesk-utils" (#551) 2025-04-29 20:40:55 +08:00
Guoiaoang
e4c85764f9 fix translation errors (#545) 2025-04-19 09:48:05 +08:00
dependabot[bot]
c650217919 Git submodule: Bump libs/hbb_common from 7cf11f7 to 83419b6 (#524)
Bumps [libs/hbb_common](https://github.com/rustdesk/hbb_common) from `7cf11f7` to `83419b6`.
- [Commits](7cf11f7b77...83419b6549)

---
updated-dependencies:
- dependency-name: libs/hbb_common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-07 09:06:09 +08:00
dependabot[bot]
262a1d49f8 Git submodule: Bump libs/hbb_common from 49c6b24 to 7cf11f7 (#523)
Bumps [libs/hbb_common](https://github.com/rustdesk/hbb_common) from `49c6b24` to `7cf11f7`.
- [Commits](49c6b24a7a...7cf11f7b77)

---
updated-dependencies:
- dependency-name: libs/hbb_common
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-24 22:45:13 +08:00
XLion
f0a6c3de0b Create dependabot.yml (#522) 2025-02-24 22:42:23 +08:00
621da3c8fe fix: 127.0.0.1 is not loopback (#515) 2025-02-06 20:25:10 +08:00
RustDesk
801826f36d Update build.yaml 2025-01-31 15:11:28 +08:00
rustdesk
8557c4ab82 freebsd still not work 2025-01-25 20:47:19 +08:00
rustdesk
1fd1a2a3b6 1.1.14 2025-01-25 20:37:16 +08:00
21pages
bcdfda43af update dep (#507)
Signed-off-by: 21pages <sunboeasy@gmail.com>
2025-01-22 20:27:25 +08:00
XLion
11a1f99169 Fix test error (#505) 2025-01-21 23:53:23 +08:00
21pages
464a687043 fix windows crash (#504)
Signed-off-by: 21pages <sunboeasy@gmail.com>
2025-01-21 17:03:45 +08:00
RustDesk
66c78c23fe Update build.yaml 2025-01-21 01:33:42 +08:00
rustdesk
e251161c5d rust 1.81 2025-01-21 01:21:13 +08:00
rustdesk
b9e5968299 1.1.13 2025-01-21 01:09:21 +08:00
21pages
7a509f6975 replace libs/hbb_common with submodule (#502)
cargo update -p schannel to fix crash on higher rust toolchain, https://github.com/seanmonstar/reqwest/issues/2311

Signed-off-by: 21pages <sunboeasy@gmail.com>
2025-01-20 17:34:22 +08:00
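A minimal sketch of the two steps this commit implies for a local checkout; anything beyond what the message states is an assumption:

```bash
# Fetch the new libs/hbb_common submodule
git submodule update --init --recursive

# Refresh schannel in Cargo.lock to avoid the crash on newer Rust toolchains
# (see https://github.com/seanmonstar/reqwest/issues/2311)
cargo update -p schannel
```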
Integral
772db7422f refactor: replace static with const for global constants (#494) 2024-12-07 17:54:53 +08:00
XLion
2f246537df README: Restructure Container expression; add ghcr; multiple tidy up (#479)
* Update README.md

* make hbbs first everywhere

* Update README.md

* Fix link

* dockerhub to Docker Hub; Suggest user use ghcr if can't access Docker Hub

* Add `

* Add Debian 12
2024-12-02 21:14:56 +08:00
XLion
4c74586ce0 Borrow Cargo.toml's profile.release from RustDesk for better binary (#481) 2024-10-14 11:00:38 +08:00
XLion
2ac3169d77 Don't test with editing README (#480) 2024-10-12 18:14:37 +08:00
rustdesk
6f18a97644 v1.1.12 2024-10-07 16:21:36 +08:00
XLion
3b386b6b54 feat: Publish container images to GitHub ghcr.io (#473)
* ghcr

* fix name

* ghcr

* ghcr

* ghcr

* ghcr

* ghcr

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr update action

* ghcr classic

* ghcr classic

* ghcr: better naming and tidy up

* ghcr classic fix chmod

* tidy up

* tidy up

* if-no-files-found: error
2024-09-30 08:46:35 +08:00
Fionera
b37033d92c docs: the servers are by comma instead of colon (#462) 2024-09-28 23:34:47 +08:00
XLion
b7bab80bfe Bump S6 overlay and fix env warnings (#472) 2024-09-28 23:33:47 +08:00
Dominik Hassler
041a603173 add illumos support (#433) 2024-06-29 22:25:47 +08:00
rustdesk
e40994d62e remove useless KEY_FOR_API 2024-05-26 21:43:13 +08:00
rustdesk
5078a1f797 reuse port, and revert hbbr -k 2024-05-24 18:37:11 +08:00
rustdesk
a22dacce0c fix ci 2024-05-24 18:09:12 +08:00
rustdesk
064c9e4bb4 fix ci 2024-05-24 18:02:39 +08:00
rustdesk
3cf0f6560f bump to 1.1.11 2024-05-24 17:59:53 +08:00
rustdesk
c4c26dd6d7 change -k to default _ 2024-05-24 17:57:37 +08:00
r00t
4240c47244 Add Simplified Chinese Readme File. (#409) 2024-04-24 19:04:02 +08:00
writegr
6e91f41a10 chore: fix some typos in comments (#404)
Signed-off-by: writegr <wellweek@outlook.com>
2024-04-18 14:39:10 +08:00
3x3cut0r
1a7cee157c Update README.md (#389)
RELAY_SERVERS was renamed to RELAY, as the former variable does not exist
2024-03-15 21:30:12 +08:00
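For reference, a minimal sketch of the corrected usage with the S6 image, mirroring the README's example; host names are placeholders:

```bash
# The s6-overlay image reads RELAY (not RELAY_SERVERS)
docker run --name rustdesk-server --net=host \
  -e "RELAY=rustdeskrelay.example.com" \
  -e "ENCRYPTED_ONLY=1" \
  -v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```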
dforel
72641270f1 Update README.md (#388)
* Update README.md

Add a brief note about things that can go wrong

* Update README.md

* Update README.md

---------

Co-authored-by: RustDesk <71636191+rustdesk@users.noreply.github.com>
2024-03-15 10:22:21 +08:00
tschettervictor
19f8d3a0f4 Add description of relay server variable for clarity (#378)
Since "rustdesk_hbbs_ip" is not the address the server listens on, but rather the relay server to use, I added a description to clarify this
2024-02-27 21:44:56 +08:00
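A hedged sketch of the variable in /etc/rc.conf; the enable knob shown here is an assumption following the usual rc.d convention:

```bash
# /etc/rc.conf
rustdesk_hbbs_enable="YES"            # assumed rc.d enable knob
rustdesk_hbbs_ip="relay.example.com"  # the relay (hbbr) host passed via -r, not a listen address
```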
XLion
bac9548f86 Add README for Traditional Chinese (#364)
* Add README-TW.md

* Add "繁體中文" hyperlink to README.md

* Add English hyperlink to README-TW.md

* Add "繁體中文" hyperlink to README-NL.md

* Add "繁體中文" hyperlink to README-DE.md
2024-02-16 10:48:19 +08:00
rustdesk
79f0eb497b trim private key 2024-01-31 11:30:42 +08:00
Paolo Asperti
94ae51458c fix Pk size check (#361)
* more descriptive error

* fix key size check
2024-01-31 11:21:00 +08:00
rustdesk
778c89efb1 bump to 1.1.10-3 2024-01-31 11:20:04 +08:00
rustdesk
a7a0fa7cb5 bump to 1.1.10-2 2024-01-30 19:23:36 +08:00
RustDesk
2d8f6ae4f4 Update changelog 2024-01-30 19:16:44 +08:00
RustDesk
324dfd6a1f fix https://github.com/rustdesk/rustdesk-server/issues/306 2024-01-30 19:02:30 +08:00
RustDesk
70242e6eb2 Update common.rs 2024-01-30 18:29:04 +08:00
RustDesk
42cdfb0885 Merge pull request #348 from paspo/pk-check
private key size check
2024-01-30 18:24:04 +08:00
paspo
cea8403dbc private key size check 2024-01-30 11:18:43 +01:00
RustDesk
0ebfc09f8b Merge pull request #333 from ledeuns/patch-1
Update mac_address
2023-12-25 08:49:52 +08:00
Denis Fondras
2e06125974 Update mac_address
The latest version of mac_address allows compiling rustdesk-server on OpenBSD
2023-12-24 18:21:51 +01:00
RustDesk
fc775102ff Delete .github/workflows/test-selfhosted.yaml 2023-12-22 20:03:30 +08:00
RustDesk
891f388040 Merge pull request #328 from paspo/slim-docker-classic
minimal docker classic images
2023-12-07 17:09:11 +08:00
paspo
8b7f3491b1 minimal docker classic images 2023-12-07 08:15:00 +01:00
rustdesk
27ac9dec56 remove docker again 2023-12-06 02:05:33 +08:00
rustdesk
acf2c6d787 try archlinux/archlinux:base-devel 2023-12-06 01:19:57 +08:00
rustdesk
5f137710be do not use docker for the runner 2023-12-06 00:33:52 +08:00
rustdesk
1a6016f08f specify image 2023-12-06 00:28:59 +08:00
rustdesk
f67e8991ef test self-hosted runner 2023-12-05 23:43:41 +08:00
rustdesk
d81010224d remove sign 2023-12-05 17:22:40 +08:00
rustdesk
4c27143125 bump 1.1.9 2023-12-05 17:09:07 +08:00
rustdesk
9461bbe8f3 remove unused 2023-12-01 11:33:16 +08:00
rustdesk
1142cf105b Fix #324 to remove unsafe 2023-12-01 11:32:07 +08:00
RustDesk
5133af1863 Merge pull request #320 from tschettervictor/patch-1
Update rustdesk-hbbs
2023-11-17 11:44:29 +08:00
tschettervictor
7c7d554609 Update rustdesk-hbbs
typo
2023-11-16 18:27:01 -07:00
RustDesk
272a094fde Delete ask-a-question.md 2023-08-08 10:18:04 +08:00
RustDesk
b33d7954be Merge pull request #292 from dinger1986/master
Update README.md
2023-08-04 16:43:06 +08:00
dinger1986
f519be8e92 Update README.md 2023-08-04 09:23:02 +01:00
RustDesk
33331be361 Merge pull request #289 from madpilot78/FreeBSD_rc_scripts_fixes
FreeBSD rc scripts fixes
2023-07-25 13:24:21 +08:00
Guido Falsi
04a9d307c5 Change chdir location to /var/db, according to hier(7).
BSD systems do not really have a /var/lib directory, although it happens to exist on live systems because some installed software creates it.
2023-07-24 17:06:23 +02:00
Guido Falsi
35c2386c98 Remove unneeded quotes. 2023-07-24 17:05:57 +02:00
Guido Falsi
4f41300450 Fix comments. 2023-07-24 17:01:34 +02:00
rustdesk
d6f99ed9a2 fix postrm 2023-07-22 22:06:51 +08:00
rustdesk
95fe0e5a78 fix https://github.com/rustdesk/rustdesk-server/issues/286 2023-07-22 21:57:03 +08:00
rustdesk
b713303c15 fix naming 2023-07-06 00:50:11 +08:00
rustdesk
d8f88e72c1 ubuntu22.04 has no i386 2023-07-06 00:16:01 +08:00
rustdesk
d3a459542d make classic also multiarch 2023-07-05 23:45:50 +08:00
rustdesk
afeebe852d github ci does not support 18.04 anymore 2023-06-29 10:41:39 +08:00
rustdesk
f1e941bf9f fix is_loopback 2023-06-16 15:13:23 +08:00
RustDesk
c871978475 Update setup.nsi 2023-06-13 10:04:47 +08:00
rustdesk
411502cd0b https://github.com/rustdesk/rustdesk-server/issues/260 2023-06-08 20:02:30 +08:00
rustdesk
243fb1fb06 more version fix 2023-06-08 19:19:02 +08:00
rustdesk
9657dcf596 release tag 2023-06-08 19:19:02 +08:00
RustDesk
946845cd01 rust 1.70 2023-06-08 16:18:00 +08:00
rustdesk
fd1c21b114 fmt 2023-06-08 14:11:37 +08:00
RustDesk
d8e3cb9e65 Merge pull request #249 from nsgundy/FixNoDirectConnectionWhenBothPeersOnLan
Fix no direct connection when both peers on LAN
2023-06-08 13:53:09 +08:00
rustdesk
3a7904fa8e fix test_hbb and bump version 1.1.8 2023-06-08 13:42:34 +08:00
nsgundy
85a20769fb Consider peers to be on same intranet if is_lan() returns true for both 2023-05-19 15:21:59 +00:00
nsgundy
aeeca0d7d1 Fix IPv4-mapped IPv6 addresses not being considered part of the network 2023-05-19 15:21:20 +00:00
RustDesk
482d7fb8cc Merge pull request #247 from Mr-Update/master
Create README-DE.md
2023-05-09 19:23:18 +08:00
Mr-Update
c291900e37 Update README.md 2023-04-27 23:40:31 +02:00
Mr-Update
089352420f Update README-NL.md 2023-04-27 23:40:04 +02:00
Mr-Update
1addf8c9eb Create README-DE.md 2023-04-27 23:39:30 +02:00
RustDesk
1f7d3fa05c Merge pull request #236 from bahdotsh/bahdotsh/language-improvement
improved language and corrected spelling mistakes in README.md
2023-04-03 15:37:22 +08:00
bahdotsh
c3b6e1351f improved language and corrected spelling mistakes in README.md 2023-04-03 13:04:41 +05:30
RustDesk
336b281657 Merge pull request #235 from n-connect/master
Core dump fix and rc.d scripts optimisation for FreeBSD
2023-04-01 17:31:58 +08:00
n-connect
57898642e8 Variable optimisation for hbbr rc.d service
Variable-based IP definition; can be set from /etc/rc.conf
2023-04-01 11:19:43 +02:00
n-connect
fa0c006ac5 Variable optimisation for hbbs rc.d service
Variable-based IP definition; can be set from /etc/rc.conf
2023-04-01 11:18:44 +02:00
n-connect
178ff59623 Fix for core dumps on FreeBSD
Flexi_logger options for async writemode https://github.com/rustdesk/rustdesk-server/pull/232#issuecomment-1491347232
2023-04-01 11:14:51 +02:00
RustDesk
7bbed69ad2 Merge pull request #232 from n-connect/master
U 20.04 for binary build & release
2023-03-30 00:43:49 +08:00
n-connect
d4b6e6ee28 U 20.04 for binary build & release
Build and GitHub release to 20.04
2023-03-29 18:40:33 +02:00
RustDesk
b3fbb9e179 Merge pull request #231 from n-connect/master
Move from Ubuntu 18.04 VM (Github Actions)
2023-03-29 22:45:38 +08:00
n-connect
b44eb1bbc3 Linux toolchain to 1.67.1
Changes: Ubuntu boxes rolled back to 18.04 where applicable.

Linux toolchain from 1.62 -> 1.67.1, Windows toolchain untouched (1.62)
2023-03-29 16:43:02 +02:00
n-connect
d88286642e Move from Ubuntu 18.04
Based on https://github.com/nextcloud/notify_push/releases, 18.04 will be deprecated. Also moving the toolchain to 1.68 to hopefully fix the FreeBSD build core dump issue.
2023-03-29 14:28:51 +02:00
RustDesk
74ff886900 Merge pull request #225 from n-connect/master
Adding log capability over syslog
2023-03-28 18:38:24 +08:00
n-connect
9e716b3b7b Minor update 2023-03-27 15:59:06 +02:00
n-connect
bea99ae315 Adding log capability over syslog
Logging over syslog added via a semi-duplicate line commented out by default. Instructions in the line above.
2023-03-25 22:30:59 +01:00
n-connect
6068b5941c Adding log capability over syslog
Logging over syslog added via a semi-duplicate line commented out by default. Instructions in the line above.
2023-03-25 22:29:02 +01:00
rustdesk
675bf3c1f5 fix command line buffer and test addr 2023-03-16 00:53:58 +08:00
RustDesk
dc81956d42 Merge pull request #219 from FastAct/master
Create README-NL.md
2023-03-15 19:33:42 +08:00
FastAct
ffe736be17 Create README-NL.md
Add Dutch translation
2023-03-15 12:21:01 +01:00
RustDesk
b83dae4cc4 Merge pull request #212 from n-connect/patch-1
Update build.yaml - adding FreeBSD build
2023-03-06 08:36:49 +08:00
n-connect
8d9203ecdb Update build.yaml - adding FreeBSD build 2023-03-05 19:44:56 +01:00
RustDesk
13321b5a90 Merge pull request #210 from n-connect/master
Log files enables
2023-03-03 10:32:00 +08:00
n-connect
0a8c39c11f hbbs logging to file
Logging enabled via file redirection (not syslog, as it can't tell/pass the logger program's name)
2023-03-03 01:11:26 +01:00
n-connect
7dd812c79a hbbr logging to file
Logging enabled via file redirection (not syslog, as it can't tell/pass the logger program's name)
2023-03-03 01:10:09 +01:00
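A minimal sketch of the redirection approach these scripts use (paths are placeholders); syslog is avoided because the redirection layer can't pass the program's name to the logger:

```bash
# Append hbbs output to a dedicated log file instead of syslog
/usr/local/bin/hbbs -r relay.example.com >> /var/log/rustdesk-hbbs.log 2>&1
```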
RustDesk
9d524443ec Merge pull request #208 from n-connect/master
FreeBSD rcd scripts for hbbs & hbbr
2023-03-02 21:39:32 +08:00
n-connect
35a192a478 Create rustdesk-hbbs
FreeBSD rcd script running hbbs as a service. Service user, group, pid, and running directory are handled. The IP address for the -r option needs to be changed manually.
2023-03-02 13:47:07 +01:00
n-connect
2f4235a968 Create rustdesk-hbbr
FreeBSD rcd script running hbbr as a service. Service user, group, pid, and running directory are handled.
2023-03-02 13:44:13 +01:00
RustDesk
12b57238d2 Merge pull request #207 from n-connect/master
Update Cargo.toml for FreeBSD build
2023-03-02 20:31:30 +08:00
n-connect
fe805e8554 Update Cargo.toml for FreeBSD build
The crates.io package local-ip-address is FreeBSD compatible from v0.5 onward, i.e. it compiles and works. Simply changing the version to v0.5.1 in Cargo.toml made a successful release build possible on FreeBSD 13 with the prepackaged Rust 1.67.1.
2023-03-02 13:19:10 +01:00
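A sketch of the described change on a FreeBSD 13 box with the packaged Rust 1.67.1; the sed call uses BSD syntax and assumes the dependency sits on its own line in Cargo.toml:

```bash
# Pin local-ip-address to a FreeBSD-compatible release, then do a release build
sed -i '' 's/^local-ip-address = .*/local-ip-address = "0.5.1"/' Cargo.toml
cargo build --release
```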
50 changed files with 2280 additions and 6125 deletions


@@ -1,14 +0,0 @@
---
name: Ask a question
about: Ask the community for help
title: ''
labels: 'question'
assignees: ''
---
This is the place for generic questions. Please stay on topic and be polite.
**Notes**
- Please write in English only. If you provide some images in different languages, you're required to write a translation in English.
- In any case, **NEVER** put here the content of your `id_ed25519` file

11
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,11 @@
version: 2
updates:
  - package-ecosystem: "gitsubmodule"
    directory: "/"
    target-branch: "master"
    schedule:
      interval: "daily"
    commit-message:
      prefix: "Git submodule"
    labels:
      - "dependencies"


@@ -4,8 +4,12 @@ name: build
# please setup some secrets before running this workflow:
# DOCKER_IMAGE should be the target image name on docker hub (e.g. "rustdesk/rustdesk-server-s6" )
# DOCKER_IMAGE_CLASSIC should be the target image name on docker hub for the old build (e.g. "rustdesk/rustdesk-server" )
# DOCKER_USERNAME is the username you normally use to login at https://hub.docker.com/
# DOCKER_PASSWORD is a token you should create under "account settings / security" with read/write access
# DOCKER_HUB_USERNAME is the username you normally use to login at https://hub.docker.com/
# DOCKER_HUB_PASSWORD is a token you should create under "account settings / security" with read/write access
permissions:
contents: read
packages: write
on:
workflow_dispatch:
@@ -19,6 +23,8 @@ on:
env:
CARGO_TERM_COLOR: always
LATEST_TAG: latest
GHCR_IMAGE: ghcr.io/rustdesk/rustdesk-server-s6
GHCR_IMAGE_CLASSIC: ghcr.io/rustdesk/rustdesk-server
jobs:
@@ -26,7 +32,7 @@ jobs:
build:
name: Build - ${{ matrix.job.name }}
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
@@ -35,16 +41,19 @@ jobs:
- { name: "arm64v8", target: "aarch64-unknown-linux-musl" }
- { name: "armv7", target: "armv7-unknown-linux-musleabihf" }
- { name: "i386", target: "i686-unknown-linux-musl" }
#- { name: "amd64fb", target: "x86_64-unknown-freebsd" }
steps:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: "1.62"
toolchain: "1.90"
override: true
default: true
components: rustfmt
@@ -62,7 +71,7 @@ jobs:
run: chmod -v a+x target/${{ matrix.job.target }}/release/*
- name: Publish Artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: binaries-linux-${{ matrix.job.name }}
path: |
@@ -73,17 +82,19 @@ jobs:
build-win:
name: Build - windows
runs-on: windows-2019
runs-on: windows-2022
steps:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Install toolchain
uses: actions-rs/toolchain@v1
with:
toolchain: "1.62"
toolchain: "1.90"
override: true
default: true
components: rustfmt
@@ -112,6 +123,7 @@ jobs:
- name: Sign exe files
uses: GermanBluefox/code-sign-action@v7
if: false
with:
certificate: '${{ secrets.WINDOWS_PFX_BASE64 }}'
password: '${{ secrets.WINDOWS_PFX_PASSWORD }}'
@@ -140,6 +152,7 @@ jobs:
- name: Sign UI setup file
uses: GermanBluefox/code-sign-action@v7
if: false
with:
certificate: '${{ secrets.WINDOWS_PFX_BASE64 }}'
password: '${{ secrets.WINDOWS_PFX_PASSWORD }}'
@@ -148,7 +161,7 @@ jobs:
recursive: false
- name: Publish Artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: binaries-windows-x86_64
path: |
@@ -165,21 +178,22 @@ jobs:
needs:
- build
- build-win
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
job:
- { os: "linux", name: "amd64" }
- { os: "linux", name: "arm64v8" }
- { os: "linux", name: "armv7" }
- { os: "linux", name: "i386" }
- { os: "windows", name: "x86_64" }
- { os: "linux", name: "amd64", suffix: "" }
- { os: "linux", name: "arm64v8", suffix: "" }
- { os: "linux", name: "armv7", suffix: "" }
- { os: "linux", name: "i386", suffix: "" }
#- { os: "linux", name: "amd64fb", suffix: "" }
- { os: "windows", name: "x86_64", suffix: "-unsigned" }
steps:
- name: Download binaries (${{ matrix.job.os }} - ${{ matrix.job.name }})
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: binaries-${{ matrix.job.os }}-${{ matrix.job.name }}
path: ${{ matrix.job.name }}
@@ -191,20 +205,20 @@ jobs:
run: |
sudo apt update
DEBIAN_FRONTEND=noninteractive sudo apt install -y zip
zip ${{ matrix.job.name }}/rustdesk-server-${{ matrix.job.os }}-${{ matrix.job.name }}.zip ${{ matrix.job.name }}/*
zip ${{ matrix.job.name }}/rustdesk-server-${{ matrix.job.os }}-${{ matrix.job.name }}${{ matrix.job.suffix }}.zip ${{ matrix.job.name }}/*
- name: Create Release (${{ matrix.job.os }} - (${{ matrix.job.name }})
uses: softprops/action-gh-release@v1
with:
draft: true
files: ${{ matrix.job.name }}/rustdesk-server-${{ matrix.job.os }}-${{ matrix.job.name }}.zip
files: ${{ matrix.job.name }}/rustdesk-server-${{ matrix.job.os }}-${{ matrix.job.name }}${{ matrix.job.suffix }}.zip
# docker build and push of single-arch images
docker:
name: Docker push - ${{ matrix.job.name }}
needs: build
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
@@ -218,9 +232,11 @@ jobs:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Download binaries
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: binaries-linux-${{ matrix.job.name }}
path: docker/rootfs/usr/bin
@@ -238,8 +254,16 @@ jobs:
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Log in to GHCR
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
registry: ghcr.io
username: rustdesk
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
@@ -256,17 +280,21 @@ jobs:
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Build and push Docker image
uses: docker/build-push-action@v3
uses: docker/build-push-action@v5
with:
context: "./docker"
platforms: ${{ matrix.job.docker_platform }}
push: true
provenance: false
build-args: |
S6_ARCH=${{ matrix.job.s6_platform }}
tags: |
${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-${{ matrix.job.name }}
${{ secrets.DOCKER_IMAGE }}:${{ env.GIT_TAG }}-${{ matrix.job.name }}
${{ secrets.DOCKER_IMAGE }}:${{ env.MAJOR_TAG }}-${{ matrix.job.name }}
${{ env.GHCR_IMAGE }}:${{ env.LATEST_TAG }}-${{ matrix.job.name }}
${{ env.GHCR_IMAGE }}:${{ env.GIT_TAG }}-${{ matrix.job.name }}
${{ env.GHCR_IMAGE }}:${{ env.MAJOR_TAG }}-${{ matrix.job.name }}
labels: ${{ steps.meta.outputs.labels }}
# docker build and push of multiarch images
@@ -274,7 +302,7 @@ jobs:
name: Docker manifest
needs: docker
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
steps:
@@ -282,8 +310,8 @@ jobs:
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Get git tag
id: vars
@@ -293,10 +321,18 @@ jobs:
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Log in to GHCR
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
registry: ghcr.io
username: rustdesk
password: ${{ secrets.GITHUB_TOKEN }}
# manifest for :1.2.3 tag
# this has to run only if invoked by a new tag
- name: Create and push manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@master
uses: Noelware/docker-manifest-action@0.4.3
if: github.event_name != 'workflow_dispatch'
with:
base-image: ${{ secrets.DOCKER_IMAGE }}:${{ env.GIT_TAG }}
@@ -305,7 +341,7 @@ jobs:
# manifest for :1 tag (major release)
- name: Create and push manifest (:major)
uses: Noelware/docker-manifest-action@master
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ secrets.DOCKER_IMAGE }}:${{ env.MAJOR_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE }}:${{ env.MAJOR_TAG }}-amd64,${{ secrets.DOCKER_IMAGE }}:${{ env.MAJOR_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE }}:${{ env.MAJOR_TAG }}-armv7,${{ secrets.DOCKER_IMAGE }}:${{ env.MAJOR_TAG }}-i386
@@ -313,40 +349,68 @@ jobs:
# manifest for :latest tag
- name: Create and push manifest (:latest)
uses: Noelware/docker-manifest-action@master
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-amd64,${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-armv7,${{ secrets.DOCKER_IMAGE }}:${{ env.LATEST_TAG }}-i386
push: true
# GHCR manifests
# manifest for :1.2.3 tag
- name: Create and push GHCR manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@0.4.3
if: github.event_name != 'workflow_dispatch'
with:
base-image: ${{ env.GHCR_IMAGE }}:${{ env.GIT_TAG }}
extra-images: ${{ env.GHCR_IMAGE }}:${{ env.GIT_TAG }}-amd64,${{ env.GHCR_IMAGE }}:${{ env.GIT_TAG }}-arm64v8,${{ env.GHCR_IMAGE }}:${{ env.GIT_TAG }}-armv7,${{ env.GHCR_IMAGE }}:${{ env.GIT_TAG }}-i386
push: true
# manifest for :1 tag (major release)
- name: Create and push GHCR manifest (:major)
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ env.GHCR_IMAGE }}:${{ env.MAJOR_TAG }}
extra-images: ${{ env.GHCR_IMAGE }}:${{ env.MAJOR_TAG }}-amd64,${{ env.GHCR_IMAGE }}:${{ env.MAJOR_TAG }}-arm64v8,${{ env.GHCR_IMAGE }}:${{ env.MAJOR_TAG }}-armv7,${{ env.GHCR_IMAGE }}:${{ env.MAJOR_TAG }}-i386
push: true
# manifest for :latest tag
- name: Create and push GHCR manifest (:latest)
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ env.GHCR_IMAGE }}:${{ env.LATEST_TAG }}
extra-images: ${{ env.GHCR_IMAGE }}:${{ env.LATEST_TAG }}-amd64,${{ env.GHCR_IMAGE }}:${{ env.LATEST_TAG }}-arm64v8,${{ env.GHCR_IMAGE }}:${{ env.LATEST_TAG }}-armv7,${{ env.GHCR_IMAGE }}:${{ env.LATEST_TAG }}-i386
push: true
# docker build and push of classic images
# docker build and push of single-arch images
docker-classic:
name: Docker push classic - ${{ matrix.job.name }}
name: Docker push - ${{ matrix.job.name }}
needs: build
runs-on: ubuntu-18.04
runs-on: ubuntu-22.04
strategy:
fail-fast: false
matrix:
job:
- { name: "amd64", docker_platform: "linux/amd64", tag: "latest" }
- { name: "arm64v8", docker_platform: "linux/arm64", tag: "latest-arm64v8" }
- { name: "amd64", docker_platform: "linux/amd64" }
- { name: "arm64v8", docker_platform: "linux/arm64" }
- { name: "armv7", docker_platform: "linux/arm/v7" }
steps:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Download binaries
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: binaries-linux-${{ matrix.job.name }}
path: docker-classic/
- name: Make binaries executable
run: chmod -v a+x docker-classic/hbb*
run: chmod -v a+x docker-classic/*
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
@@ -358,8 +422,16 @@ jobs:
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Log in to GHCR
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
registry: ghcr.io
username: rustdesk
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
@@ -367,16 +439,114 @@ jobs:
with:
images: registry.hub.docker.com/${{ secrets.DOCKER_IMAGE_CLASSIC }}
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Build and push Docker image
uses: docker/build-push-action@v3
uses: docker/build-push-action@v5
with:
context: "./docker-classic"
platforms: ${{ matrix.job.docker_platform }}
push: true
provenance: false
tags: |
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ matrix.job.tag }}
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-${{ matrix.job.name }}
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-${{ matrix.job.name }}
${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-${{ matrix.job.name }}
${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-${{ matrix.job.name }}
${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-${{ matrix.job.name }}
${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-${{ matrix.job.name }}
labels: ${{ steps.meta.outputs.labels }}
# docker build and push of multiarch images
docker-manifest-classic:
name: Docker manifest
needs: docker
runs-on: ubuntu-22.04
steps:
- name: Log in to Docker Hub
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Get git tag
id: vars
run: |
T=${GITHUB_REF#refs/*/}
M=${T%%.*}
echo "GIT_TAG=$T" >> $GITHUB_ENV
echo "MAJOR_TAG=$M" >> $GITHUB_ENV
- name: Log in to GHCR
if: github.event_name != 'pull_request'
uses: docker/login-action@v2
with:
registry: ghcr.io
username: rustdesk
password: ${{ secrets.GITHUB_TOKEN }}
# manifest for :1.2.3 tag
# this has to run only if invoked by a new tag
- name: Create and push manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@0.4.3
if: github.event_name != 'workflow_dispatch'
with:
base-image: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-amd64,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-armv7
push: true
# manifest for :1 tag (major release)
- name: Create and push manifest (:major)
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-amd64,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-armv7
push: true
# manifest for :latest tag
- name: Create and push manifest (:latest)
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}
extra-images: ${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-amd64,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-arm64v8,${{ secrets.DOCKER_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-armv7
push: true
# GHCR manifests
# manifest for :1.2.3 tag
- name: Create and push GHCR manifest (:ve.rs.ion)
uses: Noelware/docker-manifest-action@0.4.3
if: github.event_name != 'workflow_dispatch'
with:
base-image: ${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}
extra-images: ${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-amd64,${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-arm64v8,${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.GIT_TAG }}-armv7
push: true
# manifest for :1 tag (major release)
- name: Create and push GHCR manifest (:major)
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}
extra-images: ${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-amd64,${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-arm64v8,${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.MAJOR_TAG }}-armv7
push: true
# manifest for :latest tag
- name: Create and push GHCR manifest (:latest)
uses: Noelware/docker-manifest-action@0.4.3
with:
base-image: ${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}
extra-images: ${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-amd64,${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-arm64v8,${{ env.GHCR_IMAGE_CLASSIC }}:${{ env.LATEST_TAG }}-armv7
push: true
deb-package:
@@ -396,7 +566,9 @@ jobs:
- name: Checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
@@ -407,7 +579,7 @@ jobs:
mkdir -p debian-build/${{ matrix.job.name }}/bin
- name: Download binaries
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: binaries-linux-${{ matrix.job.name }}
path: debian-build/${{ matrix.job.name }}/bin


@@ -1,72 +0,0 @@
name: test
on:
push:
branches: [ "master" ]
pull_request:
branches: [ "master" ]
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: check
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: test
args: --all
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: build
- uses: actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check
clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: clippy
- uses: Swatinem/rust-cache@v2
- uses: actions-rs/cargo@v1
with:
command: clippy
args: --all -- -D warnings

1
.gitignore vendored

@@ -9,3 +9,4 @@ debian/debhelper-build-stamp
src/version.rs
db_v2.sqlite3
test.*
.idea

3
.gitmodules vendored Normal file

@@ -0,0 +1,3 @@
[submodule "libs/hbb_common"]
path = libs/hbb_common
url = https://github.com/rustdesk/hbb_common

1762
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
[package]
name = "hbbs"
version = "1.1.7-1"
version = "1.1.15"
authors = ["rustdesk <info@rustdesk.com>"]
edition = "2021"
build = "build.rs"
@@ -26,7 +26,7 @@ clap = "2"
rust-ini = "0.18"
minreq = { version = "2.4", features = ["punycode"] }
machine-uid = "0.2"
mac_address = "1.1"
mac_address = "1.1.5"
whoami = "1.2"
base64 = "0.13"
axum = { version = "0.5", features = ["headers"] }
@@ -46,11 +46,19 @@ tungstenite = "0.17"
regex = "1.4"
tower-http = { version = "0.3", features = ["fs", "trace", "cors"] }
http = "0.2"
flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset"] }
flexi_logger = { version = "0.22", features = ["async", "use_chrono_for_offset", "dont_minimize_extra_stacks"] }
ipnetwork = "0.20"
local-ip-address = "0.4"
local-ip-address = "0.5.1"
dns-lookup = "1.0.8"
ping = "0.4.0"
flate2 = "1.0"
[target.'cfg(any(target_os = "macos", target_os = "windows"))'.dependencies]
# https://github.com/rustdesk/rustdesk-server-pro/issues/189, using native-tls for better tls support
reqwest = { git = "https://github.com/rustdesk-org/reqwest", features = ["blocking", "socks", "json", "native-tls", "gzip"], default-features=false }
[target.'cfg(not(any(target_os = "macos", target_os = "windows")))'.dependencies]
reqwest = { git = "https://github.com/rustdesk-org/reqwest", features = ["blocking", "socks", "json", "rustls-tls", "rustls-tls-native-roots", "gzip"], default-features=false }
[build-dependencies]
hbb_common = { path = "libs/hbb_common" }
@@ -58,3 +66,13 @@ hbb_common = { path = "libs/hbb_common" }
[workspace]
members = ["libs/hbb_common"]
exclude = ["ui"]
#https://github.com/johnthagen/min-sized-rust
#https://doc.rust-lang.org/cargo/reference/profiles.html#default-profiles
[profile.release]
lto = true
codegen-units = 1
panic = 'abort'
strip = true
#opt-level = 'z' # only have smaller size after strip # Default is 3, better performance
#rpath = true # Not needed

314
README.md

@@ -8,6 +8,8 @@
[**FAQ**](https://github.com/rustdesk/rustdesk/wiki/FAQ)
[**How to migrate OSS to Pro**](https://rustdesk.com/docs/en/self-host/rustdesk-server-pro/installscript/#convert-from-open-source)
Self-host your own RustDesk server, it is free and open source.
## How to build manually
@@ -22,314 +24,12 @@ Three executables will be generated in target/release.
- hbbr - RustDesk relay server
- rustdesk-utils - RustDesk CLI utilities
You can find updated binaries on the [releases](https://github.com/rustdesk/rustdesk-server/releases) page.
You can find updated binaries on the [Releases](https://github.com/rustdesk/rustdesk-server/releases) page.
If you wanna develop your own server, [rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) might be a better and simpler start for you than this repo.
If you want extra features, [RustDesk Server Pro](https://rustdesk.com/pricing.html) might suit you better.
## Docker images
If you want to develop your own server, [rustdesk-server-demo](https://github.com/rustdesk/rustdesk-server-demo) might be a better and simpler start for you than this repo.
Docker images are automatically generated and published on every GitHub release. We have two kinds of images.
## Installation
### Classic image
These images are built against `ubuntu-20.04` with the only addition of the main binaries (`hbbr` and `hbbs`). They're available on [Docker Hub](https://hub.docker.com/r/rustdesk/rustdesk-server/) with these tags:
| architecture | image:tag |
| --- | --- |
| amd64 | `rustdesk/rustdesk-server:latest` |
| arm64v8 | `rustdesk/rustdesk-server:latest-arm64v8` |
You can start these images directly with `docker run` with these commands:
```bash
docker run --name hbbs --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr --net=host -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
or without `--net=host`, but then P2P direct connections cannot work.
For systems using SELinux, replacing `/root` with `/root:z` is required for the containers to run correctly. Alternatively, SELinux container separation can be disabled completely by adding the option `--security-opt label=disable`.
```bash
docker run --name hbbs -p 21115:21115 -p 21116:21116 -p 21116:21116/udp -p 21118:21118 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbs -r <relay-server-ip[:port]>
docker run --name hbbr -p 21117:21117 -p 21119:21119 -v "$PWD/data:/root" -d rustdesk/rustdesk-server:latest hbbr
```
The `relay-server-ip` parameter is the IP address (or dns name) of the server running these containers. The **optional** `port` parameter has to be used if you use a port different than **21117** for `hbbr`.
You can also use docker-compose, using this configuration as a template:
```yaml
version: '3'
networks:
rustdesk-net:
external: false
services:
hbbs:
container_name: hbbs
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21118:21118
image: rustdesk/rustdesk-server:latest
command: hbbs -r rustdesk.example.com:21117
volumes:
- ./data:/root
networks:
- rustdesk-net
depends_on:
- hbbr
restart: unless-stopped
hbbr:
container_name: hbbr
ports:
- 21117:21117
- 21119:21119
image: rustdesk/rustdesk-server:latest
command: hbbr
volumes:
- ./data:/root
networks:
- rustdesk-net
restart: unless-stopped
```
Edit line 16 to point to your relay server (the one listening on port 21117). You can also edit the volume lines (L18 and L33) if you need.
(docker-compose credit goes to @lukebarone and @QuiGonLeong)
## S6-overlay based images
These images are built against `busybox:stable` with the addition of the binaries (both hbbr and hbbs) and [S6-overlay](https://github.com/just-containers/s6-overlay). They're available on [Docker Hub](https://hub.docker.com/r/rustdesk/rustdesk-server-s6/) with these tags:
| architecture | version | image:tag |
| --- | --- | --- |
| multiarch | latest | `rustdesk/rustdesk-server-s6:latest` |
| amd64 | latest | `rustdesk/rustdesk-server-s6:latest-amd64` |
| i386 | latest | `rustdesk/rustdesk-server-s6:latest-i386` |
| arm64v8 | latest | `rustdesk/rustdesk-server-s6:latest-arm64v8` |
| armv7 | latest | `rustdesk/rustdesk-server-s6:latest-armv7` |
| multiarch | 2 | `rustdesk/rustdesk-server-s6:2` |
| amd64 | 2 | `rustdesk/rustdesk-server-s6:2-amd64` |
| i386 | 2 | `rustdesk/rustdesk-server-s6:2-i386` |
| arm64v8 | 2 | `rustdesk/rustdesk-server-s6:2-arm64v8` |
| armv7 | 2 | `rustdesk/rustdesk-server-s6:2-armv7` |
| multiarch | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0` |
| amd64 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-amd64` |
| i386 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-i386` |
| arm64v8 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-arm64v8` |
| armv7 | 2.0.0 | `rustdesk/rustdesk-server-s6:2.0.0-armv7` |
You're strongly encouraged to use the `multiarch` image either with the `major version` or `latest` tag.
The S6-overlay acts as a supervisor and keeps both processes running, so with this image there's no need to have two separate running containers.
You can start these images directly with `docker run` with this command:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
or without `--net=host`, but then P2P direct connections cannot work.
```bash
docker run --name rustdesk-server \
-p 21115:21115 -p 21116:21116 -p 21116:21116/udp \
-p 21117:21117 -p 21118:21118 -p 21119:21119 \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-v "$PWD/data:/data" -d rustdesk/rustdesk-server-s6:latest
```
Or you can use a docker-compose file:
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
volumes:
- ./data:/data
restart: unless-stopped
```
For this container image, you can use these environment variables, **in addition** to the ones specified in the following **ENV variables** section:
| variable | optional | description |
| --- | --- | --- |
| RELAY | no | the IP address/DNS name of the machine running this container |
| ENCRYPTED_ONLY | yes | if set to **"1"** unencrypted connection will not be accepted |
| KEY_PUB | yes | public part of the key pair |
| KEY_PRIV | yes | private part of the key pair |
### Secret management in S6-overlay based images
You can obviously keep the key pair in a docker volume, but best practice says not to write the keys to the filesystem, so we provide a couple of options.
On container startup, the presence of the keypair is checked (`/data/id_ed25519.pub` and `/data/id_ed25519`) and if one of these keys doesn't exist, it's recreated from ENV variables or docker secrets.
Then the validity of the keypair is checked: if the public and private keys don't match, the container will stop.
If you provide no keys, `hbbs` will generate one for you, and it'll place it in the default location.
#### Use ENV to store the key pair
You can use docker environment variables to store the keys. Just follow these examples:
```bash
docker run --name rustdesk-server \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
-e "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ==" \
-e "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE=" \
-v "$PWD/db:/db" -d rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
- "KEY_PRIV=FR2j78IxfwJNR+HjLluQ2Nh7eEryEeIZCwiQDPVe+PaITKyShphHAsPLn7So0OqRs92nGvSRdFJnE2MSyrKTIQ=="
- "KEY_PUB=iEyskoaYRwLDy5+0qNDqkbPdpxr0kXRSZxNjEsqykyE="
volumes:
- ./db:/db
restart: unless-stopped
```
#### Use Docker secrets to store the key pair
You can alternatively use docker secrets to store the keys.
This is useful if you're using **docker-compose** or **docker swarm**.
Just follow these examples:
```bash
cat secrets/id_ed25519.pub | docker secret create key_pub -
cat secrets/id_ed25519 | docker secret create key_priv -
docker service create --name rustdesk-server \
--secret key_priv --secret key_pub \
--net=host \
-e "RELAY=rustdeskrelay.example.com" \
-e "ENCRYPTED_ONLY=1" \
-e "DB_URL=/db/db_v2.sqlite3" \
--mount "type=bind,source=$PWD/db,destination=/db" \
rustdesk/rustdesk-server-s6:latest
```
```yaml
version: '3'
services:
rustdesk-server:
container_name: rustdesk-server
ports:
- 21115:21115
- 21116:21116
- 21116:21116/udp
- 21117:21117
- 21118:21118
- 21119:21119
image: rustdesk/rustdesk-server-s6:latest
environment:
- "RELAY=rustdesk.example.com:21117"
- "ENCRYPTED_ONLY=1"
- "DB_URL=/db/db_v2.sqlite3"
volumes:
- ./db:/db
restart: unless-stopped
secrets:
- key_pub
- key_priv
secrets:
key_pub:
file: secrets/id_ed25519.pub
key_priv:
file: secrets/id_ed25519
```
## How to create a keypair
A keypair is needed for encryption; you can provide it, as explained before, but you need a way to create one.
You can use this command to generate a keypair:
```bash
/usr/bin/rustdesk-utils genkeypair
```
If you don't have (or don't want) the `rustdesk-utils` package installed on your system, you can invoke the same command with docker:
```bash
docker run --rm --entrypoint /usr/bin/rustdesk-utils rustdesk/rustdesk-server-s6:latest genkeypair
```
The output will be something like this:
```text
Public Key: 8BLLhtzUBU/XKAH4mep3p+IX4DSApe7qbAwNH9nv4yA=
Secret Key: egAVd44u33ZEUIDTtksGcHeVeAwywarEdHmf99KM5ajwEsuG3NQFT9coAfiZ6nen4hfgNICl7upsDA0f2e/jIA==
```
## .deb packages
Separate .deb packages are available for each binary; you can find them in the [releases](https://github.com/rustdesk/rustdesk-server/releases).
These packages are meant for the following distributions:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
- Debian 11 bullseye
- Debian 10 buster
## ENV variables
hbbs and hbbr can be configured using these ENV variables.
You can specify the variables as usual or use an `.env` file.
| variable | binary | description |
| --- | --- | --- |
| ALWAYS_USE_RELAY | hbbs | if set to **"Y"** disallows direct peer connection |
| DB_URL | hbbs | path for database file |
| DOWNGRADE_START_CHECK | hbbr | delay (in seconds) before downgrade check |
| DOWNGRADE_THRESHOLD | hbbr | threshold of downgrade check (bit/ms) |
| KEY | hbbs/hbbr | if set force the use of a specific key, if set to **"_"** force the use of any key |
| LIMIT_SPEED | hbbr | speed limit (in Mb/s) |
| PORT | hbbs/hbbr | listening port (21116 for hbbs - 21117 for hbbr) |
| RELAY_SERVERS | hbbs | IP address/DNS name of the machines running hbbr (separated by comma) |
| RUST_LOG | all | set debug level (error\|warn\|info\|debug\|trace) |
| SINGLE_BANDWIDTH | hbbr | max bandwidth for a single connection (in Mb/s) |
| TOTAL_BANDWIDTH | hbbr | max total bandwidth (in Mb/s) |
Please follow this [doc](https://rustdesk.com/docs/en/self-host/rustdesk-server-oss/)

Binary file not shown.

62
debian/README.source vendored Normal file

@@ -0,0 +1,62 @@
#
# As of July/August 2025, building the downloadable Debian packages was done
# by github.com CI, meaning you depend on github.com
# for getting a .deb of the libre source code (four GNU freedoms) you have.
#
# And in those days there was no official Debian package.
#
# What follows are instructions that make it possible to do
# a successful
#
# debuild -uc -us
#
# in the directory target/rustdesk-server
#
# The instructions are written in bash syntax,
#
# bash debian/README.source
#
# should help you immensely
#
# Make sure the submodules are present
git submodule update --init --recursive
rm -rf target/rustdesk-server # avoid clutter from previous iteration
mkdir -p target/rustdesk-server # FYI: target/ is ignored by git
tar cf - \
--exclude .git \
Cargo.toml Cargo.lock \
LICENSE README.md \
db_v2.sqlite3 \
build.rs \
debian \
libs rcd src \
systemd ui \
| ( cd target/rustdesk-server && tar x )
mv target/rustdesk-server/debian/Makefile target/rustdesk-server/
tar cjf target/rustdesk-server-orig.tar.xz \
--strip-components=1 \
target/rustdesk-server
# an .orig.tar.xz tarball exists,
# work on the debian directory
eval $( dpkg-architecture )
sed -e "s/{{ ARCH }}/${DEB_TARGET_ARCH}/" \
debian/control.tpl > target/rustdesk-server/debian/control
cd target/rustdesk-server
dpkg-checkbuilddeps || echo sudo apt install dpkg-dev
debuild -uc -us
#
# For what it is worth:
# Early September 2025 there were
# several WARNINGS and ERRORS from Lintian
#
# l l

67
debian/changelog vendored

@@ -1,3 +1,70 @@
rustdesk-server (1.1.15) UNRELEASED; urgency=medium
* Fix: 127.0.0.1 is not loopback (#515)
* Higher default bandwidth
* Define the HOME env to allow running rootless (#612)
* Add option to log (failed) authentication attempts to enable the usage of tools like fail2ban and crowdsec (#435)
* Connection log query
-- rustdesk <info@rustdesk.com> Sat, 10 Jan 2026 16:04:19 +0800
rustdesk-server (1.1.14) UNRELEASED; urgency=medium
* Fix windows crash
-- rustdesk <info@rustdesk.com> Sat, 25 Jan 2025 20:47:19 +0800
rustdesk-server (1.1.13) UNRELEASED; urgency=medium
* Version check and refactor hbb_common to share with rustdesk client
-- rustdesk <info@rustdesk.com> Tue, 21 Jan 2025 01:33:42 +0800
rustdesk-server (1.1.12) UNRELEASED; urgency=medium
* WS real ip
* Bump s6-overlay to v3.2.0.0 and fix env warnings
-- rustdesk <info@rustdesk.com> Mon, 7 Oct 2024 16:21:36 +0800
rustdesk-server (1.1.11-1) UNRELEASED; urgency=medium
* set reuse port to make restart friendly
* revert hbbr `-k` to not ruin back-compatibility
-- rustdesk <info@rustdesk.com> Fri, 24 May 2024 18:37:11 +0800
rustdesk-server (1.1.11) UNRELEASED; urgency=medium
* change -k to default '-', so you need not to set -k any more
-- rustdesk <info@rustdesk.com> Fri, 24 May 2024 18:09:12 +0800
rustdesk-server (1.1.10-3) UNRELEASED; urgency=medium
* fix on -2
-- rustdesk <info@rustdesk.com> Wed, 31 Jan 2024 11:30:42 +0800
rustdesk-server (1.1.10-2) UNRELEASED; urgency=medium
* fix hangup signal exit when run with nohup
* some minors
-- rustdesk <info@rustdesk.com> Tue, 30 Jan 2024 19:23:36 +0800
rustdesk-server (1.1.9) UNRELEASED; urgency=medium
* remove unsafe
-- rustdesk <info@rustdesk.com> Tue, 5 Dec 2023 17:22:40 +0800
rustdesk-server (1.1.8) UNRELEASED; urgency=medium
* fix test_hbbs and mask in lan
-- rustdesk <info@rustdesk.com> Thu, 6 Jul 2023 00:50:11 +0800
rustdesk-server (1.1.7) UNRELEASED; urgency=medium
* ipv6 support


@@ -4,7 +4,7 @@ set -e
SERVICE=rustdesk-hbbr.service
if [ "$1" = "configure" ]; then
mkdir -p /var/log/rustdesk
mkdir -p /var/log/rustdesk-server
fi
case "$1" in


@@ -6,7 +6,7 @@ SERVICE=rustdesk-hbbr.service
systemctl --system daemon-reload >/dev/null || true
if [ "$1" = "purge" ]; then
rm -rf /var/lib/rustdesk-server/ /var/log/rustdesk/rustdesk-hbbr.* /var/log/rustdesk/rustdesk-hbbs.*
rm -rf /var/log/rustdesk-server/rustdesk-hbbr.*
deb-systemd-helper purge "${SERVICE}" >/dev/null || true
deb-systemd-helper unmask "${SERVICE}" >/dev/null || true
fi


@@ -4,7 +4,7 @@ set -e
SERVICE=rustdesk-hbbs.service
if [ "$1" = "configure" ]; then
mkdir -p /var/log/rustdesk
mkdir -p /var/log/rustdesk-server
fi
case "$1" in


@@ -6,7 +6,7 @@ SERVICE=rustdesk-hbbs.service
systemctl --system daemon-reload >/dev/null || true
if [ "$1" = "purge" ]; then
rm -rf /var/lib/rustdesk-server/
rm -rf /var/lib/rustdesk-server/ /var/log/rustdesk-server/rustdesk-hbbs.*
deb-systemd-helper purge "${SERVICE}" >/dev/null || true
deb-systemd-helper unmask "${SERVICE}" >/dev/null || true
fi


@@ -1,4 +1,5 @@
FROM ubuntu:20.04
COPY hbbs /usr/bin/hbbs
COPY hbbr /usr/bin/hbbr
WORKDIR /root
FROM scratch
COPY hbbs /usr/bin/hbbs
COPY hbbr /usr/bin/hbbr
WORKDIR /root
ENV HOME=/root


@@ -1,6 +1,6 @@
FROM busybox:stable
ARG S6_OVERLAY_VERSION=3.1.1.2
ARG S6_OVERLAY_VERSION=3.2.0.0
ARG S6_ARCH=x86_64
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-${S6_ARCH}.tar.xz /tmp
@@ -12,8 +12,8 @@ RUN \
COPY rootfs /
ENV RELAY relay.example.com
ENV ENCRYPTED_ONLY 0
ENV RELAY=relay.example.com
ENV ENCRYPTED_ONLY=0
EXPOSE 21115 21116 21116/udp 21117 21118 21119

87
kubernetes/example.yaml Normal file

@@ -0,0 +1,87 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rustdesk-server
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: rustdesk
  template:
    metadata:
      labels:
        app: rustdesk
    spec:
      containers:
        # id Server (HBBS)
        - name: hbbs
          image: rustdesk/rustdesk-server:latest
          env:
            - name: RELAY_HOST
              value: "rustdesk.yourdomain.tld"
          command: ["hbbs", "-r", "$(RELAY_HOST):21117", "-k", "_"]
          volumeMounts:
            - mountPath: /root
              name: rustdesk-data
          ports:
            - containerPort: 21115
            - containerPort: 21116
              protocol: TCP
            - containerPort: 21116
              protocol: UDP
            - containerPort: 21118
        # Relay Server (HBBR)
        - name: hbbr
          image: rustdesk/rustdesk-server:latest
          command: ["hbbr", "-k", "_"]
          volumeMounts:
            - mountPath: /root
              name: rustdesk-data
          ports:
            - containerPort: 21117
            - containerPort: 21119
      volumes:
        - name: rustdesk-data
          persistentVolumeClaim:
            claimName: rustdesk-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: rustdesk-service
spec:
  type: LoadBalancer # Or NodePort
  selector:
    app: rustdesk
  ports:
    - name: hbbs-tcp
      port: 21115
      targetPort: 21115
      protocol: TCP
    - name: hbbs-udp
      port: 21116
      targetPort: 21116
      protocol: UDP
    - name: hbbs-tcp-main
      port: 21116
      targetPort: 21116
      protocol: TCP
    - name: hbbr-tcp
      port: 21117
      targetPort: 21117
      protocol: TCP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rustdesk-pvc
spec:
  accessModes:
    - ReadWriteOnce # RWM would be available only with pro version
  resources:
    requests:
      storage: 1Gi

1
libs/hbb_common Submodule

Submodule libs/hbb_common added at 83419b6549


@@ -1,4 +0,0 @@
/target
**/*.rs.bk
Cargo.lock
src/protos/


@@ -1,51 +0,0 @@
[package]
name = "hbb_common"
version = "0.1.0"
authors = ["open-trade <info@opentradesolutions.com>"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
protobuf = { version = "3.1", features = ["with-bytes"] }
tokio = { version = "1.20", features = ["full"] }
tokio-util = { version = "0.7", features = ["full"] }
futures = "0.3"
bytes = { version = "1.2", features = ["serde"] }
log = "0.4"
env_logger = "0.9"
socket2 = { version = "0.3", features = ["reuseport"] }
zstd = "0.9"
quinn = {version = "0.8", optional = true }
anyhow = "1.0"
futures-util = "0.3"
directories-next = "2.0"
rand = "0.8"
serde_derive = "1.0"
serde = "1.0"
lazy_static = "1.4"
confy = { git = "https://github.com/open-trade/confy" }
dirs-next = "2.0"
filetime = "0.2"
sodiumoxide = "0.2"
regex = "1.4"
tokio-socks = { git = "https://github.com/open-trade/tokio-socks" }
chrono = "0.4"
[target.'cfg(not(any(target_os = "android", target_os = "ios")))'.dependencies]
mac_address = "1.1"
machine-uid = "0.2"
[features]
quic = []
flatpak = []
[build-dependencies]
protobuf-codegen = { version = "3.1" }
[target.'cfg(target_os = "windows")'.dependencies]
winapi = { version = "0.3", features = ["winuser"] }
[dev-dependencies]
toml = "0.5"
serde_json = "1.0"


@@ -1,14 +0,0 @@
fn main() {
let out_dir = format!("{}/protos", std::env::var("OUT_DIR").unwrap());
std::fs::create_dir_all(&out_dir).unwrap();
protobuf_codegen::Codegen::new()
.pure()
.out_dir(out_dir)
.inputs(["protos/rendezvous.proto", "protos/message.proto"])
.include("protos")
.customize(protobuf_codegen::Customize::default().tokio_bytes(true))
.run()
.expect("Codegen failed.");
}


@@ -1,638 +0,0 @@
syntax = "proto3";
package hbb;
message EncodedVideoFrame {
bytes data = 1;
bool key = 2;
int64 pts = 3;
}
message EncodedVideoFrames { repeated EncodedVideoFrame frames = 1; }
message RGB { bool compress = 1; }
// planes data send directly in binary for better use arraybuffer on web
message YUV {
bool compress = 1;
int32 stride = 2;
}
message VideoFrame {
oneof union {
EncodedVideoFrames vp9s = 6;
RGB rgb = 7;
YUV yuv = 8;
EncodedVideoFrames h264s = 10;
EncodedVideoFrames h265s = 11;
}
int64 timestamp = 9;
}
message IdPk {
string id = 1;
bytes pk = 2;
}
message DisplayInfo {
sint32 x = 1;
sint32 y = 2;
int32 width = 3;
int32 height = 4;
string name = 5;
bool online = 6;
bool cursor_embedded = 7;
}
message PortForward {
string host = 1;
int32 port = 2;
}
message FileTransfer {
string dir = 1;
bool show_hidden = 2;
}
message LoginRequest {
string username = 1;
bytes password = 2;
string my_id = 4;
string my_name = 5;
OptionMessage option = 6;
oneof union {
FileTransfer file_transfer = 7;
PortForward port_forward = 8;
}
bool video_ack_required = 9;
uint64 session_id = 10;
string version = 11;
}
message ChatMessage { string text = 1; }
message Features {
bool privacy_mode = 1;
}
message SupportedEncoding {
bool h264 = 1;
bool h265 = 2;
}
message PeerInfo {
string username = 1;
string hostname = 2;
string platform = 3;
repeated DisplayInfo displays = 4;
int32 current_display = 5;
bool sas_enabled = 6;
string version = 7;
int32 conn_id = 8;
Features features = 9;
SupportedEncoding encoding = 10;
}
message LoginResponse {
oneof union {
string error = 1;
PeerInfo peer_info = 2;
}
}
message MouseEvent {
int32 mask = 1;
sint32 x = 2;
sint32 y = 3;
repeated ControlKey modifiers = 4;
}
enum KeyboardMode{
Legacy = 0;
Map = 1;
Translate = 2;
Auto = 3;
}
enum ControlKey {
Unknown = 0;
Alt = 1;
Backspace = 2;
CapsLock = 3;
Control = 4;
Delete = 5;
DownArrow = 6;
End = 7;
Escape = 8;
F1 = 9;
F10 = 10;
F11 = 11;
F12 = 12;
F2 = 13;
F3 = 14;
F4 = 15;
F5 = 16;
F6 = 17;
F7 = 18;
F8 = 19;
F9 = 20;
Home = 21;
LeftArrow = 22;
/// meta key (also known as "windows", "super", and "command")
Meta = 23;
/// option key on macOS (alt key on Linux and Windows)
Option = 24; // deprecated, use Alt instead
PageDown = 25;
PageUp = 26;
Return = 27;
RightArrow = 28;
Shift = 29;
Space = 30;
Tab = 31;
UpArrow = 32;
Numpad0 = 33;
Numpad1 = 34;
Numpad2 = 35;
Numpad3 = 36;
Numpad4 = 37;
Numpad5 = 38;
Numpad6 = 39;
Numpad7 = 40;
Numpad8 = 41;
Numpad9 = 42;
Cancel = 43;
Clear = 44;
Menu = 45; // deprecated, use Alt instead
Pause = 46;
Kana = 47;
Hangul = 48;
Junja = 49;
Final = 50;
Hanja = 51;
Kanji = 52;
Convert = 53;
Select = 54;
Print = 55;
Execute = 56;
Snapshot = 57;
Insert = 58;
Help = 59;
Sleep = 60;
Separator = 61;
Scroll = 62;
NumLock = 63;
RWin = 64;
Apps = 65;
Multiply = 66;
Add = 67;
Subtract = 68;
Decimal = 69;
Divide = 70;
Equals = 71;
NumpadEnter = 72;
RShift = 73;
RControl = 74;
RAlt = 75;
CtrlAltDel = 100;
LockScreen = 101;
}
message KeyEvent {
bool down = 1;
bool press = 2;
oneof union {
ControlKey control_key = 3;
uint32 chr = 4;
uint32 unicode = 5;
string seq = 6;
}
repeated ControlKey modifiers = 8;
KeyboardMode mode = 9;
}
message CursorData {
uint64 id = 1;
sint32 hotx = 2;
sint32 hoty = 3;
int32 width = 4;
int32 height = 5;
bytes colors = 6;
}
message CursorPosition {
sint32 x = 1;
sint32 y = 2;
}
message Hash {
string salt = 1;
string challenge = 2;
}
message Clipboard {
bool compress = 1;
bytes content = 2;
}
enum FileType {
Dir = 0;
DirLink = 2;
DirDrive = 3;
File = 4;
FileLink = 5;
}
message FileEntry {
FileType entry_type = 1;
string name = 2;
bool is_hidden = 3;
uint64 size = 4;
uint64 modified_time = 5;
}
message FileDirectory {
int32 id = 1;
string path = 2;
repeated FileEntry entries = 3;
}
message ReadDir {
string path = 1;
bool include_hidden = 2;
}
message ReadAllFiles {
int32 id = 1;
string path = 2;
bool include_hidden = 3;
}
message FileAction {
oneof union {
ReadDir read_dir = 1;
FileTransferSendRequest send = 2;
FileTransferReceiveRequest receive = 3;
FileDirCreate create = 4;
FileRemoveDir remove_dir = 5;
FileRemoveFile remove_file = 6;
ReadAllFiles all_files = 7;
FileTransferCancel cancel = 8;
FileTransferSendConfirmRequest send_confirm = 9;
}
}
message FileTransferCancel { int32 id = 1; }
message FileResponse {
oneof union {
FileDirectory dir = 1;
FileTransferBlock block = 2;
FileTransferError error = 3;
FileTransferDone done = 4;
FileTransferDigest digest = 5;
}
}
message FileTransferDigest {
int32 id = 1;
sint32 file_num = 2;
uint64 last_modified = 3;
uint64 file_size = 4;
bool is_upload = 5;
}
message FileTransferBlock {
int32 id = 1;
sint32 file_num = 2;
bytes data = 3;
bool compressed = 4;
uint32 blk_id = 5;
}
message FileTransferError {
int32 id = 1;
string error = 2;
sint32 file_num = 3;
}
message FileTransferSendRequest {
int32 id = 1;
string path = 2;
bool include_hidden = 3;
int32 file_num = 4;
}
message FileTransferSendConfirmRequest {
int32 id = 1;
sint32 file_num = 2;
oneof union {
bool skip = 3;
uint32 offset_blk = 4;
}
}
message FileTransferDone {
int32 id = 1;
sint32 file_num = 2;
}
message FileTransferReceiveRequest {
int32 id = 1;
string path = 2; // path written to
repeated FileEntry files = 3;
int32 file_num = 4;
}
message FileRemoveDir {
int32 id = 1;
string path = 2;
bool recursive = 3;
}
message FileRemoveFile {
int32 id = 1;
string path = 2;
sint32 file_num = 3;
}
message FileDirCreate {
int32 id = 1;
string path = 2;
}
// main logic from freeRDP
message CliprdrMonitorReady {
}
message CliprdrFormat {
int32 id = 2;
string format = 3;
}
message CliprdrServerFormatList {
repeated CliprdrFormat formats = 2;
}
message CliprdrServerFormatListResponse {
int32 msg_flags = 2;
}
message CliprdrServerFormatDataRequest {
int32 requested_format_id = 2;
}
message CliprdrServerFormatDataResponse {
int32 msg_flags = 2;
bytes format_data = 3;
}
message CliprdrFileContentsRequest {
int32 stream_id = 2;
int32 list_index = 3;
int32 dw_flags = 4;
int32 n_position_low = 5;
int32 n_position_high = 6;
int32 cb_requested = 7;
bool have_clip_data_id = 8;
int32 clip_data_id = 9;
}
message CliprdrFileContentsResponse {
int32 msg_flags = 3;
int32 stream_id = 4;
bytes requested_data = 5;
}
message Cliprdr {
oneof union {
CliprdrMonitorReady ready = 1;
CliprdrServerFormatList format_list = 2;
CliprdrServerFormatListResponse format_list_response = 3;
CliprdrServerFormatDataRequest format_data_request = 4;
CliprdrServerFormatDataResponse format_data_response = 5;
CliprdrFileContentsRequest file_contents_request = 6;
CliprdrFileContentsResponse file_contents_response = 7;
}
}
message SwitchDisplay {
int32 display = 1;
sint32 x = 2;
sint32 y = 3;
int32 width = 4;
int32 height = 5;
bool cursor_embedded = 6;
}
message PermissionInfo {
enum Permission {
Keyboard = 0;
Clipboard = 2;
Audio = 3;
File = 4;
Restart = 5;
Recording = 6;
}
Permission permission = 1;
bool enabled = 2;
}
enum ImageQuality {
NotSet = 0;
Low = 2;
Balanced = 3;
Best = 4;
}
message VideoCodecState {
enum PreferCodec {
Auto = 0;
VPX = 1;
H264 = 2;
H265 = 3;
}
int32 score_vpx = 1;
int32 score_h264 = 2;
int32 score_h265 = 3;
PreferCodec prefer = 4;
}
message OptionMessage {
enum BoolOption {
NotSet = 0;
No = 1;
Yes = 2;
}
ImageQuality image_quality = 1;
BoolOption lock_after_session_end = 2;
BoolOption show_remote_cursor = 3;
BoolOption privacy_mode = 4;
BoolOption block_input = 5;
int32 custom_image_quality = 6;
BoolOption disable_audio = 7;
BoolOption disable_clipboard = 8;
BoolOption enable_file_transfer = 9;
VideoCodecState video_codec_state = 10;
int32 custom_fps = 11;
}
message TestDelay {
int64 time = 1;
bool from_client = 2;
uint32 last_delay = 3;
uint32 target_bitrate = 4;
}
message PublicKey {
bytes asymmetric_value = 1;
bytes symmetric_value = 2;
}
message SignedId { bytes id = 1; }
message AudioFormat {
uint32 sample_rate = 1;
uint32 channels = 2;
}
message AudioFrame {
bytes data = 1;
int64 timestamp = 2;
}
// Notify peer to show message box.
message MessageBox {
// Message type. Refer to flutter/lib/common.dart/msgBox().
string msgtype = 1;
string title = 2;
// English
string text = 3;
// If not empty, the message box shows a button that follows the link.
// The link cannot be a raw http URL; it must be either the key of an
// http URL configured on the peer side, or "rustdesk://*" (jumps within the app).
string link = 4;
}
message BackNotification {
// no need to consider the case where input is blocked by someone else
enum BlockInputState {
BlkStateUnknown = 0;
BlkOnSucceeded = 2;
BlkOnFailed = 3;
BlkOffSucceeded = 4;
BlkOffFailed = 5;
}
enum PrivacyModeState {
PrvStateUnknown = 0;
// Privacy mode on by someone else
PrvOnByOther = 2;
// Privacy mode is not supported on the remote side
PrvNotSupported = 3;
// Privacy mode on by self
PrvOnSucceeded = 4;
// Privacy mode on by self, but denied
PrvOnFailedDenied = 5;
// Some plugins are not found
PrvOnFailedPlugin = 6;
// Privacy mode on by self, but failed
PrvOnFailed = 7;
// Privacy mode off by self
PrvOffSucceeded = 8;
// Ctrl + P
PrvOffByPeer = 9;
// Privacy mode off by self, but failed
PrvOffFailed = 10;
PrvOffUnknown = 11;
}
oneof union {
PrivacyModeState privacy_mode_state = 1;
BlockInputState block_input_state = 2;
}
}
message ElevationRequestWithLogon {
string username = 1;
string password = 2;
}
message ElevationRequest {
oneof union {
bool direct = 1;
ElevationRequestWithLogon logon = 2;
}
}
message SwitchSidesRequest {
bytes uuid = 1;
}
message SwitchSidesResponse {
bytes uuid = 1;
LoginRequest lr = 2;
}
message SwitchBack {}
message Misc {
oneof union {
ChatMessage chat_message = 4;
SwitchDisplay switch_display = 5;
PermissionInfo permission_info = 6;
OptionMessage option = 7;
AudioFormat audio_format = 8;
string close_reason = 9;
bool refresh_video = 10;
bool video_received = 12;
BackNotification back_notification = 13;
bool restart_remote_device = 14;
bool uac = 15;
bool foreground_window_elevated = 16;
bool stop_service = 17;
ElevationRequest elevation_request = 18;
string elevation_response = 19;
bool portable_service_running = 20;
SwitchSidesRequest switch_sides_request = 21;
SwitchBack switch_back = 22;
}
}
message VoiceCallRequest {
int64 req_timestamp = 1;
// Indicates whether the request is a connect action or a disconnect action.
bool is_connect = 2;
}
message VoiceCallResponse {
bool accepted = 1;
int64 req_timestamp = 2; // Should copy from [VoiceCallRequest::req_timestamp].
int64 ack_timestamp = 3;
}
message Message {
oneof union {
SignedId signed_id = 3;
PublicKey public_key = 4;
TestDelay test_delay = 5;
VideoFrame video_frame = 6;
LoginRequest login_request = 7;
LoginResponse login_response = 8;
Hash hash = 9;
MouseEvent mouse_event = 10;
AudioFrame audio_frame = 11;
CursorData cursor_data = 12;
CursorPosition cursor_position = 13;
uint64 cursor_id = 14;
KeyEvent key_event = 15;
Clipboard clipboard = 16;
FileAction file_action = 17;
FileResponse file_response = 18;
Misc misc = 19;
Cliprdr cliprdr = 20;
MessageBox message_box = 21;
SwitchSidesResponse switch_sides_response = 22;
VoiceCallRequest voice_call_request = 23;
VoiceCallResponse voice_call_response = 24;
}
}

View File

@@ -1,182 +0,0 @@
syntax = "proto3";
package hbb;
message RegisterPeer {
string id = 1;
int32 serial = 2;
}
enum ConnType {
DEFAULT_CONN = 0;
FILE_TRANSFER = 1;
PORT_FORWARD = 2;
RDP = 3;
}
message RegisterPeerResponse { bool request_pk = 2; }
message PunchHoleRequest {
string id = 1;
NatType nat_type = 2;
string licence_key = 3;
ConnType conn_type = 4;
string token = 5;
}
message PunchHole {
bytes socket_addr = 1;
string relay_server = 2;
NatType nat_type = 3;
}
message TestNatRequest {
int32 serial = 1;
}
// Per testing, uint and int encode identically; int is inefficient for negative values, so sint is used where negatives are possible.
message TestNatResponse {
int32 port = 1;
ConfigUpdate cu = 2; // for mobile
}
enum NatType {
UNKNOWN_NAT = 0;
ASYMMETRIC = 1;
SYMMETRIC = 2;
}
message PunchHoleSent {
bytes socket_addr = 1;
string id = 2;
string relay_server = 3;
NatType nat_type = 4;
string version = 5;
}
message RegisterPk {
string id = 1;
bytes uuid = 2;
bytes pk = 3;
string old_id = 4;
}
message RegisterPkResponse {
enum Result {
OK = 0;
UUID_MISMATCH = 2;
ID_EXISTS = 3;
TOO_FREQUENT = 4;
INVALID_ID_FORMAT = 5;
NOT_SUPPORT = 6;
SERVER_ERROR = 7;
}
Result result = 1;
}
message PunchHoleResponse {
bytes socket_addr = 1;
bytes pk = 2;
enum Failure {
ID_NOT_EXIST = 0;
OFFLINE = 2;
LICENSE_MISMATCH = 3;
LICENSE_OVERUSE = 4;
}
Failure failure = 3;
string relay_server = 4;
oneof union {
NatType nat_type = 5;
bool is_local = 6;
}
string other_failure = 7;
}
message ConfigUpdate {
int32 serial = 1;
repeated string rendezvous_servers = 2;
}
message RequestRelay {
string id = 1;
string uuid = 2;
bytes socket_addr = 3;
string relay_server = 4;
bool secure = 5;
string licence_key = 6;
ConnType conn_type = 7;
string token = 8;
}
message RelayResponse {
bytes socket_addr = 1;
string uuid = 2;
string relay_server = 3;
oneof union {
string id = 4;
bytes pk = 5;
}
string refuse_reason = 6;
string version = 7;
}
message SoftwareUpdate { string url = 1; }
// If both peers are on the same intranet, hole punching fails for both UDP and TCP;
// some routers even return a connection error such as
// { kind: Other, error: "could not resolve to any address" } when a device connects to itself,
// so we request the local address and connect to it directly.
message FetchLocalAddr {
bytes socket_addr = 1;
string relay_server = 2;
}
message LocalAddr {
bytes socket_addr = 1;
bytes local_addr = 2;
string relay_server = 3;
string id = 4;
string version = 5;
}
message PeerDiscovery {
string cmd = 1;
string mac = 2;
string id = 3;
string username = 4;
string hostname = 5;
string platform = 6;
string misc = 7;
}
message OnlineRequest {
string id = 1;
repeated string peers = 2;
}
message OnlineResponse {
bytes states = 1;
}
message RendezvousMessage {
oneof union {
RegisterPeer register_peer = 6;
RegisterPeerResponse register_peer_response = 7;
PunchHoleRequest punch_hole_request = 8;
PunchHole punch_hole = 9;
PunchHoleSent punch_hole_sent = 10;
PunchHoleResponse punch_hole_response = 11;
FetchLocalAddr fetch_local_addr = 12;
LocalAddr local_addr = 13;
ConfigUpdate configure_update = 14;
RegisterPk register_pk = 15;
RegisterPkResponse register_pk_response = 16;
SoftwareUpdate software_update = 17;
RequestRelay request_relay = 18;
RelayResponse relay_response = 19;
TestNatRequest test_nat_request = 20;
TestNatResponse test_nat_response = 21;
PeerDiscovery peer_discovery = 22;
OnlineRequest online_request = 23;
OnlineResponse online_response = 24;
}
}

View File

@@ -1,280 +0,0 @@
use bytes::{Buf, BufMut, Bytes, BytesMut};
use std::io;
use tokio_util::codec::{Decoder, Encoder};
#[derive(Debug, Clone, Copy)]
pub struct BytesCodec {
state: DecodeState,
raw: bool,
max_packet_length: usize,
}
#[derive(Debug, Clone, Copy)]
enum DecodeState {
Head,
Data(usize),
}
impl Default for BytesCodec {
fn default() -> Self {
Self::new()
}
}
impl BytesCodec {
pub fn new() -> Self {
Self {
state: DecodeState::Head,
raw: false,
max_packet_length: usize::MAX,
}
}
pub fn set_raw(&mut self) {
self.raw = true;
}
pub fn set_max_packet_length(&mut self, n: usize) {
self.max_packet_length = n;
}
fn decode_head(&mut self, src: &mut BytesMut) -> io::Result<Option<usize>> {
if src.is_empty() {
return Ok(None);
}
let head_len = ((src[0] & 0x3) + 1) as usize;
if src.len() < head_len {
return Ok(None);
}
let mut n = src[0] as usize;
if head_len > 1 {
n |= (src[1] as usize) << 8;
}
if head_len > 2 {
n |= (src[2] as usize) << 16;
}
if head_len > 3 {
n |= (src[3] as usize) << 24;
}
n >>= 2;
if n > self.max_packet_length {
return Err(io::Error::new(io::ErrorKind::InvalidData, "Too big packet"));
}
src.advance(head_len);
src.reserve(n);
Ok(Some(n))
}
fn decode_data(&self, n: usize, src: &mut BytesMut) -> io::Result<Option<BytesMut>> {
if src.len() < n {
return Ok(None);
}
Ok(Some(src.split_to(n)))
}
}
impl Decoder for BytesCodec {
type Item = BytesMut;
type Error = io::Error;
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<BytesMut>, io::Error> {
if self.raw {
if !src.is_empty() {
let len = src.len();
return Ok(Some(src.split_to(len)));
} else {
return Ok(None);
}
}
let n = match self.state {
DecodeState::Head => match self.decode_head(src)? {
Some(n) => {
self.state = DecodeState::Data(n);
n
}
None => return Ok(None),
},
DecodeState::Data(n) => n,
};
match self.decode_data(n, src)? {
Some(data) => {
self.state = DecodeState::Head;
Ok(Some(data))
}
None => Ok(None),
}
}
}
impl Encoder<Bytes> for BytesCodec {
type Error = io::Error;
fn encode(&mut self, data: Bytes, buf: &mut BytesMut) -> Result<(), io::Error> {
if self.raw {
buf.reserve(data.len());
buf.put(data);
return Ok(());
}
if data.len() <= 0x3F {
buf.put_u8((data.len() << 2) as u8);
} else if data.len() <= 0x3FFF {
buf.put_u16_le((data.len() << 2) as u16 | 0x1);
} else if data.len() <= 0x3FFFFF {
let h = (data.len() << 2) as u32 | 0x2;
buf.put_u16_le((h & 0xFFFF) as u16);
buf.put_u8((h >> 16) as u8);
} else if data.len() <= 0x3FFFFFFF {
buf.put_u32_le((data.len() << 2) as u32 | 0x3);
} else {
return Err(io::Error::new(io::ErrorKind::InvalidInput, "Overflow"));
}
buf.extend(data);
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_codec1() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3F, 1);
assert!(codec.encode(bytes.into(), &mut buf).is_ok());
let buf_saved = buf.clone();
assert_eq!(buf.len(), 0x3F + 1);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3F);
assert_eq!(res[0], 1);
} else {
panic!();
}
let mut codec2 = BytesCodec::new();
let mut buf2 = BytesMut::new();
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
panic!();
}
buf2.extend(&buf_saved[0..1]);
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
panic!();
}
buf2.extend(&buf_saved[1..]);
if let Ok(Some(res)) = codec2.decode(&mut buf2) {
assert_eq!(res.len(), 0x3F);
assert_eq!(res[0], 1);
} else {
panic!();
}
}
#[test]
fn test_codec2() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
assert!(codec.encode("".into(), &mut buf).is_ok());
assert_eq!(buf.len(), 1);
bytes.resize(0x3F + 1, 2);
assert!(codec.encode(bytes.into(), &mut buf).is_ok());
assert_eq!(buf.len(), 0x3F + 2 + 2);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0);
} else {
panic!();
}
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3F + 1);
assert_eq!(res[0], 2);
} else {
panic!();
}
}
#[test]
fn test_codec3() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3F - 1, 3);
assert!(codec.encode(bytes.into(), &mut buf).is_ok());
assert_eq!(buf.len(), 0x3F + 1 - 1);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3F - 1);
assert_eq!(res[0], 3);
} else {
panic!();
}
}
#[test]
fn test_codec4() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3FFF, 4);
assert!(codec.encode(bytes.into(), &mut buf).is_ok());
assert_eq!(buf.len(), 0x3FFF + 2);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3FFF);
assert_eq!(res[0], 4);
} else {
panic!();
}
}
#[test]
fn test_codec5() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3FFFFF, 5);
assert!(codec.encode(bytes.into(), &mut buf).is_ok());
assert_eq!(buf.len(), 0x3FFFFF + 3);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3FFFFF);
assert_eq!(res[0], 5);
} else {
panic!();
}
}
#[test]
fn test_codec6() {
let mut codec = BytesCodec::new();
let mut buf = BytesMut::new();
let mut bytes: Vec<u8> = Vec::new();
bytes.resize(0x3FFFFF + 1, 6);
assert!(codec.encode(bytes.into(), &mut buf).is_ok());
let buf_saved = buf.clone();
assert_eq!(buf.len(), 0x3FFFFF + 4 + 1);
if let Ok(Some(res)) = codec.decode(&mut buf) {
assert_eq!(res.len(), 0x3FFFFF + 1);
assert_eq!(res[0], 6);
} else {
panic!();
}
let mut codec2 = BytesCodec::new();
let mut buf2 = BytesMut::new();
buf2.extend(&buf_saved[0..1]);
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
panic!();
}
buf2.extend(&buf_saved[1..6]);
if let Ok(None) = codec2.decode(&mut buf2) {
} else {
panic!();
}
buf2.extend(&buf_saved[6..]);
if let Ok(Some(res)) = codec2.decode(&mut buf2) {
assert_eq!(res.len(), 0x3FFFFF + 1);
assert_eq!(res[0], 6);
} else {
panic!();
}
}
}
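
A minimal standalone check of the length-prefix scheme used by BytesCodec above: the payload length is shifted left by two bits, and the low two bits of the first byte record how many header bytes follow, so a 64-byte payload needs a two-byte little-endian header.

// Hypothetical std-only sketch of the BytesCodec header math; names are local to this example.
fn main() {
    let len: usize = 0x40;                      // 64-byte payload: too big for a 1-byte header (max 0x3F)
    let header = ((len << 2) | 0x1) as u16;     // low bits 0b01 => "2 header bytes"
    let bytes = header.to_le_bytes();
    assert_eq!(((bytes[0] & 0x3) + 1) as usize, 2);             // decode_head reads the header length here
    assert_eq!((u16::from_le_bytes(bytes) >> 2) as usize, len); // and recovers the payload length here
}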

View File

@@ -1,45 +0,0 @@
use std::cell::RefCell;
use zstd::block::{Compressor, Decompressor};
thread_local! {
static COMPRESSOR: RefCell<Compressor> = RefCell::new(Compressor::new());
static DECOMPRESSOR: RefCell<Decompressor> = RefCell::new(Decompressor::new());
}
/// The library supports regular compression levels from 1 up to ZSTD_maxCLevel(),
/// which is currently 22. Levels >= 20 should be used with caution, as they require more memory.
/// The default level is ZSTD_CLEVEL_DEFAULT == 3.
/// A value of 0 means "default", which is controlled by ZSTD_CLEVEL_DEFAULT.
pub fn compress(data: &[u8], level: i32) -> Vec<u8> {
let mut out = Vec::new();
COMPRESSOR.with(|c| {
if let Ok(mut c) = c.try_borrow_mut() {
match c.compress(data, level) {
Ok(res) => out = res,
Err(err) => {
crate::log::debug!("Failed to compress: {}", err);
}
}
}
});
out
}
pub fn decompress(data: &[u8]) -> Vec<u8> {
let mut out = Vec::new();
DECOMPRESSOR.with(|d| {
if let Ok(mut d) = d.try_borrow_mut() {
const MAX: usize = 1024 * 1024 * 64;
const MIN: usize = 1024 * 1024;
let mut n = 30 * data.len();
n = n.clamp(MIN, MAX);
match d.decompress(data, n) {
Ok(res) => out = res,
Err(err) => {
crate::log::debug!("Failed to decompress: {}", err);
}
}
}
});
out
}
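
A minimal sketch of the round trip these helpers perform, assuming the zstd 0.9 block API that this file wraps (the thread-local contexts and output-size clamping above are the only additions on top of it):

// Hedged illustration using zstd's block functions directly (zstd = "0.9" in the Cargo.toml above).
fn main() -> std::io::Result<()> {
    let data = b"hello hello hello hello".to_vec();
    let packed = zstd::block::compress(&data, 0)?;           // 0 = default level (3)
    let unpacked = zstd::block::decompress(&packed, 1024)?;  // capacity hint for the output buffer
    assert_eq!(data, unpacked);
    Ok(())
}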

File diff suppressed because it is too large

View File

@@ -1,839 +0,0 @@
#[cfg(windows)]
use std::os::windows::prelude::*;
use std::path::{Path, PathBuf};
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use serde_derive::{Deserialize, Serialize};
use tokio::{fs::File, io::*};
use crate::{bail, get_version_number, message_proto::*, ResultType, Stream};
// https://doc.rust-lang.org/std/os/windows/fs/trait.MetadataExt.html
use crate::{
compress::{compress, decompress},
config::{Config, COMPRESS_LEVEL},
};
pub fn read_dir(path: &Path, include_hidden: bool) -> ResultType<FileDirectory> {
let mut dir = FileDirectory {
path: get_string(path),
..Default::default()
};
#[cfg(windows)]
if "/" == &get_string(path) {
let drives = unsafe { winapi::um::fileapi::GetLogicalDrives() };
for i in 0..32 {
if drives & (1 << i) != 0 {
let name = format!(
"{}:",
std::char::from_u32('A' as u32 + i as u32).unwrap_or('A')
);
dir.entries.push(FileEntry {
name,
entry_type: FileType::DirDrive.into(),
..Default::default()
});
}
}
return Ok(dir);
}
for entry in path.read_dir()?.flatten() {
let p = entry.path();
let name = p
.file_name()
.map(|p| p.to_str().unwrap_or(""))
.unwrap_or("")
.to_owned();
if name.is_empty() {
continue;
}
let mut is_hidden = false;
let meta;
if let Ok(tmp) = std::fs::symlink_metadata(&p) {
meta = tmp;
} else {
continue;
}
// docs.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants
#[cfg(windows)]
if meta.file_attributes() & 0x2 != 0 {
is_hidden = true;
}
#[cfg(not(windows))]
if name.find('.').unwrap_or(usize::MAX) == 0 {
is_hidden = true;
}
if is_hidden && !include_hidden {
continue;
}
let (entry_type, size) = {
if p.is_dir() {
if meta.file_type().is_symlink() {
(FileType::DirLink.into(), 0)
} else {
(FileType::Dir.into(), 0)
}
} else if meta.file_type().is_symlink() {
(FileType::FileLink.into(), 0)
} else {
(FileType::File.into(), meta.len())
}
};
let modified_time = meta
.modified()
.map(|x| {
x.duration_since(std::time::SystemTime::UNIX_EPOCH)
.map(|x| x.as_secs())
.unwrap_or(0)
})
.unwrap_or(0);
dir.entries.push(FileEntry {
name: get_file_name(&p),
entry_type,
is_hidden,
size,
modified_time,
..Default::default()
});
}
Ok(dir)
}
#[inline]
pub fn get_file_name(p: &Path) -> String {
p.file_name()
.map(|p| p.to_str().unwrap_or(""))
.unwrap_or("")
.to_owned()
}
#[inline]
pub fn get_string(path: &Path) -> String {
path.to_str().unwrap_or("").to_owned()
}
#[inline]
pub fn get_path(path: &str) -> PathBuf {
Path::new(path).to_path_buf()
}
#[inline]
pub fn get_home_as_string() -> String {
get_string(&Config::get_home())
}
fn read_dir_recursive(
path: &PathBuf,
prefix: &Path,
include_hidden: bool,
) -> ResultType<Vec<FileEntry>> {
let mut files = Vec::new();
if path.is_dir() {
// to-do: symlink handling, copy the link itself rather than its target's content
// to-do: file mode, for unix
let fd = read_dir(path, include_hidden)?;
for entry in fd.entries.iter() {
match entry.entry_type.enum_value() {
Ok(FileType::File) => {
let mut entry = entry.clone();
entry.name = get_string(&prefix.join(entry.name));
files.push(entry);
}
Ok(FileType::Dir) => {
if let Ok(mut tmp) = read_dir_recursive(
&path.join(&entry.name),
&prefix.join(&entry.name),
include_hidden,
) {
for entry in tmp.drain(0..) {
files.push(entry);
}
}
}
_ => {}
}
}
Ok(files)
} else if path.is_file() {
let (size, modified_time) = if let Ok(meta) = std::fs::metadata(path) {
(
meta.len(),
meta.modified()
.map(|x| {
x.duration_since(std::time::SystemTime::UNIX_EPOCH)
.map(|x| x.as_secs())
.unwrap_or(0)
})
.unwrap_or(0),
)
} else {
(0, 0)
};
files.push(FileEntry {
entry_type: FileType::File.into(),
size,
modified_time,
..Default::default()
});
Ok(files)
} else {
bail!("Not exists");
}
}
pub fn get_recursive_files(path: &str, include_hidden: bool) -> ResultType<Vec<FileEntry>> {
read_dir_recursive(&get_path(path), &get_path(""), include_hidden)
}
#[inline]
pub fn is_file_exists(file_path: &str) -> bool {
return Path::new(file_path).exists();
}
#[inline]
pub fn can_enable_overwrite_detection(version: i64) -> bool {
version >= get_version_number("1.1.10")
}
#[derive(Default)]
pub struct TransferJob {
pub id: i32,
pub remote: String,
pub path: PathBuf,
pub show_hidden: bool,
pub is_remote: bool,
pub is_last_job: bool,
pub file_num: i32,
pub files: Vec<FileEntry>,
file: Option<File>,
total_size: u64,
finished_size: u64,
transferred: u64,
enable_overwrite_detection: bool,
file_confirmed: bool,
// indicating the last file is skipped
file_skipped: bool,
file_is_waiting: bool,
default_overwrite_strategy: Option<bool>,
}
#[derive(Debug, Default, Serialize, Deserialize, Clone)]
pub struct TransferJobMeta {
#[serde(default)]
pub id: i32,
#[serde(default)]
pub remote: String,
#[serde(default)]
pub to: String,
#[serde(default)]
pub show_hidden: bool,
#[serde(default)]
pub file_num: i32,
#[serde(default)]
pub is_remote: bool,
}
#[derive(Debug, Default, Serialize, Deserialize, Clone)]
pub struct RemoveJobMeta {
#[serde(default)]
pub path: String,
#[serde(default)]
pub is_remote: bool,
#[serde(default)]
pub no_confirm: bool,
}
#[inline]
fn get_ext(name: &str) -> &str {
if let Some(i) = name.rfind('.') {
return &name[i + 1..];
}
""
}
#[inline]
fn is_compressed_file(name: &str) -> bool {
let ext = get_ext(name);
ext == "xz"
|| ext == "gz"
|| ext == "zip"
|| ext == "7z"
|| ext == "rar"
|| ext == "bz2"
|| ext == "tgz"
|| ext == "png"
|| ext == "jpg"
}
impl TransferJob {
#[allow(clippy::too_many_arguments)]
pub fn new_write(
id: i32,
remote: String,
path: String,
file_num: i32,
show_hidden: bool,
is_remote: bool,
files: Vec<FileEntry>,
enable_overwrite_detection: bool,
) -> Self {
log::info!("new write {}", path);
let total_size = files.iter().map(|x| x.size).sum();
Self {
id,
remote,
path: get_path(&path),
file_num,
show_hidden,
is_remote,
files,
total_size,
enable_overwrite_detection,
..Default::default()
}
}
pub fn new_read(
id: i32,
remote: String,
path: String,
file_num: i32,
show_hidden: bool,
is_remote: bool,
enable_overwrite_detection: bool,
) -> ResultType<Self> {
log::info!("new read {}", path);
let files = get_recursive_files(&path, show_hidden)?;
let total_size = files.iter().map(|x| x.size).sum();
Ok(Self {
id,
remote,
path: get_path(&path),
file_num,
show_hidden,
is_remote,
files,
total_size,
enable_overwrite_detection,
..Default::default()
})
}
#[inline]
pub fn files(&self) -> &Vec<FileEntry> {
&self.files
}
#[inline]
pub fn set_files(&mut self, files: Vec<FileEntry>) {
self.files = files;
}
#[inline]
pub fn id(&self) -> i32 {
self.id
}
#[inline]
pub fn total_size(&self) -> u64 {
self.total_size
}
#[inline]
pub fn finished_size(&self) -> u64 {
self.finished_size
}
#[inline]
pub fn transferred(&self) -> u64 {
self.transferred
}
#[inline]
pub fn file_num(&self) -> i32 {
self.file_num
}
pub fn modify_time(&self) {
let file_num = self.file_num as usize;
if file_num < self.files.len() {
let entry = &self.files[file_num];
let path = self.join(&entry.name);
let download_path = format!("{}.download", get_string(&path));
std::fs::rename(download_path, &path).ok();
filetime::set_file_mtime(
&path,
filetime::FileTime::from_unix_time(entry.modified_time as _, 0),
)
.ok();
}
}
pub fn remove_download_file(&self) {
let file_num = self.file_num as usize;
if file_num < self.files.len() {
let entry = &self.files[file_num];
let path = self.join(&entry.name);
let download_path = format!("{}.download", get_string(&path));
std::fs::remove_file(download_path).ok();
}
}
pub async fn write(&mut self, block: FileTransferBlock) -> ResultType<()> {
if block.id != self.id {
bail!("Wrong id");
}
let file_num = block.file_num as usize;
if file_num >= self.files.len() {
bail!("Wrong file number");
}
if file_num != self.file_num as usize || self.file.is_none() {
self.modify_time();
if let Some(file) = self.file.as_mut() {
file.sync_all().await?;
}
self.file_num = block.file_num;
let entry = &self.files[file_num];
let path = self.join(&entry.name);
if let Some(p) = path.parent() {
std::fs::create_dir_all(p).ok();
}
let path = format!("{}.download", get_string(&path));
self.file = Some(File::create(&path).await?);
}
if block.compressed {
let tmp = decompress(&block.data);
self.file.as_mut().unwrap().write_all(&tmp).await?;
self.finished_size += tmp.len() as u64;
} else {
self.file.as_mut().unwrap().write_all(&block.data).await?;
self.finished_size += block.data.len() as u64;
}
self.transferred += block.data.len() as u64;
Ok(())
}
#[inline]
pub fn join(&self, name: &str) -> PathBuf {
if name.is_empty() {
self.path.clone()
} else {
self.path.join(name)
}
}
pub async fn read(&mut self, stream: &mut Stream) -> ResultType<Option<FileTransferBlock>> {
let file_num = self.file_num as usize;
if file_num >= self.files.len() {
self.file.take();
return Ok(None);
}
let name = &self.files[file_num].name;
if self.file.is_none() {
match File::open(self.join(name)).await {
Ok(file) => {
self.file = Some(file);
self.file_confirmed = false;
self.file_is_waiting = false;
}
Err(err) => {
self.file_num += 1;
self.file_confirmed = false;
self.file_is_waiting = false;
return Err(err.into());
}
}
}
if self.enable_overwrite_detection && !self.file_confirmed() {
if !self.file_is_waiting() {
self.send_current_digest(stream).await?;
self.set_file_is_waiting(true);
}
return Ok(None);
}
const BUF_SIZE: usize = 128 * 1024;
let mut buf: Vec<u8> = vec![0; BUF_SIZE];
let mut compressed = false;
let mut offset: usize = 0;
loop {
match self.file.as_mut().unwrap().read(&mut buf[offset..]).await {
Err(err) => {
self.file_num += 1;
self.file = None;
self.file_confirmed = false;
self.file_is_waiting = false;
return Err(err.into());
}
Ok(n) => {
offset += n;
if n == 0 || offset == BUF_SIZE {
break;
}
}
}
}
unsafe { buf.set_len(offset) };
if offset == 0 {
self.file_num += 1;
self.file = None;
self.file_confirmed = false;
self.file_is_waiting = false;
} else {
self.finished_size += offset as u64;
if !is_compressed_file(name) {
let tmp = compress(&buf, COMPRESS_LEVEL);
if tmp.len() < buf.len() {
buf = tmp;
compressed = true;
}
}
self.transferred += buf.len() as u64;
}
Ok(Some(FileTransferBlock {
id: self.id,
file_num: file_num as _,
data: buf.into(),
compressed,
..Default::default()
}))
}
async fn send_current_digest(&mut self, stream: &mut Stream) -> ResultType<()> {
let mut msg = Message::new();
let mut resp = FileResponse::new();
let meta = self.file.as_ref().unwrap().metadata().await?;
let last_modified = meta
.modified()?
.duration_since(SystemTime::UNIX_EPOCH)?
.as_secs();
resp.set_digest(FileTransferDigest {
id: self.id,
file_num: self.file_num,
last_modified,
file_size: meta.len(),
..Default::default()
});
msg.set_file_response(resp);
stream.send(&msg).await?;
log::info!(
"id: {}, file_num:{}, digest message is sent. waiting for confirm. msg: {:?}",
self.id,
self.file_num,
msg
);
Ok(())
}
pub fn set_overwrite_strategy(&mut self, overwrite_strategy: Option<bool>) {
self.default_overwrite_strategy = overwrite_strategy;
}
pub fn default_overwrite_strategy(&self) -> Option<bool> {
self.default_overwrite_strategy
}
pub fn set_file_confirmed(&mut self, file_confirmed: bool) {
log::info!("id: {}, file_confirmed: {}", self.id, file_confirmed);
self.file_confirmed = file_confirmed;
self.file_skipped = false;
}
pub fn set_file_is_waiting(&mut self, file_is_waiting: bool) {
self.file_is_waiting = file_is_waiting;
}
#[inline]
pub fn file_is_waiting(&self) -> bool {
self.file_is_waiting
}
#[inline]
pub fn file_confirmed(&self) -> bool {
self.file_confirmed
}
/// Indicating whether the last file is skipped
#[inline]
pub fn file_skipped(&self) -> bool {
self.file_skipped
}
/// Indicating whether the whole task is skipped
#[inline]
pub fn job_skipped(&self) -> bool {
self.file_skipped() && self.files.len() == 1
}
/// Check whether the job is completed after `read` returns `None`.
/// This helper adds a final lifecycle step once `read` yields `None`:
/// if it returns `true`, the job can be removed automatically; otherwise it cannot.
///
/// [`Note`]
/// Condition: no file is still waiting for confirmation by the peer.
#[inline]
pub fn job_completed(&self) -> bool {
// overwrite detection is off, or the current file is neither confirmed nor waiting for confirmation
!self.enable_overwrite_detection || (!self.file_confirmed && !self.file_is_waiting)
}
/// Get the job's error message; useful for reporting status after the job has finished
pub fn job_error(&self) -> Option<String> {
if self.job_skipped() {
return Some("skipped".to_string());
}
None
}
pub fn set_file_skipped(&mut self) -> bool {
log::debug!("skip file {} in job {}", self.file_num, self.id);
self.file.take();
self.set_file_confirmed(false);
self.set_file_is_waiting(false);
self.file_num += 1;
self.file_skipped = true;
true
}
pub fn confirm(&mut self, r: &FileTransferSendConfirmRequest) -> bool {
if self.file_num() != r.file_num {
log::info!("file num truncated, ignoring");
} else {
match r.union {
Some(file_transfer_send_confirm_request::Union::Skip(s)) => {
if s {
self.set_file_skipped();
} else {
self.set_file_confirmed(true);
}
}
Some(file_transfer_send_confirm_request::Union::OffsetBlk(_offset)) => {
self.set_file_confirmed(true);
}
_ => {}
}
}
true
}
#[inline]
pub fn gen_meta(&self) -> TransferJobMeta {
TransferJobMeta {
id: self.id,
remote: self.remote.to_string(),
to: self.path.to_string_lossy().to_string(),
file_num: self.file_num,
show_hidden: self.show_hidden,
is_remote: self.is_remote,
}
}
}
#[inline]
pub fn new_error<T: std::string::ToString>(id: i32, err: T, file_num: i32) -> Message {
let mut resp = FileResponse::new();
resp.set_error(FileTransferError {
id,
error: err.to_string(),
file_num,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn new_dir(id: i32, path: String, files: Vec<FileEntry>) -> Message {
let mut resp = FileResponse::new();
resp.set_dir(FileDirectory {
id,
path,
entries: files,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn new_block(block: FileTransferBlock) -> Message {
let mut resp = FileResponse::new();
resp.set_block(block);
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn new_send_confirm(r: FileTransferSendConfirmRequest) -> Message {
let mut msg_out = Message::new();
let mut action = FileAction::new();
action.set_send_confirm(r);
msg_out.set_file_action(action);
msg_out
}
#[inline]
pub fn new_receive(id: i32, path: String, file_num: i32, files: Vec<FileEntry>) -> Message {
let mut action = FileAction::new();
action.set_receive(FileTransferReceiveRequest {
id,
path,
files,
file_num,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_action(action);
msg_out
}
#[inline]
pub fn new_send(id: i32, path: String, file_num: i32, include_hidden: bool) -> Message {
log::info!("new send: {},id : {}", path, id);
let mut action = FileAction::new();
action.set_send(FileTransferSendRequest {
id,
path,
include_hidden,
file_num,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_action(action);
msg_out
}
#[inline]
pub fn new_done(id: i32, file_num: i32) -> Message {
let mut resp = FileResponse::new();
resp.set_done(FileTransferDone {
id,
file_num,
..Default::default()
});
let mut msg_out = Message::new();
msg_out.set_file_response(resp);
msg_out
}
#[inline]
pub fn remove_job(id: i32, jobs: &mut Vec<TransferJob>) {
*jobs = jobs.drain(0..).filter(|x| x.id() != id).collect();
}
#[inline]
pub fn get_job(id: i32, jobs: &mut [TransferJob]) -> Option<&mut TransferJob> {
jobs.iter_mut().find(|x| x.id() == id)
}
pub async fn handle_read_jobs(
jobs: &mut Vec<TransferJob>,
stream: &mut crate::Stream,
) -> ResultType<()> {
let mut finished = Vec::new();
for job in jobs.iter_mut() {
if job.is_last_job {
continue;
}
match job.read(stream).await {
Err(err) => {
stream
.send(&new_error(job.id(), err, job.file_num()))
.await?;
}
Ok(Some(block)) => {
stream.send(&new_block(block)).await?;
}
Ok(None) => {
if job.job_completed() {
finished.push(job.id());
let err = job.job_error();
if err.is_some() {
stream
.send(&new_error(job.id(), err.unwrap(), job.file_num()))
.await?;
} else {
stream.send(&new_done(job.id(), job.file_num())).await?;
}
} else {
// waiting confirmation.
}
}
}
}
for id in finished {
remove_job(id, jobs);
}
Ok(())
}
pub fn remove_all_empty_dir(path: &PathBuf) -> ResultType<()> {
let fd = read_dir(path, true)?;
for entry in fd.entries.iter() {
match entry.entry_type.enum_value() {
Ok(FileType::Dir) => {
remove_all_empty_dir(&path.join(&entry.name)).ok();
}
Ok(FileType::DirLink) | Ok(FileType::FileLink) => {
std::fs::remove_file(path.join(&entry.name)).ok();
}
_ => {}
}
}
std::fs::remove_dir(path).ok();
Ok(())
}
#[inline]
pub fn remove_file(file: &str) -> ResultType<()> {
std::fs::remove_file(get_path(file))?;
Ok(())
}
#[inline]
pub fn create_dir(dir: &str) -> ResultType<()> {
std::fs::create_dir_all(get_path(dir))?;
Ok(())
}
#[inline]
pub fn transform_windows_path(entries: &mut Vec<FileEntry>) {
for entry in entries {
entry.name = entry.name.replace('\\', "/");
}
}
pub enum DigestCheckResult {
IsSame,
NeedConfirm(FileTransferDigest),
NoSuchFile,
}
#[inline]
pub fn is_write_need_confirmation(
file_path: &str,
digest: &FileTransferDigest,
) -> ResultType<DigestCheckResult> {
let path = Path::new(file_path);
if path.exists() && path.is_file() {
let metadata = std::fs::metadata(path)?;
let modified_time = metadata.modified()?;
let remote_mt = Duration::from_secs(digest.last_modified);
let local_mt = modified_time.duration_since(UNIX_EPOCH)?;
if remote_mt == local_mt && digest.file_size == metadata.len() {
return Ok(DigestCheckResult::IsSame);
}
Ok(DigestCheckResult::NeedConfirm(FileTransferDigest {
id: digest.id,
file_num: digest.file_num,
last_modified: local_mt.as_secs(),
file_size: metadata.len(),
..Default::default()
}))
} else {
Ok(DigestCheckResult::NoSuchFile)
}
}

View File

@@ -1,39 +0,0 @@
use std::{fmt, slice::Iter, str::FromStr};
use crate::protos::message::KeyboardMode;
impl fmt::Display for KeyboardMode {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
KeyboardMode::Legacy => write!(f, "legacy"),
KeyboardMode::Map => write!(f, "map"),
KeyboardMode::Translate => write!(f, "translate"),
KeyboardMode::Auto => write!(f, "auto"),
}
}
}
impl FromStr for KeyboardMode {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s {
"legacy" => Ok(KeyboardMode::Legacy),
"map" => Ok(KeyboardMode::Map),
"translate" => Ok(KeyboardMode::Translate),
"auto" => Ok(KeyboardMode::Auto),
_ => Err(()),
}
}
}
impl KeyboardMode {
pub fn iter() -> Iter<'static, KeyboardMode> {
static KEYBOARD_MODES: [KeyboardMode; 4] = [
KeyboardMode::Legacy,
KeyboardMode::Map,
KeyboardMode::Translate,
KeyboardMode::Auto,
];
KEYBOARD_MODES.iter()
}
}

View File

@@ -1,391 +0,0 @@
pub mod compress;
pub mod platform;
pub mod protos;
pub use bytes;
use config::Config;
pub use futures;
pub use protobuf;
pub use protos::message as message_proto;
pub use protos::rendezvous as rendezvous_proto;
use std::{
fs::File,
io::{self, BufRead},
net::{IpAddr, Ipv4Addr, SocketAddr, SocketAddrV4},
path::Path,
time::{self, SystemTime, UNIX_EPOCH},
};
pub use tokio;
pub use tokio_util;
pub mod socket_client;
pub mod tcp;
pub mod udp;
pub use env_logger;
pub use log;
pub mod bytes_codec;
#[cfg(feature = "quic")]
pub mod quic;
pub use anyhow::{self, bail};
pub use futures_util;
pub mod config;
pub mod fs;
pub use lazy_static;
#[cfg(not(any(target_os = "android", target_os = "ios")))]
pub use mac_address;
pub use rand;
pub use regex;
pub use sodiumoxide;
pub use tokio_socks;
pub use tokio_socks::IntoTargetAddr;
pub use tokio_socks::TargetAddr;
pub mod password_security;
pub use chrono;
pub use directories_next;
pub mod keyboard;
#[cfg(feature = "quic")]
pub type Stream = quic::Connection;
#[cfg(not(feature = "quic"))]
pub type Stream = tcp::FramedStream;
#[inline]
pub async fn sleep(sec: f32) {
tokio::time::sleep(time::Duration::from_secs_f32(sec)).await;
}
#[macro_export]
macro_rules! allow_err {
($e:expr) => {
if let Err(err) = $e {
log::debug!(
"{:?}, {}:{}:{}:{}",
err,
module_path!(),
file!(),
line!(),
column!()
);
} else {
}
};
($e:expr, $($arg:tt)*) => {
if let Err(err) = $e {
log::debug!(
"{:?}, {}, {}:{}:{}:{}",
err,
format_args!($($arg)*),
module_path!(),
file!(),
line!(),
column!()
);
} else {
}
};
}
#[inline]
pub fn timeout<T: std::future::Future>(ms: u64, future: T) -> tokio::time::Timeout<T> {
tokio::time::timeout(std::time::Duration::from_millis(ms), future)
}
pub type ResultType<F, E = anyhow::Error> = anyhow::Result<F, E>;
/// Certain routers and firewalls scan packets and rewrite any IP address that belongs to the
/// pool they use for NAT mapping/translation, so we mangle the address before sending it.
pub struct AddrMangle();
#[inline]
pub fn try_into_v4(addr: SocketAddr) -> SocketAddr {
match addr {
SocketAddr::V6(v6) if !addr.ip().is_loopback() => {
if let Some(v4) = v6.ip().to_ipv4() {
SocketAddr::new(IpAddr::V4(v4), addr.port())
} else {
addr
}
}
_ => addr,
}
}
impl AddrMangle {
pub fn encode(addr: SocketAddr) -> Vec<u8> {
// does not work with [::1]:<port>
let addr = try_into_v4(addr);
match addr {
SocketAddr::V4(addr_v4) => {
let tm = (SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_micros() as u32) as u128;
let ip = u32::from_le_bytes(addr_v4.ip().octets()) as u128;
let port = addr.port() as u128;
let v = ((ip + tm) << 49) | (tm << 17) | (port + (tm & 0xFFFF));
let bytes = v.to_le_bytes();
let mut n_padding = 0;
for i in bytes.iter().rev() {
if i == &0u8 {
n_padding += 1;
} else {
break;
}
}
bytes[..(16 - n_padding)].to_vec()
}
SocketAddr::V6(addr_v6) => {
let mut x = addr_v6.ip().octets().to_vec();
let port: [u8; 2] = addr_v6.port().to_le_bytes();
x.push(port[0]);
x.push(port[1]);
x
}
}
}
pub fn decode(bytes: &[u8]) -> SocketAddr {
use std::convert::TryInto;
if bytes.len() > 16 {
if bytes.len() != 18 {
return Config::get_any_listen_addr(false);
}
let tmp: [u8; 2] = bytes[16..].try_into().unwrap();
let port = u16::from_le_bytes(tmp);
let tmp: [u8; 16] = bytes[..16].try_into().unwrap();
let ip = std::net::Ipv6Addr::from(tmp);
return SocketAddr::new(IpAddr::V6(ip), port);
}
let mut padded = [0u8; 16];
padded[..bytes.len()].copy_from_slice(bytes);
let number = u128::from_le_bytes(padded);
let tm = (number >> 17) & (u32::max_value() as u128);
let ip = (((number >> 49) - tm) as u32).to_le_bytes();
let port = (number & 0xFFFFFF) - (tm & 0xFFFF);
SocketAddr::V4(SocketAddrV4::new(
Ipv4Addr::new(ip[0], ip[1], ip[2], ip[3]),
port as u16,
))
}
}
pub fn get_version_from_url(url: &str) -> String {
let n = url.chars().count();
let a = url.chars().rev().position(|x| x == '-');
if let Some(a) = a {
let b = url.chars().rev().position(|x| x == '.');
if let Some(b) = b {
if a > b {
if url
.chars()
.skip(n - b)
.collect::<String>()
.parse::<i32>()
.is_ok()
{
return url.chars().skip(n - a).collect();
} else {
return url.chars().skip(n - a).take(a - b - 1).collect();
}
} else {
return url.chars().skip(n - a).collect();
}
}
}
"".to_owned()
}
pub fn gen_version() {
println!("cargo:rerun-if-changed=Cargo.toml");
use std::io::prelude::*;
let mut file = File::create("./src/version.rs").unwrap();
for line in read_lines("Cargo.toml").unwrap().flatten() {
let ab: Vec<&str> = line.split('=').map(|x| x.trim()).collect();
if ab.len() == 2 && ab[0] == "version" {
file.write_all(format!("pub const VERSION: &str = {};\n", ab[1]).as_bytes())
.ok();
break;
}
}
// generate build date
let build_date = format!("{}", chrono::Local::now().format("%Y-%m-%d %H:%M"));
file.write_all(
format!("#[allow(dead_code)]\npub const BUILD_DATE: &str = \"{build_date}\";\n").as_bytes(),
)
.ok();
file.sync_all().ok();
}
fn read_lines<P>(filename: P) -> io::Result<io::Lines<io::BufReader<File>>>
where
P: AsRef<Path>,
{
let file = File::open(filename)?;
Ok(io::BufReader::new(file).lines())
}
pub fn is_valid_custom_id(id: &str) -> bool {
regex::Regex::new(r"^[a-zA-Z]\w{5,15}$")
.unwrap()
.is_match(id)
}
pub fn get_version_number(v: &str) -> i64 {
let mut n = 0;
for x in v.split('.') {
n = n * 1000 + x.parse::<i64>().unwrap_or(0);
}
n
}
pub fn get_modified_time(path: &std::path::Path) -> SystemTime {
std::fs::metadata(path)
.map(|m| m.modified().unwrap_or(UNIX_EPOCH))
.unwrap_or(UNIX_EPOCH)
}
pub fn get_created_time(path: &std::path::Path) -> SystemTime {
std::fs::metadata(path)
.map(|m| m.created().unwrap_or(UNIX_EPOCH))
.unwrap_or(UNIX_EPOCH)
}
pub fn get_exe_time() -> SystemTime {
std::env::current_exe().map_or(UNIX_EPOCH, |path| {
let m = get_modified_time(&path);
let c = get_created_time(&path);
if m > c {
m
} else {
c
}
})
}
pub fn get_uuid() -> Vec<u8> {
#[cfg(not(any(target_os = "android", target_os = "ios")))]
if let Ok(id) = machine_uid::get() {
return id.into();
}
Config::get_key_pair().1
}
#[inline]
pub fn get_time() -> i64 {
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.map(|d| d.as_millis())
.unwrap_or(0) as _
}
#[inline]
pub fn is_ipv4_str(id: &str) -> bool {
regex::Regex::new(r"^\d+\.\d+\.\d+\.\d+(:\d+)?$")
.unwrap()
.is_match(id)
}
#[inline]
pub fn is_ipv6_str(id: &str) -> bool {
regex::Regex::new(r"^((([a-fA-F0-9]{1,4}:{1,2})+[a-fA-F0-9]{1,4})|(\[([a-fA-F0-9]{1,4}:{1,2})+[a-fA-F0-9]{1,4}\]:\d+))$")
.unwrap()
.is_match(id)
}
#[inline]
pub fn is_ip_str(id: &str) -> bool {
is_ipv4_str(id) || is_ipv6_str(id)
}
#[inline]
pub fn is_domain_port_str(id: &str) -> bool {
// Modified regex for an RFC 1123 hostname; see https://stackoverflow.com/a/106223 for the original hostname version.
// According to the [TLD List](https://data.iana.org/TLD/tlds-alpha-by-domain.txt), version 2023011700,
// TLDs contain no digits and are 2-63 characters long.
regex::Regex::new(
r"(?i)^([a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?\.)+[a-z][a-z-]{0,61}[a-z]:\d{1,5}$",
)
.unwrap()
.is_match(id)
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_mangle() {
let addr = SocketAddr::V4(SocketAddrV4::new(Ipv4Addr::new(192, 168, 16, 32), 21116));
assert_eq!(addr, AddrMangle::decode(&AddrMangle::encode(addr)));
let addr = "[2001:db8::1]:8080".parse::<SocketAddr>().unwrap();
assert_eq!(addr, AddrMangle::decode(&AddrMangle::encode(addr)));
let addr = "[2001:db8:ff::1111]:80".parse::<SocketAddr>().unwrap();
assert_eq!(addr, AddrMangle::decode(&AddrMangle::encode(addr)));
}
#[test]
fn test_allow_err() {
allow_err!(Err("test err") as Result<(), &str>);
allow_err!(
Err("test err with msg") as Result<(), &str>,
"prompt {}",
"failed"
);
}
#[test]
fn test_ipv6() {
assert!(is_ipv6_str("1:2:3"));
assert!(is_ipv6_str("[ab:2:3]:12"));
assert!(is_ipv6_str("[ABEF:2a:3]:12"));
assert!(!is_ipv6_str("[ABEG:2a:3]:12"));
assert!(!is_ipv6_str("1[ab:2:3]:12"));
assert!(!is_ipv6_str("1.1.1.1"));
assert!(is_ip_str("1.1.1.1"));
assert!(!is_ipv6_str("1:2:"));
assert!(is_ipv6_str("1:2::0"));
assert!(is_ipv6_str("[1:2::0]:1"));
assert!(!is_ipv6_str("[1:2::0]:"));
assert!(!is_ipv6_str("1:2::0]:1"));
}
#[test]
fn test_hostname_port() {
assert!(!is_domain_port_str("a:12"));
assert!(!is_domain_port_str("a.b.c:12"));
assert!(is_domain_port_str("test.com:12"));
assert!(is_domain_port_str("test-UPPER.com:12"));
assert!(is_domain_port_str("some-other.domain.com:12"));
assert!(!is_domain_port_str("under_score:12"));
assert!(!is_domain_port_str("a@bc:12"));
assert!(!is_domain_port_str("1.1.1.1:12"));
assert!(!is_domain_port_str("1.2.3:12"));
assert!(!is_domain_port_str("1.2.3.45:12"));
assert!(!is_domain_port_str("a.b.c:123456"));
assert!(!is_domain_port_str("---:12"));
assert!(!is_domain_port_str(".:12"));
// todo: should we also check for these edge cases?
// out-of-range port
assert!(is_domain_port_str("test.com:0"));
assert!(is_domain_port_str("test.com:98989"));
}
#[test]
fn test_mangle2() {
let addr = "[::ffff:127.0.0.1]:8080".parse().unwrap();
let addr_v4 = "127.0.0.1:8080".parse().unwrap();
assert_eq!(AddrMangle::decode(&AddrMangle::encode(addr)), addr_v4);
assert_eq!(
AddrMangle::decode(&AddrMangle::encode("[::127.0.0.1]:8080".parse().unwrap())),
addr_v4
);
assert_eq!(AddrMangle::decode(&AddrMangle::encode(addr_v4)), addr_v4);
let addr_v6 = "[ef::fe]:8080".parse().unwrap();
assert_eq!(AddrMangle::decode(&AddrMangle::encode(addr_v6)), addr_v6);
let addr_v6 = "[::1]:8080".parse().unwrap();
assert_eq!(AddrMangle::decode(&AddrMangle::encode(addr_v6)), addr_v6);
}
}
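
A small worked example of the version packing used by get_version_number above, which is also the value fs::can_enable_overwrite_detection compares against (restated standalone for illustration):

// Standalone restatement of get_version_number from the file above.
fn get_version_number(v: &str) -> i64 {
    let mut n = 0;
    for x in v.split('.') {
        n = n * 1000 + x.parse::<i64>().unwrap_or(0);
    }
    n
}

fn main() {
    assert_eq!(get_version_number("1.1.10"), 1_001_010);
    assert_eq!(get_version_number("1.2.0"), 1_002_000);
    // Overwrite detection in fs.rs requires the peer to be at 1.1.10 or newer:
    assert!(get_version_number("1.2.3") >= get_version_number("1.1.10"));
}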

View File

@@ -1,242 +0,0 @@
use crate::config::Config;
use sodiumoxide::base64;
use std::sync::{Arc, RwLock};
lazy_static::lazy_static! {
pub static ref TEMPORARY_PASSWORD:Arc<RwLock<String>> = Arc::new(RwLock::new(Config::get_auto_password(temporary_password_length())));
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum VerificationMethod {
OnlyUseTemporaryPassword,
OnlyUsePermanentPassword,
UseBothPasswords,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ApproveMode {
Both,
Password,
Click,
}
// Should only be called in server
pub fn update_temporary_password() {
*TEMPORARY_PASSWORD.write().unwrap() = Config::get_auto_password(temporary_password_length());
}
// Should only be called in server
pub fn temporary_password() -> String {
TEMPORARY_PASSWORD.read().unwrap().clone()
}
fn verification_method() -> VerificationMethod {
let method = Config::get_option("verification-method");
if method == "use-temporary-password" {
VerificationMethod::OnlyUseTemporaryPassword
} else if method == "use-permanent-password" {
VerificationMethod::OnlyUsePermanentPassword
} else {
VerificationMethod::UseBothPasswords // default
}
}
pub fn temporary_password_length() -> usize {
let length = Config::get_option("temporary-password-length");
if length == "8" {
8
} else if length == "10" {
10
} else {
6 // default
}
}
pub fn temporary_enabled() -> bool {
verification_method() != VerificationMethod::OnlyUsePermanentPassword
}
pub fn permanent_enabled() -> bool {
verification_method() != VerificationMethod::OnlyUseTemporaryPassword
}
pub fn has_valid_password() -> bool {
temporary_enabled() && !temporary_password().is_empty()
|| permanent_enabled() && !Config::get_permanent_password().is_empty()
}
pub fn approve_mode() -> ApproveMode {
let mode = Config::get_option("approve-mode");
if mode == "password" {
ApproveMode::Password
} else if mode == "click" {
ApproveMode::Click
} else {
ApproveMode::Both
}
}
pub fn hide_cm() -> bool {
approve_mode() == ApproveMode::Password
&& verification_method() == VerificationMethod::OnlyUsePermanentPassword
&& !Config::get_option("allow-hide-cm").is_empty()
}
const VERSION_LEN: usize = 2;
pub fn encrypt_str_or_original(s: &str, version: &str) -> String {
if decrypt_str_or_original(s, version).1 {
log::error!("Duplicate encryption!");
return s.to_owned();
}
if version == "00" {
if let Ok(s) = encrypt(s.as_bytes()) {
return version.to_owned() + &s;
}
}
s.to_owned()
}
// String: password
// bool: whether decryption succeeded
// bool: whether the value should be stored again so it is re-encrypted on load
pub fn decrypt_str_or_original(s: &str, current_version: &str) -> (String, bool, bool) {
if s.len() > VERSION_LEN {
let version = &s[..VERSION_LEN];
if version == "00" {
if let Ok(v) = decrypt(s[VERSION_LEN..].as_bytes()) {
return (
String::from_utf8_lossy(&v).to_string(),
true,
version != current_version,
);
}
}
}
(s.to_owned(), false, !s.is_empty())
}
pub fn encrypt_vec_or_original(v: &[u8], version: &str) -> Vec<u8> {
if decrypt_vec_or_original(v, version).1 {
log::error!("Duplicate encryption!");
return v.to_owned();
}
if version == "00" {
if let Ok(s) = encrypt(v) {
let mut version = version.to_owned().into_bytes();
version.append(&mut s.into_bytes());
return version;
}
}
v.to_owned()
}
// Vec<u8>: password
// bool: whether decryption succeeded
// bool: whether the value should be stored again so it is re-encrypted on load
pub fn decrypt_vec_or_original(v: &[u8], current_version: &str) -> (Vec<u8>, bool, bool) {
if v.len() > VERSION_LEN {
let version = String::from_utf8_lossy(&v[..VERSION_LEN]);
if version == "00" {
if let Ok(v) = decrypt(&v[VERSION_LEN..]) {
return (v, true, version != current_version);
}
}
}
(v.to_owned(), false, !v.is_empty())
}
fn encrypt(v: &[u8]) -> Result<String, ()> {
if !v.is_empty() {
symmetric_crypt(v, true).map(|v| base64::encode(v, base64::Variant::Original))
} else {
Err(())
}
}
fn decrypt(v: &[u8]) -> Result<Vec<u8>, ()> {
if !v.is_empty() {
base64::decode(v, base64::Variant::Original).and_then(|v| symmetric_crypt(&v, false))
} else {
Err(())
}
}
fn symmetric_crypt(data: &[u8], encrypt: bool) -> Result<Vec<u8>, ()> {
use sodiumoxide::crypto::secretbox;
use std::convert::TryInto;
let mut keybuf = crate::get_uuid();
keybuf.resize(secretbox::KEYBYTES, 0);
let key = secretbox::Key(keybuf.try_into().map_err(|_| ())?);
let nonce = secretbox::Nonce([0; secretbox::NONCEBYTES]);
if encrypt {
Ok(secretbox::seal(data, &nonce, &key))
} else {
secretbox::open(data, &nonce, &key)
}
}
mod test {
#[test]
fn test() {
use super::*;
let version = "00";
println!("test str");
let data = "Hello World";
let encrypted = encrypt_str_or_original(data, version);
let (decrypted, succ, store) = decrypt_str_or_original(&encrypted, version);
println!("data: {data}");
println!("encrypted: {encrypted}");
println!("decrypted: {decrypted}");
assert_eq!(data, decrypted);
assert_eq!(version, &encrypted[..2]);
assert!(succ);
assert!(!store);
let (_, _, store) = decrypt_str_or_original(&encrypted, "99");
assert!(store);
assert!(!decrypt_str_or_original(&decrypted, version).1);
assert_eq!(encrypt_str_or_original(&encrypted, version), encrypted);
println!("test vec");
let data: Vec<u8> = vec![1, 2, 3, 4, 5, 6];
let encrypted = encrypt_vec_or_original(&data, version);
let (decrypted, succ, store) = decrypt_vec_or_original(&encrypted, version);
println!("data: {data:?}");
println!("encrypted: {encrypted:?}");
println!("decrypted: {decrypted:?}");
assert_eq!(data, decrypted);
assert_eq!(version.as_bytes(), &encrypted[..2]);
assert!(!store);
assert!(succ);
let (_, _, store) = decrypt_vec_or_original(&encrypted, "99");
assert!(store);
assert!(!decrypt_vec_or_original(&decrypted, version).1);
assert_eq!(encrypt_vec_or_original(&encrypted, version), encrypted);
println!("test original");
let data = version.to_string() + "Hello World";
let (decrypted, succ, store) = decrypt_str_or_original(&data, version);
assert_eq!(data, decrypted);
assert!(store);
assert!(!succ);
let verbytes = version.as_bytes();
let data: Vec<u8> = vec![verbytes[0], verbytes[1], 1, 2, 3, 4, 5, 6];
let (decrypted, succ, store) = decrypt_vec_or_original(&data, version);
assert_eq!(data, decrypted);
assert!(store);
assert!(!succ);
let (_, succ, store) = decrypt_str_or_original("", version);
assert!(!store);
assert!(!succ);
let (_, succ, store) = decrypt_vec_or_original(&[], version);
assert!(!store);
assert!(!succ);
}
}

View File

@@ -1,157 +0,0 @@
use crate::ResultType;
lazy_static::lazy_static! {
pub static ref DISTRO: Distro = Distro::new();
}
pub struct Distro {
pub name: String,
pub version_id: String,
}
impl Distro {
fn new() -> Self {
let name = run_cmds("awk -F'=' '/^NAME=/ {print $2}' /etc/os-release".to_owned())
.unwrap_or_default()
.trim()
.trim_matches('"')
.to_string();
let version_id =
run_cmds("awk -F'=' '/^VERSION_ID=/ {print $2}' /etc/os-release".to_owned())
.unwrap_or_default()
.trim()
.trim_matches('"')
.to_string();
Self { name, version_id }
}
}
pub fn get_display_server() -> String {
let mut session = get_values_of_seat0([0].to_vec())[0].clone();
if session.is_empty() {
// loginctl has not given the expected output. try something else.
if let Ok(sid) = std::env::var("XDG_SESSION_ID") {
// could also execute "cat /proc/self/sessionid"
session = sid;
}
if session.is_empty() {
session = run_cmds("cat /proc/self/sessionid".to_owned()).unwrap_or_default();
}
}
get_display_server_of_session(&session)
}
fn get_display_server_of_session(session: &str) -> String {
let mut display_server = if let Ok(output) =
run_loginctl(Some(vec!["show-session", "-p", "Type", session]))
// Check session type of the session
{
let display_server = String::from_utf8_lossy(&output.stdout)
.replace("Type=", "")
.trim_end()
.into();
if display_server == "tty" {
// If the type is tty...
if let Ok(output) = run_loginctl(Some(vec!["show-session", "-p", "TTY", session]))
// Get the tty number
{
let tty: String = String::from_utf8_lossy(&output.stdout)
.replace("TTY=", "")
.trim_end()
.into();
if let Ok(xorg_results) = run_cmds(format!("ps -e | grep \"{tty}.\\\\+Xorg\""))
// And check if Xorg is running on that tty
{
if xorg_results.trim_end() != "" {
// If it is, manually return "x11", otherwise return tty
return "x11".to_owned();
}
}
}
}
display_server
} else {
"".to_owned()
};
if display_server.is_empty() || display_server == "tty" {
// loginctl has not given the expected output. try something else.
if let Ok(sestype) = std::env::var("XDG_SESSION_TYPE") {
display_server = sestype;
}
}
// If the session is not a tty, then just return the type as usual
display_server
}
pub fn get_values_of_seat0(indices: Vec<usize>) -> Vec<String> {
if let Ok(output) = run_loginctl(None) {
for line in String::from_utf8_lossy(&output.stdout).lines() {
if line.contains("seat0") {
if let Some(sid) = line.split_whitespace().next() {
if is_active(sid) {
return indices
.into_iter()
.map(|idx| line.split_whitespace().nth(idx).unwrap_or("").to_owned())
.collect::<Vec<String>>();
}
}
}
}
}
// in some cases there is no seat0, see https://github.com/rustdesk/rustdesk/issues/73
if let Ok(output) = run_loginctl(None) {
for line in String::from_utf8_lossy(&output.stdout).lines() {
if let Some(sid) = line.split_whitespace().next() {
let d = get_display_server_of_session(sid);
if is_active(sid) && d != "tty" {
return indices
.into_iter()
.map(|idx| line.split_whitespace().nth(idx).unwrap_or("").to_owned())
.collect::<Vec<String>>();
}
}
}
}
return indices
.iter()
.map(|_x| "".to_owned())
.collect::<Vec<String>>();
}
fn is_active(sid: &str) -> bool {
if let Ok(output) = run_loginctl(Some(vec!["show-session", "-p", "State", sid])) {
String::from_utf8_lossy(&output.stdout).contains("active")
} else {
false
}
}
pub fn run_cmds(cmds: String) -> ResultType<String> {
let output = std::process::Command::new("sh")
.args(vec!["-c", &cmds])
.output()?;
Ok(String::from_utf8_lossy(&output.stdout).to_string())
}
#[cfg(not(feature = "flatpak"))]
fn run_loginctl(args: Option<Vec<&str>>) -> std::io::Result<std::process::Output> {
let mut cmd = std::process::Command::new("loginctl");
if let Some(a) = args {
return cmd.args(a).output();
}
cmd.output()
}
#[cfg(feature = "flatpak")]
fn run_loginctl(args: Option<Vec<&str>>) -> std::io::Result<std::process::Output> {
let mut l_args = String::from("loginctl");
if let Some(a) = args {
l_args = format!("{} {}", l_args, a.join(" "));
}
std::process::Command::new("flatpak-spawn")
.args(vec![String::from("--host"), l_args])
.output()
}

View File

@@ -1,2 +0,0 @@
#[cfg(target_os = "linux")]
pub mod linux;

View File

@@ -1 +0,0 @@
include!(concat!(env!("OUT_DIR"), "/protos/mod.rs"));

View File

@@ -1,135 +0,0 @@
use crate::{allow_err, anyhow::anyhow, ResultType};
use protobuf::Message;
use std::{net::SocketAddr, sync::Arc};
use tokio::{self, stream::StreamExt, sync::mpsc};
const QUIC_HBB: &[&[u8]] = &[b"hbb"];
const SERVER_NAME: &str = "hbb";
type Sender = mpsc::UnboundedSender<Value>;
type Receiver = mpsc::UnboundedReceiver<Value>;
pub fn new_server(socket: std::net::UdpSocket) -> ResultType<(Server, SocketAddr)> {
let mut transport_config = quinn::TransportConfig::default();
transport_config.stream_window_uni(0);
let mut server_config = quinn::ServerConfig::default();
server_config.transport = Arc::new(transport_config);
let mut server_config = quinn::ServerConfigBuilder::new(server_config);
server_config.protocols(QUIC_HBB);
// server_config.enable_keylog();
// server_config.use_stateless_retry(true);
let mut endpoint = quinn::Endpoint::builder();
endpoint.listen(server_config.build());
let (end, incoming) = endpoint.with_socket(socket)?;
Ok((Server { incoming }, end.local_addr()?))
}
pub async fn new_client(local_addr: &SocketAddr, peer: &SocketAddr) -> ResultType<Connection> {
let mut endpoint = quinn::Endpoint::builder();
let mut client_config = quinn::ClientConfigBuilder::default();
client_config.protocols(QUIC_HBB);
//client_config.enable_keylog();
endpoint.default_client_config(client_config.build());
let (endpoint, _) = endpoint.bind(local_addr)?;
let new_conn = endpoint.connect(peer, SERVER_NAME)?.await?;
Connection::new_for_client(new_conn.connection).await
}
pub struct Server {
incoming: quinn::Incoming,
}
impl Server {
#[inline]
pub async fn next(&mut self) -> ResultType<Option<Connection>> {
Connection::new_for_server(&mut self.incoming).await
}
}
pub struct Connection {
conn: quinn::Connection,
tx: quinn::SendStream,
rx: Receiver,
}
type Value = ResultType<Vec<u8>>;
impl Connection {
async fn new_for_server(incoming: &mut quinn::Incoming) -> ResultType<Option<Self>> {
if let Some(conn) = incoming.next().await {
let quinn::NewConnection {
connection: conn,
// uni_streams,
mut bi_streams,
..
} = conn.await?;
let (tx, rx) = mpsc::unbounded_channel::<Value>();
tokio::spawn(async move {
loop {
let stream = bi_streams.next().await;
if let Some(stream) = stream {
let stream = match stream {
Err(e) => {
tx.send(Err(e.into())).ok();
break;
}
Ok(s) => s,
};
let cloned = tx.clone();
tokio::spawn(async move {
allow_err!(handle_request(stream.1, cloned).await);
});
} else {
tx.send(Err(anyhow!("Reset by the peer"))).ok();
break;
}
}
log::info!("Exit connection outer loop");
});
let tx = conn.open_uni().await?;
Ok(Some(Self { conn, tx, rx }))
} else {
Ok(None)
}
}
async fn new_for_client(conn: quinn::Connection) -> ResultType<Self> {
let (tx, rx_quic) = conn.open_bi().await?;
let (tx_mpsc, rx) = mpsc::unbounded_channel::<Value>();
tokio::spawn(async move {
allow_err!(handle_request(rx_quic, tx_mpsc).await);
});
Ok(Self { conn, tx, rx })
}
#[inline]
pub async fn next(&mut self) -> Option<Value> {
// None is returned when all Sender halves have dropped,
// indicating that no further values can be sent on the channel.
self.rx.recv().await
}
#[inline]
pub fn remote_address(&self) -> SocketAddr {
self.conn.remote_address()
}
#[inline]
pub async fn send_raw(&mut self, bytes: &[u8]) -> ResultType<()> {
self.tx.write_all(bytes).await?;
Ok(())
}
#[inline]
pub async fn send(&mut self, msg: &dyn Message) -> ResultType<()> {
match msg.write_to_bytes() {
Ok(bytes) => self.send_raw(&bytes).await?,
err => allow_err!(err),
}
Ok(())
}
}
async fn handle_request(rx: quinn::RecvStream, tx: Sender) -> ResultType<()> {
Ok(())
}

View File

@@ -1,281 +0,0 @@
use crate::{
config::{Config, NetworkType},
tcp::FramedStream,
udp::FramedSocket,
ResultType,
};
use anyhow::Context;
use std::net::SocketAddr;
use tokio::net::ToSocketAddrs;
use tokio_socks::{IntoTargetAddr, TargetAddr};
#[inline]
pub fn check_port<T: std::string::ToString>(host: T, port: i32) -> String {
let host = host.to_string();
if crate::is_ipv6_str(&host) {
if host.starts_with('[') {
return host;
}
return format!("[{host}]:{port}");
}
if !host.contains(':') {
return format!("{host}:{port}");
}
host
}
#[inline]
pub fn increase_port<T: std::string::ToString>(host: T, offset: i32) -> String {
let host = host.to_string();
if crate::is_ipv6_str(&host) {
if host.starts_with('[') {
let tmp: Vec<&str> = host.split("]:").collect();
if tmp.len() == 2 {
let port: i32 = tmp[1].parse().unwrap_or(0);
if port > 0 {
return format!("{}]:{}", tmp[0], port + offset);
}
}
}
} else if host.contains(':') {
let tmp: Vec<&str> = host.split(':').collect();
if tmp.len() == 2 {
let port: i32 = tmp[1].parse().unwrap_or(0);
if port > 0 {
return format!("{}:{}", tmp[0], port + offset);
}
}
}
host
}
pub fn test_if_valid_server(host: &str) -> String {
let host = check_port(host, 0);
use std::net::ToSocketAddrs;
match Config::get_network_type() {
NetworkType::Direct => match host.to_socket_addrs() {
Err(err) => err.to_string(),
Ok(_) => "".to_owned(),
},
NetworkType::ProxySocks => match &host.into_target_addr() {
Err(err) => err.to_string(),
Ok(_) => "".to_owned(),
},
}
}
pub trait IsResolvedSocketAddr {
fn resolve(&self) -> Option<&SocketAddr>;
}
impl IsResolvedSocketAddr for SocketAddr {
fn resolve(&self) -> Option<&SocketAddr> {
Some(self)
}
}
impl IsResolvedSocketAddr for String {
fn resolve(&self) -> Option<&SocketAddr> {
None
}
}
impl IsResolvedSocketAddr for &str {
fn resolve(&self) -> Option<&SocketAddr> {
None
}
}
#[inline]
pub async fn connect_tcp<
't,
T: IntoTargetAddr<'t> + ToSocketAddrs + IsResolvedSocketAddr + std::fmt::Display,
>(
target: T,
ms_timeout: u64,
) -> ResultType<FramedStream> {
connect_tcp_local(target, None, ms_timeout).await
}
pub async fn connect_tcp_local<
't,
T: IntoTargetAddr<'t> + ToSocketAddrs + IsResolvedSocketAddr + std::fmt::Display,
>(
target: T,
local: Option<SocketAddr>,
ms_timeout: u64,
) -> ResultType<FramedStream> {
if let Some(conf) = Config::get_socks() {
return FramedStream::connect(
conf.proxy.as_str(),
target,
local,
conf.username.as_str(),
conf.password.as_str(),
ms_timeout,
)
.await;
}
if let Some(target) = target.resolve() {
if let Some(local) = local {
if local.is_ipv6() && target.is_ipv4() {
let target = query_nip_io(target).await?;
return FramedStream::new(target, Some(local), ms_timeout).await;
}
}
}
FramedStream::new(target, local, ms_timeout).await
}
#[inline]
pub fn is_ipv4(target: &TargetAddr<'_>) -> bool {
match target {
TargetAddr::Ip(addr) => addr.is_ipv4(),
_ => true,
}
}
#[inline]
pub async fn query_nip_io(addr: &SocketAddr) -> ResultType<SocketAddr> {
tokio::net::lookup_host(format!("{}.nip.io:{}", addr.ip(), addr.port()))
.await?
.find(|x| x.is_ipv6())
.context("Failed to get ipv6 from nip.io")
}
#[inline]
pub fn ipv4_to_ipv6(addr: String, ipv4: bool) -> String {
if !ipv4 && crate::is_ipv4_str(&addr) {
if let Some(ip) = addr.split(':').next() {
return addr.replace(ip, &format!("{ip}.nip.io"));
}
}
addr
}
async fn test_target(target: &str) -> ResultType<SocketAddr> {
if let Ok(Ok(s)) = super::timeout(1000, tokio::net::TcpStream::connect(target)).await {
if let Ok(addr) = s.peer_addr() {
return Ok(addr);
}
}
tokio::net::lookup_host(target)
.await?
.next()
.context(format!("Failed to look up host for {target}"))
}
#[inline]
pub async fn new_udp_for(
target: &str,
ms_timeout: u64,
) -> ResultType<(FramedSocket, TargetAddr<'static>)> {
let (ipv4, target) = if NetworkType::Direct == Config::get_network_type() {
let addr = test_target(target).await?;
(addr.is_ipv4(), addr.into_target_addr()?)
} else {
(true, target.into_target_addr()?)
};
Ok((
new_udp(Config::get_any_listen_addr(ipv4), ms_timeout).await?,
target.to_owned(),
))
}
async fn new_udp<T: ToSocketAddrs>(local: T, ms_timeout: u64) -> ResultType<FramedSocket> {
match Config::get_socks() {
None => Ok(FramedSocket::new(local).await?),
Some(conf) => {
let socket = FramedSocket::new_proxy(
conf.proxy.as_str(),
local,
conf.username.as_str(),
conf.password.as_str(),
ms_timeout,
)
.await?;
Ok(socket)
}
}
}
pub async fn rebind_udp_for(
target: &str,
) -> ResultType<Option<(FramedSocket, TargetAddr<'static>)>> {
if Config::get_network_type() != NetworkType::Direct {
return Ok(None);
}
let addr = test_target(target).await?;
let v4 = addr.is_ipv4();
Ok(Some((
FramedSocket::new(Config::get_any_listen_addr(v4)).await?,
addr.into_target_addr()?.to_owned(),
)))
}
#[cfg(test)]
mod tests {
use std::net::ToSocketAddrs;
use super::*;
#[test]
fn test_nat64() {
test_nat64_async();
}
#[tokio::main(flavor = "current_thread")]
async fn test_nat64_async() {
assert_eq!(ipv4_to_ipv6("1.1.1.1".to_owned(), true), "1.1.1.1");
assert_eq!(ipv4_to_ipv6("1.1.1.1".to_owned(), false), "1.1.1.1.nip.io");
assert_eq!(
ipv4_to_ipv6("1.1.1.1:8080".to_owned(), false),
"1.1.1.1.nip.io:8080"
);
assert_eq!(
ipv4_to_ipv6("rustdesk.com".to_owned(), false),
"rustdesk.com"
);
if ("rustdesk.com:80")
.to_socket_addrs()
.unwrap()
.next()
.unwrap()
.is_ipv6()
{
assert!(query_nip_io(&"1.1.1.1:80".parse().unwrap())
.await
.unwrap()
.is_ipv6());
return;
}
assert!(query_nip_io(&"1.1.1.1:80".parse().unwrap()).await.is_err());
}
#[test]
fn test_test_if_valid_server() {
assert!(!test_if_valid_server("a").is_empty());
// on Linux, "1" is resolved to "0.0.0.1"
assert!(test_if_valid_server("1.1.1.1").is_empty());
assert!(test_if_valid_server("1.1.1.1:1").is_empty());
}
#[test]
fn test_check_port() {
assert_eq!(check_port("[1:2]:12", 32), "[1:2]:12");
assert_eq!(check_port("1:2", 32), "[1:2]:32");
assert_eq!(check_port("z1:2", 32), "z1:2");
assert_eq!(check_port("1.1.1.1", 32), "1.1.1.1:32");
assert_eq!(check_port("1.1.1.1:32", 32), "1.1.1.1:32");
assert_eq!(check_port("test.com:32", 0), "test.com:32");
assert_eq!(increase_port("[1:2]:12", 1), "[1:2]:13");
assert_eq!(increase_port("1.2.2.4:12", 1), "1.2.2.4:13");
assert_eq!(increase_port("1.2.2.4", 1), "1.2.2.4");
assert_eq!(increase_port("test.com", 1), "test.com");
assert_eq!(increase_port("test.com:13", 4), "test.com:17");
assert_eq!(increase_port("1:13", 4), "1:13");
assert_eq!(increase_port("22:1:13", 4), "22:1:13");
assert_eq!(increase_port("z1:2", 1), "z1:3");
}
}

View File

@@ -1,325 +0,0 @@
use crate::{bail, bytes_codec::BytesCodec, ResultType};
use anyhow::Context as AnyhowCtx;
use bytes::{BufMut, Bytes, BytesMut};
use futures::{SinkExt, StreamExt};
use protobuf::Message;
use sodiumoxide::crypto::secretbox::{self, Key, Nonce};
use std::{
io::{self, Error, ErrorKind},
net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},
ops::{Deref, DerefMut},
pin::Pin,
task::{Context, Poll},
};
use tokio::{
io::{AsyncRead, AsyncWrite, ReadBuf},
net::{lookup_host, TcpListener, TcpSocket, ToSocketAddrs},
};
use tokio_socks::{tcp::Socks5Stream, IntoTargetAddr, ToProxyAddrs};
use tokio_util::codec::Framed;
pub trait TcpStreamTrait: AsyncRead + AsyncWrite + Unpin {}
pub struct DynTcpStream(Box<dyn TcpStreamTrait + Send + Sync>);
pub struct FramedStream(
Framed<DynTcpStream, BytesCodec>,
SocketAddr,
Option<(Key, u64, u64)>,
u64,
);
impl Deref for FramedStream {
type Target = Framed<DynTcpStream, BytesCodec>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for FramedStream {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl Deref for DynTcpStream {
type Target = Box<dyn TcpStreamTrait + Send + Sync>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl DerefMut for DynTcpStream {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
fn new_socket(addr: std::net::SocketAddr, reuse: bool) -> Result<TcpSocket, std::io::Error> {
let socket = match addr {
std::net::SocketAddr::V4(..) => TcpSocket::new_v4()?,
std::net::SocketAddr::V6(..) => TcpSocket::new_v6()?,
};
if reuse {
// Windows has no reuse_port, but its reuse_address is
// almost equivalent to Unix's reuse_port + reuse_address,
// though it may introduce nondeterministic behavior
#[cfg(unix)]
socket.set_reuseport(true)?;
socket.set_reuseaddr(true)?;
}
socket.bind(addr)?;
Ok(socket)
}
impl FramedStream {
pub async fn new<T: ToSocketAddrs + std::fmt::Display>(
remote_addr: T,
local_addr: Option<SocketAddr>,
ms_timeout: u64,
) -> ResultType<Self> {
for remote_addr in lookup_host(&remote_addr).await? {
let local = if let Some(addr) = local_addr {
addr
} else {
crate::config::Config::get_any_listen_addr(remote_addr.is_ipv4())
};
if let Ok(socket) = new_socket(local, true) {
if let Ok(Ok(stream)) =
super::timeout(ms_timeout, socket.connect(remote_addr)).await
{
stream.set_nodelay(true).ok();
let addr = stream.local_addr()?;
return Ok(Self(
Framed::new(DynTcpStream(Box::new(stream)), BytesCodec::new()),
addr,
None,
0,
));
}
}
}
bail!(format!("Failed to connect to {remote_addr}"));
}
pub async fn connect<'a, 't, P, T>(
proxy: P,
target: T,
local_addr: Option<SocketAddr>,
username: &'a str,
password: &'a str,
ms_timeout: u64,
) -> ResultType<Self>
where
P: ToProxyAddrs,
T: IntoTargetAddr<'t>,
{
if let Some(Ok(proxy)) = proxy.to_proxy_addrs().next().await {
let local = if let Some(addr) = local_addr {
addr
} else {
crate::config::Config::get_any_listen_addr(proxy.is_ipv4())
};
let stream =
super::timeout(ms_timeout, new_socket(local, true)?.connect(proxy)).await??;
stream.set_nodelay(true).ok();
let stream = if username.trim().is_empty() {
super::timeout(
ms_timeout,
Socks5Stream::connect_with_socket(stream, target),
)
.await??
} else {
super::timeout(
ms_timeout,
Socks5Stream::connect_with_password_and_socket(
stream, target, username, password,
),
)
.await??
};
let addr = stream.local_addr()?;
return Ok(Self(
Framed::new(DynTcpStream(Box::new(stream)), BytesCodec::new()),
addr,
None,
0,
));
}
bail!("could not resolve to any address");
}
pub fn local_addr(&self) -> SocketAddr {
self.1
}
pub fn set_send_timeout(&mut self, ms: u64) {
self.3 = ms;
}
pub fn from(stream: impl TcpStreamTrait + Send + Sync + 'static, addr: SocketAddr) -> Self {
Self(
Framed::new(DynTcpStream(Box::new(stream)), BytesCodec::new()),
addr,
None,
0,
)
}
pub fn set_raw(&mut self) {
self.0.codec_mut().set_raw();
self.2 = None;
}
pub fn is_secured(&self) -> bool {
self.2.is_some()
}
#[inline]
pub async fn send(&mut self, msg: &impl Message) -> ResultType<()> {
self.send_raw(msg.write_to_bytes()?).await
}
#[inline]
pub async fn send_raw(&mut self, msg: Vec<u8>) -> ResultType<()> {
let mut msg = msg;
if let Some(key) = self.2.as_mut() {
key.1 += 1;
let nonce = Self::get_nonce(key.1);
msg = secretbox::seal(&msg, &nonce, &key.0);
}
self.send_bytes(bytes::Bytes::from(msg)).await?;
Ok(())
}
#[inline]
pub async fn send_bytes(&mut self, bytes: Bytes) -> ResultType<()> {
if self.3 > 0 {
super::timeout(self.3, self.0.send(bytes)).await??;
} else {
self.0.send(bytes).await?;
}
Ok(())
}
#[inline]
pub async fn next(&mut self) -> Option<Result<BytesMut, Error>> {
let mut res = self.0.next().await;
if let Some(key) = self.2.as_mut() {
if let Some(Ok(bytes)) = res.as_mut() {
key.2 += 1;
let nonce = Self::get_nonce(key.2);
match secretbox::open(bytes, &nonce, &key.0) {
Ok(res) => {
bytes.clear();
bytes.put_slice(&res);
}
Err(()) => {
return Some(Err(Error::new(ErrorKind::Other, "decryption error")));
}
}
}
}
res
}
#[inline]
pub async fn next_timeout(&mut self, ms: u64) -> Option<Result<BytesMut, Error>> {
if let Ok(res) = super::timeout(ms, self.next()).await {
res
} else {
None
}
}
pub fn set_key(&mut self, key: Key) {
self.2 = Some((key, 0, 0));
}
fn get_nonce(seqnum: u64) -> Nonce {
let mut nonce = Nonce([0u8; secretbox::NONCEBYTES]);
nonce.0[..std::mem::size_of_val(&seqnum)].copy_from_slice(&seqnum.to_le_bytes());
nonce
}
}
const DEFAULT_BACKLOG: u32 = 128;
pub async fn new_listener<T: ToSocketAddrs>(addr: T, reuse: bool) -> ResultType<TcpListener> {
if !reuse {
Ok(TcpListener::bind(addr).await?)
} else {
let addr = lookup_host(&addr)
.await?
.next()
.context("could not resolve to any address")?;
new_socket(addr, true)?
.listen(DEFAULT_BACKLOG)
.map_err(anyhow::Error::msg)
}
}
pub async fn listen_any(port: u16) -> ResultType<TcpListener> {
if let Ok(mut socket) = TcpSocket::new_v6() {
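// Round-trip through socket2 to clear IPV6_V6ONLY so the IPv6 listener also accepts IPv4 connections.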
#[cfg(unix)]
{
use std::os::unix::io::{FromRawFd, IntoRawFd};
let raw_fd = socket.into_raw_fd();
let sock2 = unsafe { socket2::Socket::from_raw_fd(raw_fd) };
sock2.set_only_v6(false).ok();
socket = unsafe { TcpSocket::from_raw_fd(sock2.into_raw_fd()) };
}
#[cfg(windows)]
{
use std::os::windows::prelude::{FromRawSocket, IntoRawSocket};
let raw_socket = socket.into_raw_socket();
let sock2 = unsafe { socket2::Socket::from_raw_socket(raw_socket) };
sock2.set_only_v6(false).ok();
socket = unsafe { TcpSocket::from_raw_socket(sock2.into_raw_socket()) };
}
if socket
.bind(SocketAddr::new(IpAddr::V6(Ipv6Addr::UNSPECIFIED), port))
.is_ok()
{
if let Ok(l) = socket.listen(DEFAULT_BACKLOG) {
return Ok(l);
}
}
}
let s = TcpSocket::new_v4()?;
s.bind(SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), port))?;
Ok(s.listen(DEFAULT_BACKLOG)?)
}
impl Unpin for DynTcpStream {}
impl AsyncRead for DynTcpStream {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut ReadBuf<'_>,
) -> Poll<io::Result<()>> {
AsyncRead::poll_read(Pin::new(&mut self.0), cx, buf)
}
}
impl AsyncWrite for DynTcpStream {
fn poll_write(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
AsyncWrite::poll_write(Pin::new(&mut self.0), cx, buf)
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
AsyncWrite::poll_flush(Pin::new(&mut self.0), cx)
}
fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
AsyncWrite::poll_shutdown(Pin::new(&mut self.0), cx)
}
}
impl<R: AsyncRead + AsyncWrite + Unpin> TcpStreamTrait for R {}

View File

@@ -1,170 +0,0 @@
use crate::ResultType;
use anyhow::{anyhow, Context};
use bytes::{Bytes, BytesMut};
use futures::{SinkExt, StreamExt};
use protobuf::Message;
use socket2::{Domain, Socket, Type};
use std::net::SocketAddr;
use tokio::net::{lookup_host, ToSocketAddrs, UdpSocket};
use tokio_socks::{udp::Socks5UdpFramed, IntoTargetAddr, TargetAddr, ToProxyAddrs};
use tokio_util::{codec::BytesCodec, udp::UdpFramed};
pub enum FramedSocket {
Direct(UdpFramed<BytesCodec>),
ProxySocks(Socks5UdpFramed),
}
fn new_socket(addr: SocketAddr, reuse: bool, buf_size: usize) -> Result<Socket, std::io::Error> {
let socket = match addr {
SocketAddr::V4(..) => Socket::new(Domain::ipv4(), Type::dgram(), None),
SocketAddr::V6(..) => Socket::new(Domain::ipv6(), Type::dgram(), None),
}?;
if reuse {
// Windows has no reuse_port, but its reuse_address is
// almost equivalent to Unix's reuse_port + reuse_address,
// though it may introduce nondeterministic behavior
#[cfg(unix)]
socket.set_reuse_port(true)?;
socket.set_reuse_address(true)?;
}
// only nonblocking sockets work with tokio, see https://stackoverflow.com/questions/64649405/receiver-on-tokiompscchannel-only-receives-messages-when-buffer-is-full
socket.set_nonblocking(true)?;
if buf_size > 0 {
socket.set_recv_buffer_size(buf_size).ok();
}
log::info!(
"Receive buf size of udp {}: {:?}",
addr,
socket.recv_buffer_size()
);
if addr.is_ipv6() && addr.ip().is_unspecified() && addr.port() > 0 {
socket.set_only_v6(false).ok();
}
socket.bind(&addr.into())?;
Ok(socket)
}
impl FramedSocket {
pub async fn new<T: ToSocketAddrs>(addr: T) -> ResultType<Self> {
Self::new_reuse(addr, false, 0).await
}
pub async fn new_reuse<T: ToSocketAddrs>(
addr: T,
reuse: bool,
buf_size: usize,
) -> ResultType<Self> {
let addr = lookup_host(&addr)
.await?
.next()
.context("could not resolve to any address")?;
Ok(Self::Direct(UdpFramed::new(
UdpSocket::from_std(new_socket(addr, reuse, buf_size)?.into_udp_socket())?,
BytesCodec::new(),
)))
}
pub async fn new_proxy<'a, 't, P: ToProxyAddrs, T: ToSocketAddrs>(
proxy: P,
local: T,
username: &'a str,
password: &'a str,
ms_timeout: u64,
) -> ResultType<Self> {
let framed = if username.trim().is_empty() {
super::timeout(ms_timeout, Socks5UdpFramed::connect(proxy, Some(local))).await??
} else {
super::timeout(
ms_timeout,
Socks5UdpFramed::connect_with_password(proxy, Some(local), username, password),
)
.await??
};
log::trace!(
"Socks5 udp connected, local addr: {:?}, target addr: {}",
framed.local_addr(),
framed.socks_addr()
);
Ok(Self::ProxySocks(framed))
}
#[inline]
pub async fn send(
&mut self,
msg: &impl Message,
addr: impl IntoTargetAddr<'_>,
) -> ResultType<()> {
let addr = addr.into_target_addr()?.to_owned();
let send_data = Bytes::from(msg.write_to_bytes()?);
match self {
Self::Direct(f) => {
if let TargetAddr::Ip(addr) = addr {
f.send((send_data, addr)).await?
}
}
Self::ProxySocks(f) => f.send((send_data, addr)).await?,
};
Ok(())
}
// https://stackoverflow.com/a/68733302/1926020
#[inline]
pub async fn send_raw(
&mut self,
msg: &'static [u8],
addr: impl IntoTargetAddr<'static>,
) -> ResultType<()> {
let addr = addr.into_target_addr()?.to_owned();
match self {
Self::Direct(f) => {
if let TargetAddr::Ip(addr) = addr {
f.send((Bytes::from(msg), addr)).await?
}
}
Self::ProxySocks(f) => f.send((Bytes::from(msg), addr)).await?,
};
Ok(())
}
#[inline]
pub async fn next(&mut self) -> Option<ResultType<(BytesMut, TargetAddr<'static>)>> {
match self {
Self::Direct(f) => match f.next().await {
Some(Ok((data, addr))) => {
Some(Ok((data, addr.into_target_addr().ok()?.to_owned())))
}
Some(Err(e)) => Some(Err(anyhow!(e))),
None => None,
},
Self::ProxySocks(f) => match f.next().await {
Some(Ok((data, _))) => Some(Ok((data.data, data.dst_addr))),
Some(Err(e)) => Some(Err(anyhow!(e))),
None => None,
},
}
}
#[inline]
pub async fn next_timeout(
&mut self,
ms: u64,
) -> Option<ResultType<(BytesMut, TargetAddr<'static>)>> {
if let Ok(res) =
tokio::time::timeout(std::time::Duration::from_millis(ms), self.next()).await
{
res
} else {
None
}
}
pub fn local_addr(&self) -> Option<SocketAddr> {
if let FramedSocket::Direct(x) = self {
if let Ok(v) = x.get_ref().local_addr() {
return Some(v);
}
}
None
}
}

65
rcd/rustdesk-hbbr Normal file
View File

@@ -0,0 +1,65 @@
#!/bin/sh
# PROVIDE: rustdesk_hbbr
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
# to enable this service:
#
# rustdesk_hbbr_enable (bool): Set to NO by default.
# Set it to YES to enable rustdesk_hbbr.
# rustdesk_hbbr_args (string): Set extra arguments to pass to rustdesk_hbbr
# Default is "-k _".
# rustdesk_hbbr_user (string): Set user that rustdesk_hbbr will run under
# Default is "root".
# rustdesk_hbbr_group (string): Set group that rustdesk_hbbr will run under
# Default is "wheel".
. /etc/rc.subr
name=rustdesk_hbbr
desc="Rustdesk Relay Server"
rcvar=rustdesk_hbbr_enable
load_rc_config $name
: ${rustdesk_hbbr_enable:=NO}
: ${rustdesk_hbbr_args="-k _"}
: ${rustdesk_hbbr_user:=rustdesk}
: ${rustdesk_hbbr_group:=rustdesk}
pidfile=/var/run/rustdesk_hbbr.pid
command=/usr/sbin/daemon
procname=/usr/local/sbin/hbbr
rustdesk_hbbr_chdir=/var/db/rustdesk-server
command_args="-p ${pidfile} -o /var/log/rustdesk-hbbr.log ${procname} ${rustdesk_hbbr_args}"
## If you want the daemon to do its logging over syslog, comment out the above line and remove the comment from the replacement below
#command_args="-p ${pidfile} -T ${name} ${procname} ${rustdesk_hbbr_args}"
start_precmd=rustdesk_hbbr_startprecmd
rustdesk_hbbr_startprecmd()
{
if [ -e ${pidfile} ]; then
chown ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} ${pidfile};
else
install -o ${rustdesk_hbbr_user} -g ${rustdesk_hbbr_group} /dev/null ${pidfile};
fi
if [ -e ${rustdesk_hbbr_chdir} ]; then
chown -R ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} ${rustdesk_hbbr_chdir};
chmod -R 770 ${rustdesk_hbbr_chdir};
else
mkdir -m 770 ${rustdesk_hbbr_chdir};
chown ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} ${rustdesk_hbbr_chdir};
fi
if [ -e /var/log/rustdesk-hbbr.log ]; then
chown -R ${rustdesk_hbbr_user}:${rustdesk_hbbr_group} /var/log/rustdesk-hbbr.log;
chmod 660 /var/log/rustdesk-hbbr.log;
else
install -o ${rustdesk_hbbr_user} -g ${rustdesk_hbbr_group} /dev/null /var/log/rustdesk-hbbr.log;
chmod 660 /var/log/rustdesk-hbbr.log;
fi
}
run_rc_command "$1"

68
rcd/rustdesk-hbbs Normal file
View File

@@ -0,0 +1,68 @@
#!/bin/sh
# PROVIDE: rustdesk_hbbs
# REQUIRE: LOGIN
# KEYWORD: shutdown
#
# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
# to enable this service:
#
# rustdesk_hbbs_enable (bool): Set to NO by default.
# Set it to YES to enable rustdesk_hbbs.
# rustdesk_hbbs_ip (string): Set IP address/hostname of relay server to use
# Defaults to "127.0.0.1", please replace with your server hostname/IP.
# rustdesk_hbbs_args (string): Set extra arguments to pass to rustdesk_hbbs
# Default is "-r ${rustdesk_hbbs_ip} -k _".
# rustdesk_hbbs_user (string): Set user that rustdesk_hbbs will run under
# Default is "root".
# rustdesk_hbbs_group (string): Set group that rustdesk_hbbs will run under
# Default is "wheel".
. /etc/rc.subr
name=rustdesk_hbbs
desc="Rustdesk ID/Rendezvous Server"
rcvar=rustdesk_hbbs_enable
load_rc_config $name
: ${rustdesk_hbbs_enable:=NO}
: ${rustdesk_hbbs_ip:=127.0.0.1}
: ${rustdesk_hbbs_args="-r ${rustdesk_hbbs_ip} -k _"}
: ${rustdesk_hbbs_user:=rustdesk}
: ${rustdesk_hbbs_group:=rustdesk}
pidfile=/var/run/rustdesk_hbbs.pid
command=/usr/sbin/daemon
procname=/usr/local/sbin/hbbs
rustdesk_hbbs_chdir=/var/db/rustdesk-server
command_args="-p ${pidfile} -o /var/log/rustdesk-hbbs.log ${procname} ${rustdesk_hbbs_args}"
## If you want the daemon to do its logging over syslog, comment out the above line and remove the comment from the replacement below
#command_args="-p ${pidfile} -T ${name} ${procname} ${rustdesk_hbbs_args}"
start_precmd=rustdesk_hbbs_startprecmd
rustdesk_hbbs_startprecmd()
{
if [ -e ${pidfile} ]; then
chown ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} ${pidfile};
else
install -o ${rustdesk_hbbs_user} -g ${rustdesk_hbbs_group} /dev/null ${pidfile};
fi
if [ -e ${rustdesk_hbbs_chdir} ]; then
chown -R ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} ${rustdesk_hbbs_chdir};
chmod -R 770 ${rustdesk_hbbs_chdir};
else
mkdir -m 770 ${rustdesk_hbbs_chdir};
chown ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} ${rustdesk_hbbs_chdir};
fi
if [ -e /var/log/rustdesk-hbbs.log ]; then
chown -R ${rustdesk_hbbs_user}:${rustdesk_hbbs_group} /var/log/rustdesk-hbbs.log;
chmod 660 /var/log/rustdesk-hbbs.log;
else
install -o ${rustdesk_hbbs_user} -g ${rustdesk_hbbs_group} /dev/null /var/log/rustdesk-hbbs.log;
chmod 660 /var/log/rustdesk-hbbs.log;
fi
}
run_rc_command "$1"

View File

@@ -1,7 +1,6 @@
use clap::App;
use hbb_common::{
anyhow::{Context, Result},
log, ResultType,
allow_err, anyhow::{Context, Result}, get_version_number, log, tokio, ResultType
};
use ini::Ini;
use sodiumoxide::crypto::sign;
@@ -78,7 +77,7 @@ pub fn init_args(args: &str, name: &str, about: &str) {
}
}
for (k, v) in matches.args {
if let Some(v) = v.vals.get(0) {
if let Some(v) = v.vals.first() {
std::env::set_var(arg_name(k), v.to_string_lossy().to_string());
}
}
@@ -113,13 +112,18 @@ pub fn gen_sk(wait: u64) -> (String, Option<sign::SecretKey>) {
if let Ok(mut file) = std::fs::File::open(sk_file) {
let mut contents = String::new();
if file.read_to_string(&mut contents).is_ok() {
let sk = base64::decode(&contents).unwrap_or_default();
let contents = contents.trim();
let sk = base64::decode(contents).unwrap_or_default();
if sk.len() == sign::SECRETKEYBYTES {
let mut tmp = [0u8; sign::SECRETKEYBYTES];
tmp[..].copy_from_slice(&sk);
let pk = base64::encode(&tmp[sign::SECRETKEYBYTES / 2..]);
log::info!("Private key comes from {}", sk_file);
return (pk, Some(sign::SecretKey(tmp)));
} else {
// don't use log here, since it is async
println!("Fatal error: malformed private key in {sk_file}.");
std::process::exit(1);
}
}
} else {
@@ -156,8 +160,6 @@ pub async fn listen_signal() -> Result<()> {
use hbb_common::tokio::signal::unix::{signal, SignalKind};
tokio::spawn(async {
let mut s = signal(SignalKind::hangup())?;
let hangup = s.recv();
let mut s = signal(SignalKind::terminate())?;
let terminate = s.recv();
let mut s = signal(SignalKind::interrupt())?;
@@ -166,9 +168,6 @@ pub async fn listen_signal() -> Result<()> {
let quit = s.recv();
tokio::select! {
_ = hangup => {
log::info!("signal hangup");
}
_ = terminate => {
log::info!("signal terminate");
}
@@ -189,3 +188,31 @@ pub async fn listen_signal() -> Result<()> {
let () = std::future::pending().await;
unreachable!();
}
pub fn check_software_update() {
const ONE_DAY_IN_SECONDS: u64 = 60 * 60 * 24;
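// Spawn a detached thread that re-runs the check once per day; failures are logged and ignored via allow_err!.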
std::thread::spawn(move || loop {
std::thread::spawn(move || allow_err!(check_software_update_()));
std::thread::sleep(std::time::Duration::from_secs(ONE_DAY_IN_SECONDS));
});
}
#[tokio::main(flavor = "current_thread")]
async fn check_software_update_() -> hbb_common::ResultType<()> {
let (request, url) = hbb_common::version_check_request(hbb_common::VER_TYPE_RUSTDESK_SERVER.to_string());
let latest_release_response = reqwest::Client::builder().build()?
.post(url)
.json(&request)
.send()
.await?;
let bytes = latest_release_response.bytes().await?;
let resp: hbb_common::VersionCheckResponse = serde_json::from_slice(&bytes)?;
let response_url = resp.url;
let latest_release_version = response_url.rsplit('/').next().unwrap_or_default();
if get_version_number(&latest_release_version) > get_version_number(crate::version::VERSION) {
log::info!("new version is available: {}", latest_release_version);
}
Ok(())
}

View File

@@ -17,9 +17,9 @@ fn main() -> ResultType<()> {
"-c --config=[FILE] +takes_value 'Sets a custom config file'
-p, --port=[NUMBER(default={RENDEZVOUS_PORT})] 'Sets the listening port'
-s, --serial=[NUMBER(default=0)] 'Sets configure update serial number'
-R, --rendezvous-servers=[HOSTS] 'Sets rendezvous servers, seperated by colon'
-R, --rendezvous-servers=[HOSTS] 'Sets rendezvous servers, separated by comma'
-u, --software-url=[URL] 'Sets download url of RustDesk software of newest version'
-r, --relay-servers=[HOST] 'Sets the default relay servers, seperated by colon'
-r, --relay-servers=[HOST] 'Sets the default relay servers, separated by comma'
-M, --rmem=[NUMBER(default={RMEM})] 'Sets UDP recv buffer size, set system rmem_max first, e.g., sudo sysctl -w net.core.rmem_max=52428800. vi /etc/sysctl.conf, net.core.rmem_max=52428800, sudo sysctl -p'
, --mask=[MASK] 'Determine if the connection comes from LAN, e.g. 192.168.0.0/16'
-k, --key=[KEY] 'Only allow the client with the same key'",
@@ -31,6 +31,7 @@ fn main() -> ResultType<()> {
}
let rmem = get_arg("rmem").parse::<usize>().unwrap_or(RMEM);
let serial: i32 = get_arg("serial").parse().unwrap_or(0);
RendezvousServer::start(port, serial, &get_arg("key"), rmem)?;
crate::common::check_software_update();
RendezvousServer::start(port, serial, &get_arg_or("key", "-".to_owned()), rmem)?;
Ok(())
}

View File

@@ -18,10 +18,10 @@ lazy_static::lazy_static! {
pub(crate) static ref USER_STATUS: RwLock<UserStatusMap> = Default::default();
pub(crate) static ref IP_CHANGES: Mutex<IpChangesMap> = Default::default();
}
pub static IP_CHANGE_DUR: u64 = 180;
pub static IP_CHANGE_DUR_X2: u64 = IP_CHANGE_DUR * 2;
pub static DAY_SECONDS: u64 = 3600 * 24;
pub static IP_BLOCK_DUR: u64 = 60;
pub const IP_CHANGE_DUR: u64 = 180;
pub const IP_CHANGE_DUR_X2: u64 = IP_CHANGE_DUR * 2;
pub const DAY_SECONDS: u64 = 3600 * 24;
pub const IP_BLOCK_DUR: u64 = 60;
#[derive(Debug, Default, Serialize, Deserialize, Clone)]
pub(crate) struct PeerInfo {

View File

@@ -25,6 +25,7 @@ use std::{
io::prelude::*,
io::Error,
net::SocketAddr,
sync::atomic::{AtomicUsize, Ordering},
};
type Usage = (usize, usize, usize, usize);
@@ -36,11 +37,11 @@ lazy_static::lazy_static! {
static ref BLOCKLIST: RwLock<HashSet<String>> = Default::default();
}
static mut DOWNGRADE_THRESHOLD: f64 = 0.66;
static mut DOWNGRADE_START_CHECK: usize = 1_800_000; // in ms
static mut LIMIT_SPEED: usize = 4 * 1024 * 1024; // in bit/s
static mut TOTAL_BANDWIDTH: usize = 1024 * 1024 * 1024; // in bit/s
static mut SINGLE_BANDWIDTH: usize = 16 * 1024 * 1024; // in bit/s
static DOWNGRADE_THRESHOLD_100: AtomicUsize = AtomicUsize::new(66); // 0.66
static DOWNGRADE_START_CHECK: AtomicUsize = AtomicUsize::new(1_800_000); // in ms
static LIMIT_SPEED: AtomicUsize = AtomicUsize::new(32 * 1024 * 1024); // in bit/s
static TOTAL_BANDWIDTH: AtomicUsize = AtomicUsize::new(1024 * 1024 * 1024); // in bit/s
static SINGLE_BANDWIDTH: AtomicUsize = AtomicUsize::new(128 * 1024 * 1024); // in bit/s
const BLACKLIST_FILE: &str = "blacklist.txt";
const BLOCKLIST_FILE: &str = "blocklist.txt";
@@ -99,57 +100,53 @@ fn check_params() {
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
DOWNGRADE_THRESHOLD = tmp;
}
DOWNGRADE_THRESHOLD_100.store((tmp * 100.) as _, Ordering::SeqCst);
}
unsafe { log::info!("DOWNGRADE_THRESHOLD: {}", DOWNGRADE_THRESHOLD) };
log::info!(
"DOWNGRADE_THRESHOLD: {}",
DOWNGRADE_THRESHOLD_100.load(Ordering::SeqCst) as f64 / 100.
);
let tmp = std::env::var("DOWNGRADE_START_CHECK")
.map(|x| x.parse::<usize>().unwrap_or(0))
.unwrap_or(0);
if tmp > 0 {
unsafe {
DOWNGRADE_START_CHECK = tmp * 1000;
}
DOWNGRADE_START_CHECK.store(tmp * 1000, Ordering::SeqCst);
}
unsafe { log::info!("DOWNGRADE_START_CHECK: {}s", DOWNGRADE_START_CHECK / 1000) };
log::info!(
"DOWNGRADE_START_CHECK: {}s",
DOWNGRADE_START_CHECK.load(Ordering::SeqCst) / 1000
);
let tmp = std::env::var("LIMIT_SPEED")
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
LIMIT_SPEED = (tmp * 1024. * 1024.) as usize;
}
LIMIT_SPEED.store((tmp * 1024. * 1024.) as usize, Ordering::SeqCst);
}
unsafe { log::info!("LIMIT_SPEED: {}Mb/s", LIMIT_SPEED as f64 / 1024. / 1024.) };
log::info!(
"LIMIT_SPEED: {}Mb/s",
LIMIT_SPEED.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
let tmp = std::env::var("TOTAL_BANDWIDTH")
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
TOTAL_BANDWIDTH = (tmp * 1024. * 1024.) as usize;
}
TOTAL_BANDWIDTH.store((tmp * 1024. * 1024.) as usize, Ordering::SeqCst);
}
unsafe {
log::info!(
"TOTAL_BANDWIDTH: {}Mb/s",
TOTAL_BANDWIDTH as f64 / 1024. / 1024.
)
};
log::info!(
"TOTAL_BANDWIDTH: {}Mb/s",
TOTAL_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
let tmp = std::env::var("SINGLE_BANDWIDTH")
.map(|x| x.parse::<f64>().unwrap_or(0.))
.unwrap_or(0.);
if tmp > 0. {
unsafe {
SINGLE_BANDWIDTH = (tmp * 1024. * 1024.) as usize;
}
SINGLE_BANDWIDTH.store((tmp * 1024. * 1024.) as usize, Ordering::SeqCst);
}
unsafe {
log::info!(
"SINGLE_BANDWIDTH: {}Mb/s",
SINGLE_BANDWIDTH as f64 / 1024. / 1024.
)
};
log::info!(
"SINGLE_BANDWIDTH: {}Mb/s",
SINGLE_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
)
}
async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
@@ -233,76 +230,68 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
DOWNGRADE_THRESHOLD = v;
}
DOWNGRADE_THRESHOLD_100.store((v * 100.) as _, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{DOWNGRADE_THRESHOLD}\n");
}
res = format!(
"{}\n",
DOWNGRADE_THRESHOLD_100.load(Ordering::SeqCst) as f64 / 100.
);
}
}
Some("downgrade-start-check" | "t") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<usize>() {
if v > 0 {
unsafe {
DOWNGRADE_START_CHECK = v * 1000;
}
DOWNGRADE_START_CHECK.store(v * 1000, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{}s\n", DOWNGRADE_START_CHECK / 1000);
}
res = format!("{}s\n", DOWNGRADE_START_CHECK.load(Ordering::SeqCst) / 1000);
}
}
Some("limit-speed" | "ls") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
LIMIT_SPEED = (v * 1024. * 1024.) as _;
}
LIMIT_SPEED.store((v * 1024. * 1024.) as _, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{}Mb/s\n", LIMIT_SPEED as f64 / 1024. / 1024.);
}
res = format!(
"{}Mb/s\n",
LIMIT_SPEED.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
}
}
Some("total-bandwidth" | "tb") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
TOTAL_BANDWIDTH = (v * 1024. * 1024.) as _;
limiter.set_speed_limit(TOTAL_BANDWIDTH as _);
}
TOTAL_BANDWIDTH.store((v * 1024. * 1024.) as _, Ordering::SeqCst);
limiter.set_speed_limit(TOTAL_BANDWIDTH.load(Ordering::SeqCst) as _);
}
}
} else {
unsafe {
res = format!("{}Mb/s\n", TOTAL_BANDWIDTH as f64 / 1024. / 1024.);
}
res = format!(
"{}Mb/s\n",
TOTAL_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
}
}
Some("single-bandwidth" | "sb") => {
if let Some(v) = fds.next() {
if let Ok(v) = v.parse::<f64>() {
if v > 0. {
unsafe {
SINGLE_BANDWIDTH = (v * 1024. * 1024.) as _;
}
SINGLE_BANDWIDTH.store((v * 1024. * 1024.) as _, Ordering::SeqCst);
}
}
} else {
unsafe {
res = format!("{}Mb/s\n", SINGLE_BANDWIDTH as f64 / 1024. / 1024.);
}
res = format!(
"{}Mb/s\n",
SINGLE_BANDWIDTH.load(Ordering::SeqCst) as f64 / 1024. / 1024.
);
}
}
Some("usage" | "u") => {
@@ -336,7 +325,7 @@ async fn check_cmd(cmd: &str, limiter: Limiter) -> String {
async fn io_loop(listener: TcpListener, listener2: TcpListener, key: &str) {
check_params();
let limiter = <Limiter>::new(unsafe { TOTAL_BANDWIDTH as _ });
let limiter = <Limiter>::new(TOTAL_BANDWIDTH.load(Ordering::SeqCst) as _);
loop {
tokio::select! {
res = listener.accept() => {
@@ -374,12 +363,12 @@ async fn handle_connection(
key: &str,
ws: bool,
) {
let ip = addr.ip().to_string();
if !ws && ip == "127.0.0.1" {
let ip = hbb_common::try_into_v4(addr).ip();
if !ws && ip.is_loopback() {
let limiter = limiter.clone();
tokio::spawn(async move {
let mut stream = stream;
let mut buffer = [0; 64];
let mut buffer = [0; 1024];
if let Ok(Ok(n)) = timeout(1000, stream.read(&mut buffer[..])).await {
if let Ok(data) = std::str::from_utf8(&buffer[..n]) {
let res = check_cmd(data, limiter).await;
@@ -389,6 +378,7 @@ async fn handle_connection(
});
return;
}
let ip = ip.to_string();
if BLOCKLIST.read().await.get(&ip).is_some() {
log::info!("{} blocked", ip);
return;
@@ -402,19 +392,30 @@ async fn handle_connection(
async fn make_pair(
stream: TcpStream,
addr: SocketAddr,
mut addr: SocketAddr,
key: &str,
limiter: Limiter,
ws: bool,
) -> ResultType<()> {
if ws {
make_pair_(
tokio_tungstenite::accept_async(stream).await?,
addr,
key,
limiter,
)
.await;
use tokio_tungstenite::tungstenite::handshake::server::{Request, Response};
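// Behind a websocket reverse proxy the TCP peer address belongs to the proxy, so prefer the
// client IP from the X-Real-IP / X-Forwarded-For headers when they are present.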
let callback = |req: &Request, response: Response| {
let headers = req.headers();
let real_ip = headers
.get("X-Real-IP")
.or_else(|| headers.get("X-Forwarded-For"))
.and_then(|header_value| header_value.to_str().ok());
if let Some(ip) = real_ip {
if ip.contains('.') {
addr = format!("{ip}:0").parse().unwrap_or(addr);
} else {
addr = format!("[{ip}]:0").parse().unwrap_or(addr);
}
}
Ok(response)
};
let ws_stream = tokio_tungstenite::accept_hdr_async(stream, callback).await?;
make_pair_(ws_stream, addr, key, limiter).await;
} else {
make_pair_(FramedStream::from(stream, addr), addr, key, limiter).await;
}
@@ -427,6 +428,7 @@ async fn make_pair_(stream: impl StreamTrait, addr: SocketAddr, key: &str, limit
if let Ok(msg_in) = RendezvousMessage::parse_from_bytes(&bytes) {
if let Some(rendezvous_message::Union::RequestRelay(rf)) = msg_in.union {
if !key.is_empty() && rf.licence_key != key {
log::warn!("Relay authentication failed from {} - invalid key", addr);
return;
}
if !rf.uuid.is_empty() {
@@ -474,10 +476,11 @@ async fn relay(
let mut highest_s = 0;
let mut downgrade: bool = false;
let mut blacked: bool = false;
let limiter = <Limiter>::new(unsafe { SINGLE_BANDWIDTH as _ });
let blacklist_limiter = <Limiter>::new(unsafe { LIMIT_SPEED as _ });
let sb = SINGLE_BANDWIDTH.load(Ordering::SeqCst) as f64;
let limiter = <Limiter>::new(sb);
let blacklist_limiter = <Limiter>::new(LIMIT_SPEED.load(Ordering::SeqCst) as _);
let downgrade_threshold =
(unsafe { SINGLE_BANDWIDTH as f64 * DOWNGRADE_THRESHOLD } / 1000.) as usize; // in bit/ms
(sb * DOWNGRADE_THRESHOLD_100.load(Ordering::SeqCst) as f64 / 100. / 1000.) as usize; // in bit/ms
let mut timer = interval(Duration::from_secs(3));
let mut last_recv_time = std::time::Instant::now();
loop {
@@ -545,7 +548,7 @@ async fn relay(
(elapsed as _, total as _, highest_s as _, speed as _),
);
total_s = 0;
if elapsed > unsafe { DOWNGRADE_START_CHECK }
if elapsed > DOWNGRADE_START_CHECK.load(Ordering::SeqCst)
&& !downgrade
&& total > elapsed * downgrade_threshold
{

View File

@@ -1,7 +1,7 @@
use crate::common::*;
use crate::peer::*;
use hbb_common::{
allow_err,
allow_err, bail,
bytes::{Bytes, BytesMut},
bytes_codec::BytesCodec,
config,
@@ -35,10 +35,10 @@ use sodiumoxide::crypto::sign;
use std::{
collections::HashMap,
net::{IpAddr, Ipv4Addr, Ipv6Addr, SocketAddr},
sync::atomic::{AtomicBool, AtomicUsize, Ordering},
sync::Arc,
time::Instant,
};
const ADDR_127: IpAddr = IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1));
#[derive(Clone, Debug)]
enum Data {
@@ -56,10 +56,18 @@ enum Sink {
}
type Sender = mpsc::UnboundedSender<Data>;
type Receiver = mpsc::UnboundedReceiver<Data>;
static mut ROTATION_RELAY_SERVER: usize = 0;
static ROTATION_RELAY_SERVER: AtomicUsize = AtomicUsize::new(0);
type RelayServers = Vec<String>;
static CHECK_RELAY_TIMEOUT: u64 = 3_000;
static mut ALWAYS_USE_RELAY: bool = false;
const CHECK_RELAY_TIMEOUT: u64 = 3_000;
static ALWAYS_USE_RELAY: AtomicBool = AtomicBool::new(false);
// Store punch hole requests
use once_cell::sync::Lazy;
use tokio::sync::Mutex as TokioMutex; // async-aware mutex, aliased to avoid clashing with other Mutex imports
#[derive(Clone)]
struct PunchReqEntry { tm: Instant, from_ip: String, to_ip: String, to_id: String }
static PUNCH_REQS: Lazy<TokioMutex<Vec<PunchReqEntry>>> = Lazy::new(|| TokioMutex::new(Vec::new()));
const PUNCH_REQ_DEDUPE_SEC: u64 = 60;
#[derive(Clone)]
struct Inner {
@@ -93,7 +101,6 @@ impl RendezvousServer {
#[tokio::main(flavor = "multi_thread")]
pub async fn start(port: i32, serial: i32, key: &str, rmem: usize) -> ResultType<()> {
let (key, sk) = Self::get_server_sk(key);
let addr = format!("0.0.0.0:{port}");
let nat_port = port - 1;
let ws_port = port + 2;
let pm = PeerMap::new().await?;
@@ -149,27 +156,36 @@ impl RendezvousServer {
.to_uppercase()
== "Y"
{
unsafe {
ALWAYS_USE_RELAY = true;
}
ALWAYS_USE_RELAY.store(true, Ordering::SeqCst);
}
log::info!(
"ALWAYS_USE_RELAY={}",
if unsafe { ALWAYS_USE_RELAY } {
if ALWAYS_USE_RELAY.load(Ordering::SeqCst) {
"Y"
} else {
"N"
}
);
if test_addr.to_lowercase() != "no" {
let test_addr = (if test_addr.is_empty() {
addr.replace("0.0.0.0", "127.0.0.1")
let test_addr = if test_addr.is_empty() {
listener.local_addr()?
} else {
test_addr
})
.parse::<SocketAddr>()?;
test_addr.parse()?
};
tokio::spawn(async move {
allow_err!(test_hbbs(test_addr).await);
if let Err(err) = test_hbbs(test_addr).await {
if test_addr.is_ipv6() && test_addr.ip().is_unspecified() {
let mut test_addr = test_addr;
test_addr.set_ip(IpAddr::V4(Ipv4Addr::UNSPECIFIED));
if let Err(err) = test_hbbs(test_addr).await {
log::error!("Failed to run hbbs test with {test_addr}: {err}");
std::process::exit(1);
}
} else {
log::error!("Failed to run hbbs test with {test_addr}: {err}");
std::process::exit(1);
}
}
});
};
let main_task = async move {
@@ -427,7 +443,7 @@ impl RendezvousServer {
self.handle_local_addr(la, addr, Some(socket)).await?;
}
Some(rendezvous_message::Union::ConfigureUpdate(mut cu)) => {
if addr.ip() == ADDR_127 && cu.serial > self.inner.serial {
if try_into_v4(addr).ip().is_loopback() && cu.serial > self.inner.serial {
let mut inner: Inner = (*self.inner).clone();
inner.serial = cu.serial;
self.inner = Arc::new(inner);
@@ -566,7 +582,7 @@ impl RendezvousServer {
ip != old.socket_addr.ip()
} else {
ip.to_string() != old.info.ip
} && ip != ADDR_127;
} && !ip.is_loopback();
let request_pk = old.pk.is_empty() || ip_change;
if !request_pk {
old.socket_addr = socket_addr;
@@ -672,6 +688,7 @@ impl RendezvousServer {
) -> ResultType<(RendezvousMessage, Option<SocketAddr>)> {
let mut ph = ph;
if !key.is_empty() && ph.licence_key != key {
log::warn!("Authentication failed from {} for peer {} - invalid key", addr, ph.id);
let mut msg_out = RendezvousMessage::new();
msg_out.set_punch_hole_response(PunchHoleResponse {
failure: punch_hole_response::Failure::LICENSE_MISMATCH.into(),
@@ -698,28 +715,42 @@ impl RendezvousServer {
});
return Ok((msg_out, None));
}
// record punch hole request (from addr -> peer id/peer_addr)
{
let from_ip = try_into_v4(addr).ip().to_string();
let to_ip = try_into_v4(peer_addr).ip().to_string();
let to_id_clone = id.clone();
let mut lock = PUNCH_REQS.lock().await;
let mut dup = false;
for e in lock.iter().rev().take(30) { // only check recent tail subset for speed
if e.from_ip == from_ip && e.to_id == to_id_clone {
if e.tm.elapsed().as_secs() < PUNCH_REQ_DEDUPE_SEC { dup = true; }
break;
}
}
if !dup { lock.push(PunchReqEntry { tm: Instant::now(), from_ip, to_ip, to_id: to_id_clone }); }
}
let mut msg_out = RendezvousMessage::new();
let peer_is_lan = self.is_lan(peer_addr);
let is_lan = self.is_lan(addr);
let mut relay_server = self.get_relay_server(addr.ip(), peer_addr.ip());
if unsafe { ALWAYS_USE_RELAY } || (peer_is_lan ^ is_lan) {
if ALWAYS_USE_RELAY.load(Ordering::SeqCst) || (peer_is_lan ^ is_lan) {
if peer_is_lan {
// https://github.com/rustdesk/rustdesk-server/issues/24
relay_server = self.inner.local_ip.clone()
}
ph.nat_type = NatType::SYMMETRIC.into(); // will force relay
}
let same_intranet = !ws
&& match peer_addr {
SocketAddr::V4(a) => match addr {
SocketAddr::V4(b) => a.ip() == b.ip(),
let same_intranet: bool = !ws
&& (peer_is_lan && is_lan || {
match (peer_addr, addr) {
(SocketAddr::V4(a), SocketAddr::V4(b)) => a.ip() == b.ip(),
(SocketAddr::V6(a), SocketAddr::V6(b)) => a.ip() == b.ip(),
_ => false,
},
SocketAddr::V6(a) => match addr {
SocketAddr::V6(b) => a.ip() == b.ip(),
_ => false,
},
};
}
});
let socket_addr = AddrMangle::encode(addr).into();
if same_intranet {
log::debug!(
@@ -899,10 +930,7 @@ impl RendezvousServer {
} else if self.relay_servers.len() == 1 {
return self.relay_servers[0].clone();
}
let i = unsafe {
ROTATION_RELAY_SERVER += 1;
ROTATION_RELAY_SERVER % self.relay_servers.len()
};
let i = ROTATION_RELAY_SERVER.fetch_add(1, Ordering::SeqCst) % self.relay_servers.len();
self.relay_servers[i].clone()
}
@@ -914,11 +942,12 @@ impl RendezvousServer {
match fds.next() {
Some("h") => {
res = format!(
"{}\n{}\n{}\n{}\n{}\n{}\n",
"{}\n{}\n{}\n{}\n{}\n{}\n{}\n",
"relay-servers(rs) <separated by ,>",
"reload-geo(rg)",
"ip-blocker(ib) [<ip>|<number>] [-]",
"ip-changes(ic) [<id>|<number>] [-]",
"punch-requests(pr) [<number>] [-]",
"always-use-relay(aur)",
"test-geo(tg) <ip1> <ip2>"
)
@@ -1018,16 +1047,38 @@ impl RendezvousServer {
}
}
}
Some("punch-requests" | "pr") => {
use std::fmt::Write as _;
let mut lock = PUNCH_REQS.lock().await;
let arg = fds.next();
if let Some("-") = arg { lock.clear(); }
else {
let start = arg.and_then(|x| x.parse::<usize>().ok()).unwrap_or(0);
let mut page_size = fds.next().and_then(|x| x.parse::<usize>().ok()).unwrap_or(10);
if page_size == 0 { page_size = 10; }
for e in lock.iter().skip(start).take(page_size) {
let age = e.tm.elapsed();
let event_system = std::time::SystemTime::now() - age;
let event_iso = chrono::DateTime::<chrono::Utc>::from(event_system)
.to_rfc3339_opts(chrono::SecondsFormat::Secs, true);
let _ = writeln!(res, "{} {} -> {}@{}", event_iso, e.from_ip, e.to_id, e.to_ip);
}
}
}
Some("always-use-relay" | "aur") => {
if let Some(rs) = fds.next() {
if rs.to_uppercase() == "Y" {
unsafe { ALWAYS_USE_RELAY = true };
ALWAYS_USE_RELAY.store(true, Ordering::SeqCst);
} else {
unsafe { ALWAYS_USE_RELAY = false };
ALWAYS_USE_RELAY.store(false, Ordering::SeqCst);
}
self.tx.send(Data::RelayServers0(rs.to_owned())).ok();
} else {
let _ = writeln!(res, "ALWAYS_USE_RELAY: {:?}", unsafe { ALWAYS_USE_RELAY });
let _ = writeln!(
res,
"ALWAYS_USE_RELAY: {:?}",
ALWAYS_USE_RELAY.load(Ordering::SeqCst)
);
}
}
Some("test-geo" | "tg") => {
@@ -1050,10 +1101,11 @@ impl RendezvousServer {
async fn handle_listener2(&self, stream: TcpStream, addr: SocketAddr) {
let mut rs = self.clone();
if addr.ip().is_loopback() {
let ip = try_into_v4(addr).ip();
if ip.is_loopback() {
tokio::spawn(async move {
let mut stream = stream;
let mut buffer = [0; 64];
let mut buffer = [0; 1024];
if let Ok(Ok(n)) = timeout(1000, stream.read(&mut buffer[..])).await {
if let Ok(data) = std::str::from_utf8(&buffer[..n]) {
let res = rs.check_cmd(data).await;
@@ -1100,13 +1152,29 @@ impl RendezvousServer {
async fn handle_listener_inner(
&mut self,
stream: TcpStream,
addr: SocketAddr,
mut addr: SocketAddr,
key: &str,
ws: bool,
) -> ResultType<()> {
let mut sink;
if ws {
let ws_stream = tokio_tungstenite::accept_async(stream).await?;
use tokio_tungstenite::tungstenite::handshake::server::{Request, Response};
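// As in the relay path, recover the real client IP from proxy headers for websocket connections.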
let callback = |req: &Request, response: Response| {
let headers = req.headers();
let real_ip = headers
.get("X-Real-IP")
.or_else(|| headers.get("X-Forwarded-For"))
.and_then(|header_value| header_value.to_str().ok());
if let Some(ip) = real_ip {
if ip.contains('.') {
addr = format!("{ip}:0").parse().unwrap_or(addr);
} else {
addr = format!("[{ip}]:0").parse().unwrap_or(addr);
}
}
Ok(response)
};
let ws_stream = tokio_tungstenite::accept_hdr_async(stream, callback).await?;
let (a, mut b) = ws_stream.split();
sink = Some(Sink::Ws(a));
while let Ok(Some(Ok(msg))) = timeout(30_000, b.next()).await {
@@ -1176,14 +1244,11 @@ impl RendezvousServer {
out_sk = sk;
if !key.is_empty() {
key = pk;
} else {
std::env::set_var("KEY_FOR_API", pk);
}
}
if !key.is_empty() {
log::info!("Key: {}", key);
std::env::set_var("KEY_FOR_API", key.clone());
}
(key, out_sk)
}
@@ -1191,8 +1256,16 @@ impl RendezvousServer {
#[inline]
fn is_lan(&self, addr: SocketAddr) -> bool {
if let Some(network) = &self.inner.mask {
if let SocketAddr::V4(addr) = addr {
return network.contains(*addr.ip());
match addr {
SocketAddr::V4(v4_socket_addr) => {
return network.contains(*v4_socket_addr.ip());
}
SocketAddr::V6(v6_socket_addr) => {
if let Some(v4_addr) = v6_socket_addr.ip().to_ipv4() {
return network.contains(v4_addr);
}
}
}
}
false
@@ -1228,6 +1301,15 @@ async fn check_relay_servers(rs0: Arc<RelayServers>, tx: Sender) {
// temp solution to solve udp socket failure
async fn test_hbbs(addr: SocketAddr) -> ResultType<()> {
let mut addr = addr;
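// An unspecified address (0.0.0.0 / ::) cannot be dialed directly, so test against loopback instead.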
if addr.ip().is_unspecified() {
addr.set_ip(if addr.is_ipv4() {
IpAddr::V4(Ipv4Addr::LOCALHOST)
} else {
IpAddr::V6(Ipv6Addr::LOCALHOST)
});
}
let mut socket = FramedSocket::new(config::Config::get_any_listen_addr(addr.is_ipv4())).await?;
let mut msg_out = RendezvousMessage::new();
msg_out.set_register_peer(RegisterPeer {
@@ -1241,8 +1323,7 @@ async fn test_hbbs(addr: SocketAddr) -> ResultType<()> {
tokio::select! {
_ = timer.tick() => {
if last_time_recv.elapsed().as_secs() > 12 {
log::error!("Timeout of test_hbbs");
std::process::exit(1);
bail!("Timeout of test_hbbs");
}
socket.send(&msg_out, addr).await?;
}
@@ -1272,12 +1353,12 @@ async fn send_rk_res(
async fn create_udp_listener(port: i32, rmem: usize) -> ResultType<FramedSocket> {
let addr = SocketAddr::new(IpAddr::V6(Ipv6Addr::UNSPECIFIED), port as _);
if let Ok(s) = FramedSocket::new_reuse(&addr, false, rmem).await {
if let Ok(s) = FramedSocket::new_reuse(&addr, true, rmem).await {
log::debug!("listen on udp {:?}", s.local_addr());
return Ok(s);
}
let addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::UNSPECIFIED), port as _);
let s = FramedSocket::new_reuse(&addr, false, rmem).await?;
let s = FramedSocket::new_reuse(&addr, true, rmem).await?;
log::debug!("listen on udp {:?}", s.local_addr());
Ok(s)
}

View File

@@ -10,7 +10,7 @@ use std::{
fn print_help() {
println!(
"Usage:
rustdesk-util [command]\n
rustdesk-utils [command]\n
Available Commands:
genkeypair Generate a new keypair
validatekeypair [public key] [secret key] Validate an existing keypair

View File

@@ -10,8 +10,8 @@ WorkingDirectory=/var/lib/rustdesk-server/
User=
Group=
Restart=always
StandardOutput=append:/var/log/rustdesk/rustdesk-hbbr.log
StandardError=append:/var/log/rustdesk/rustdesk-hbbr.error
StandardOutput=append:/var/log/rustdesk-server/hbbr.log
StandardError=append:/var/log/rustdesk-server/hbbr.error
# Restart service after 10 seconds if node service crashes
RestartSec=10

View File

@@ -10,8 +10,8 @@ WorkingDirectory=/var/lib/rustdesk-server/
User=
Group=
Restart=always
StandardOutput=append:/var/log/rustdesk/rustdesk-hbbs.log
StandardError=append:/var/log/rustdesk/rustdesk-hbbs.error
StandardOutput=append:/var/log/rustdesk-server/hbbs.log
StandardError=append:/var/log/rustdesk-server/hbbs.error
# Restart service after 10 seconds if node service crashes
RestartSec=10

View File

@@ -15,7 +15,7 @@
!define PRODUCT_NAME "rustdesk_server"
!define PRODUCT_DESCRIPTION "Installer for ${PRODUCT_NAME}"
!define COPYRIGHT "Copyright © 2021"
!define VERSION "1.1.7"
!define VERSION "1.1.15"
VIProductVersion "${VERSION}.0"
VIAddVersionKey "ProductName" "${PRODUCT_NAME}"
@@ -143,10 +143,10 @@ Section "Install"
CreateShortCut "$SMPROGRAMS\${APP_NAME}\Uninstall.lnk" "$INSTDIR\uninstall.exe"
CreateShortCut "$DESKTOP\${APP_NAME}.lnk" "$INSTDIR\${PRODUCT_NAME}.exe"
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=in action=allow program="$INSTDIR\hbbs.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=out action=allow program="$INSTDIR\hbbs.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=in action=allow program="$INSTDIR\hbbr.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=out action=allow program="$INSTDIR\hbbr.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=in action=allow program="$INSTDIR\bin\hbbs.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=out action=allow program="$INSTDIR\bin\hbbs.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=in action=allow program="$INSTDIR\bin\hbbr.exe" enable=yes'
nsExec::Exec 'netsh advfirewall firewall add rule name="${APP_NAME}" dir=out action=allow program="$INSTDIR\bin\hbbr.exe" enable=yes'
ExecWait 'powershell.exe -NoProfile -windowstyle hidden try { [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12 } catch {}; Invoke-WebRequest -Uri "https://go.microsoft.com/fwlink/p/?LinkId=2124703" -OutFile "$$env:TEMP\MicrosoftEdgeWebview2Setup.exe" ; Start-Process -FilePath "$$env:TEMP\MicrosoftEdgeWebview2Setup.exe" -ArgumentList ($\'/silent$\', $\'/install$\') -Wait'
SectionEnd