Merge branch 'develop' into travis/report-user

Commit dbe43a1283 by Travis Ralston, 2025-02-13 13:20:56 -07:00 (committed by GitHub)
GPG key ID: B5690EEEBB952194 (no known key found for this signature in database)
109 changed files with 2221 additions and 294 deletions


@ -30,7 +30,7 @@ jobs:
run: docker buildx inspect
- name: Install Cosign
uses: sigstore/cosign-installer@v3.7.0
uses: sigstore/cosign-installer@v3.8.0
- name: Checkout repository
uses: actions/checkout@v4


@ -21,7 +21,7 @@ jobs:
# We use nightly so that `fmt` correctly groups together imports, and
# clippy correctly fixes up the benchmarks.
toolchain: nightly-2022-12-01
components: rustfmt
components: clippy, rustfmt
- uses: Swatinem/rust-cache@v2
- name: Setup Poetry


@ -1,3 +1,55 @@
# Synapse 1.124.0 (2025-02-11)
No significant changes since 1.124.0rc3.
# Synapse 1.124.0rc3 (2025-02-07)
### Bugfixes
- Fix regression in performance of sending events due to superfluous reads and locks. Introduced in v1.124.0rc1. ([\#18141](https://github.com/element-hq/synapse/issues/18141))
# Synapse 1.124.0rc2 (2025-02-05)
### Bugfixes
- Fix regression where persisting events in some rooms could fail after a previous unclean shutdown. Introduced in v1.124.0rc1. ([\#18137](https://github.com/element-hq/synapse/issues/18137))
# Synapse 1.124.0rc1 (2025-02-04)
### Bugfixes
- Add rate limit `rc_presence.per_user`. This prevents load from excessive presence updates sent by clients via sync api. Also rate limit `/_matrix/client/v3/presence` as per the spec. Contributed by @rda0. ([\#18000](https://github.com/element-hq/synapse/issues/18000))
- Deactivated users will no longer automatically accept an invite when `auto_accept_invites` is enabled. ([\#18073](https://github.com/element-hq/synapse/issues/18073))
- Fix join being denied after being invited over federation. Also fixes other out-of-band membership transitions. ([\#18075](https://github.com/element-hq/synapse/issues/18075))
- Updates contributed `docker-compose.yml` file to PostgreSQL v15, as v12 is no longer supported by Synapse.
Contributed by @maxkratz. ([\#18089](https://github.com/element-hq/synapse/issues/18089))
- Fix rare edge case where state groups could be deleted while we are persisting new events that reference them. ([\#18107](https://github.com/element-hq/synapse/issues/18107), [\#18130](https://github.com/element-hq/synapse/issues/18130), [\#18131](https://github.com/element-hq/synapse/issues/18131))
- Raise an error if someone is using an incorrect suffix in a config duration string. ([\#18112](https://github.com/element-hq/synapse/issues/18112))
- Fix a bug where the [Delete Room Admin API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#version-2-new-version) would fail if the `block` parameter was set to `true` and a worker other than the main process was configured to handle background tasks. ([\#18119](https://github.com/element-hq/synapse/issues/18119))
### Internal Changes
- Increase the length of the generated `nonce` parameter when performing OIDC logins to comply with the TI-Messenger spec. ([\#18109](https://github.com/element-hq/synapse/issues/18109))
### Updates to locked dependencies
* Bump dawidd6/action-download-artifact from 7 to 8. ([\#18108](https://github.com/element-hq/synapse/issues/18108))
* Bump log from 0.4.22 to 0.4.25. ([\#18098](https://github.com/element-hq/synapse/issues/18098))
* Bump python-multipart from 0.0.18 to 0.0.20. ([\#18096](https://github.com/element-hq/synapse/issues/18096))
* Bump serde_json from 1.0.135 to 1.0.137. ([\#18099](https://github.com/element-hq/synapse/issues/18099))
* Bump types-bleach from 6.1.0.20240331 to 6.2.0.20241123. ([\#18082](https://github.com/element-hq/synapse/issues/18082))
# Synapse 1.123.0 (2025-01-28)
No significant changes since 1.123.0rc1.

Cargo.lock (generated, 138 changed lines)

@ -35,6 +35,12 @@ version = "0.21.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d297deb1925b89f2ccc13d7635fa0714f12c87adce1c75356b39ca9b7178567"
[[package]]
name = "bitflags"
version = "2.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f68f53c83ab957f72c32642f3868eec03eb974d1fb82e453128456482613d36"
[[package]]
name = "blake2"
version = "0.10.6"
@ -61,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
[[package]]
name = "bytes"
version = "1.9.0"
version = "1.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "325918d6fe32f23b19878fe4b34794ae41fc19ddbe53b10571a4874d44ffd39b"
checksum = "f61dac84819c6588b558454b194026eb1f09c293b9036ae9b159e74e73ab6cf9"
[[package]]
name = "cfg-if"
@ -119,13 +125,14 @@ dependencies = [
[[package]]
name = "getrandom"
version = "0.2.15"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c4567c8db10ae91089c99af84c68c38da3ec2f087c3f82960bcdbf3656b6f4d7"
checksum = "43a49c392881ce6d5c3b8cb70f98717b7c07aabbdff06687b9030dbfbe2725f8"
dependencies = [
"cfg-if",
"libc",
"wasi",
"windows-targets",
]
[[package]]
@ -364,20 +371,20 @@ dependencies = [
[[package]]
name = "rand"
version = "0.8.5"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404"
checksum = "3779b94aeb87e8bd4e834cee3650289ee9e0d5677f976ecdb6d219e5f4f6cd94"
dependencies = [
"libc",
"rand_chacha",
"rand_core",
"zerocopy",
]
[[package]]
name = "rand_chacha"
version = "0.3.1"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88"
checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb"
dependencies = [
"ppv-lite86",
"rand_core",
@ -385,11 +392,12 @@ dependencies = [
[[package]]
name = "rand_core"
version = "0.6.4"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c"
checksum = "b08f3c9802962f7e1b25113931d94f43ed9725bebc59db9d0c3e9a23b67e15ff"
dependencies = [
"getrandom",
"zerocopy",
]
[[package]]
@ -449,9 +457,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.137"
version = "1.0.138"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "930cfb6e6abf99298aaad7d29abbef7a9999a9a8806a40088f55f0dcec03146b"
checksum = "d434192e7da787e94a6ea7e9670b26a036d0ca41e0b7efb2676dd32bae872949"
dependencies = [
"itoa",
"memchr",
@ -536,9 +544,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
[[package]]
name = "ulid"
version = "1.1.4"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f294bff79170ed1c5633812aff1e565c35d993a36e757f9bc0accf5eec4e6045"
checksum = "ab82fc73182c29b02e2926a6df32f2241dbadb5cfc111fd595515b3598f46bb3"
dependencies = [
"rand",
"web-time",
@ -564,9 +572,12 @@ checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f"
[[package]]
name = "wasi"
version = "0.11.0+wasi-snapshot-preview1"
version = "0.13.3+wasi-0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
checksum = "26816d2e1a4a36a2940b96c5296ce403917633dff8f3440e9b236ed6f6bacad2"
dependencies = [
"wit-bindgen-rt",
]
[[package]]
name = "wasm-bindgen"
@ -631,3 +642,96 @@ dependencies = [
"js-sys",
"wasm-bindgen",
]
[[package]]
name = "windows-targets"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973"
dependencies = [
"windows_aarch64_gnullvm",
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_gnullvm",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_gnullvm",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
[[package]]
name = "windows_aarch64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
[[package]]
name = "windows_i686_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
[[package]]
name = "windows_i686_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
[[package]]
name = "windows_x86_64_gnu"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
[[package]]
name = "windows_x86_64_msvc"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "wit-bindgen-rt"
version = "0.33.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3268f3d866458b787f390cf61f4bbb563b922d091359f9608842999eaee3943c"
dependencies = [
"bitflags",
]
[[package]]
name = "zerocopy"
version = "0.8.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa91407dacce3a68c56de03abe2760159582b846c6a4acd2f456618087f12713"
dependencies = [
"zerocopy-derive",
]
[[package]]
name = "zerocopy-derive"
version = "0.8.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06718a168365cad3d5ff0bb133aad346959a2074bd4a85c121255a11304a8626"
dependencies = [
"proc-macro2",
"quote",
"syn",
]

LICENSE-COMMERCIAL (new file, 6 lines)

@ -0,0 +1,6 @@
Licensees holding a valid commercial license with Element may use this
software in accordance with the terms contained in a written agreement
between you and Element.
To purchase a commercial license please contact our sales team at
licensing@element.io


@ -10,14 +10,15 @@ implementation, written and maintained by `Element <https://element.io>`_.
`Matrix <https://github.com/matrix-org>`__ is the open standard for
secure and interoperable real time communications. You can directly run
and manage the source code in this repository, available under an AGPL
license. There is no support provided from Element unless you have a
subscription.
license (or alternatively under a commercial license from Element).
There is no support provided by Element unless you have a
subscription from Element.
Subscription alternative
========================
Subscription
============
Alternatively, for those that need an enterprise-ready solution, Element
Server Suite (ESS) is `available as a subscription <https://element.io/pricing>`_.
For those that need an enterprise-ready solution, Element
Server Suite (ESS) is `available via subscription <https://element.io/pricing>`_.
ESS builds on Synapse to offer a complete Matrix-based backend including the full
`Admin Console product <https://element.io/enterprise-functionality/admin-console>`_,
giving admins the power to easily manage an organization-wide
@ -249,6 +250,20 @@ Developers might be particularly interested in:
Alongside all that, join our developer community on Matrix:
`#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_, featuring real humans!
Copyright and Licensing
=======================
Copyright 2014-2017 OpenMarket Ltd
Copyright 2017 Vector Creations Ltd
Copyright 2017-2025 New Vector Ltd
This software is dual-licensed by New Vector Ltd (Element). It can be used either:
(1) for free under the terms of the GNU Affero General Public License (as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version); OR
(2) under the terms of a paid-for Element Commercial License agreement between you and Element (the terms of which may vary depending on what you and Element have agreed to).
Unless required by applicable law or agreed to in writing, software distributed under the Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.
.. |support| image:: https://img.shields.io/badge/matrix-community%20support-success
:alt: (get community support in #synapse:matrix.org)

changelog.d/17436.doc (new file, 1 line)

@ -0,0 +1 @@
Add Oracle Linux 8 and 9 installation instructions.

changelog.d/17616.misc (new file, 1 line)

@ -0,0 +1 @@
Overload DatabasePool.simple_select_one_txn to return non-None when the allow_none parameter is False.
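The `@overload` trick this entry refers to can be sketched outside Synapse like so; `select_one` below is a hypothetical stand-in, not the real `DatabasePool.simple_select_one_txn` signature:

```python
from typing import Literal, Optional, overload

# Illustrative only: the real method has a richer signature, but the typing
# trick is the same - Literal[False] selects the non-Optional overload.
@overload
def select_one(key: str, allow_none: Literal[False] = False) -> dict: ...
@overload
def select_one(key: str, allow_none: Literal[True]) -> Optional[dict]: ...
def select_one(key: str, allow_none: bool = False) -> Optional[dict]:
    row = {"key": key} if key == "known" else None
    if row is None and not allow_none:
        raise LookupError(key)
    return row

found = select_one("known")          # type checkers infer `dict`
maybe = select_one("missing", True)  # type checkers infer `Optional[dict]`
```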


@ -0,0 +1 @@
Add functionality to be able to use multiple values in SSO feature `attribute_requirements`.

changelog.d/17967.misc (new file, 1 line)

@ -0,0 +1 @@
Python 3.8 EOL: compile native extensions with the 3.9 ABI and use typing hints from the standard library.
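In practice this means names such as `Literal`, `Protocol` and `TypedDict` are now imported from the standard `typing` module instead of `typing_extensions`, as the file changes further down show. A small illustration (the type names below are illustrative, not Synapse's):

```python
# Before: from typing_extensions import Literal, Protocol, TypedDict
# After: the standard library provides all three on the Pythons Synapse supports.
from typing import Literal, Protocol, TypedDict

class SyncArgs(TypedDict, total=False):
    full_state: bool
    set_presence: Literal["online", "offline", "unavailable"]

class Clock(Protocol):
    def time_msec(self) -> int: ...
```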


@ -0,0 +1 @@
Add experimental config options `admin_token_path` and `client_secret_path` for MSC 3861.


@ -1 +0,0 @@
Deactivated users will no longer automatically accept an invite when `auto_accept_invites` is enabled.


@ -1 +0,0 @@
Fix join being denied after being invited over federation. Also fixes other out-of-band membership transitions.


@ -1,2 +0,0 @@
Updates contributed `docker-compose.yml` file to PostgreSQL v15, as v12 is no longer supported by Synapse.
Contributed by @maxkratz.


@ -1 +0,0 @@
Increase the length of the generated `nonce` parameter when performing OIDC logins to comply with the TI-Messenger spec.


@ -1 +0,0 @@
Raise an error if someone is using an incorrect suffix in a config duration string.


@ -1 +0,0 @@
Fix a bug where the [Delete Room Admin API](https://element-hq.github.io/synapse/latest/admin_api/rooms.html#version-2-new-version) would fail if the `block` parameter was set to `true` and a worker other than the main process was configured to handle background tasks.

changelog.d/18122.doc (new file, 1 line)

@ -0,0 +1 @@
Document missing server config options (`daemonize`, `print_pidfile`, `user_agent_suffix`, `use_frozen_dicts`, `manhole`).

changelog.d/18124.misc (new file, 1 line)

@ -0,0 +1 @@
Add log message when worker lock timeouts get large.

changelog.d/18125.bugfix (new file, 1 line)

@ -0,0 +1 @@
Fix a bug where updating a user's third-party identifier (3PID) with an invalid value returned a 500 server error; it now returns a 400 with an error message.

changelog.d/18134.misc (new file, 1 line)

@ -0,0 +1 @@
Make it explicit that you can buy an AGPL-alternative commercial license from Element.

changelog.d/18135.bugfix (new file, 1 line)

@ -0,0 +1 @@
Fix user directory search when using a legacy module with a `check_username_for_spam` callback. Broke in v1.122.0.

changelog.d/18136.misc (new file, 1 line)

@ -0,0 +1 @@
Fix the 'Fix linting' GitHub Actions workflow.

changelog.d/18139.misc (new file, 1 line)

@ -0,0 +1 @@
Do not log at the exception level when clients provide an empty `since` token to the `/sync` API.


@ -1 +1 @@
Add rate limit `rc_presence.per_user`. This prevents load from excessive presence updates sent by clients via sync api. Also rate limit `/_matrix/client/v3/presence` as per the spec. Contributed by @rda0.
Add rate limit `rc_presence.per_user`. This prevents load from excessive presence updates sent by clients via sync api. Also rate limit `/_matrix/client/v3/presence` as per the spec. Contributed by @rda0.

debian/changelog (vendored, 24 changed lines)

@ -1,3 +1,27 @@
matrix-synapse-py3 (1.124.0) stable; urgency=medium
* New Synapse release 1.124.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Feb 2025 11:55:22 +0100
matrix-synapse-py3 (1.124.0~rc3) stable; urgency=medium
* New Synapse release 1.124.0rc3.
-- Synapse Packaging team <packages@matrix.org> Fri, 07 Feb 2025 13:42:55 +0000
matrix-synapse-py3 (1.124.0~rc2) stable; urgency=medium
* New Synapse release 1.124.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 05 Feb 2025 16:35:53 +0000
matrix-synapse-py3 (1.124.0~rc1) stable; urgency=medium
* New Synapse release 1.124.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Feb 2025 11:53:05 +0000
matrix-synapse-py3 (1.123.0) stable; urgency=medium
* New Synapse release 1.123.0.


@ -138,6 +138,10 @@ for port in 8080 8081 8082; do
per_user:
per_second: 1000
burst_count: 1000
rc_presence:
per_user:
per_second: 1000
burst_count: 1000
RC
)
echo "${ratelimiting}" >> "$port.config"
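The `rc_presence` block appended above matches the new `rc_presence.per_user` option from the 1.124.0rc1 changelog. A quick sketch of the resulting homeserver config shape, parsed with PyYAML (the 1000s are the demo script's deliberately permissive test values, not recommended production settings):

```python
import yaml

# Per-user presence rate limit, in the same shape the demo script writes.
snippet = """
rc_presence:
  per_user:
    per_second: 1000
    burst_count: 1000
"""
config = yaml.safe_load(snippet)
assert config["rc_presence"]["per_user"]["burst_count"] == 1000
```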


@ -310,29 +310,18 @@ sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
sudo dnf group install "Development Tools"
```
##### Red Hat Enterprise Linux / Rocky Linux
##### Red Hat Enterprise Linux / Rocky Linux / Oracle Linux
*Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
*Note: The term "RHEL" below refers to Red Hat Enterprise Linux, Oracle Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
It's recommended to use the latest Python versions.
RHEL 8 in particular ships with Python 3.6 by default which is EOL and therefore no longer supported by Synapse. RHEL 9 ship with Python 3.9 which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and they're available in official distributions' repositories. Therefore it's recommended to use them.
RHEL 8 in particular ships with Python 3.6 by default which is EOL and therefore no longer supported by Synapse. RHEL 9 ships with Python 3.9 which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and they're available in official distributions' repositories. Therefore it's recommended to use them.
Python 3.11 and 3.12 are available for both RHEL 8 and 9.
These commands should be run as root user.
RHEL 8
```bash
# Enable PowerTools repository
dnf config-manager --set-enabled powertools
```
RHEL 9
```bash
# Enable CodeReady Linux Builder repository
crb enable
```
Install new version of Python. You only need one of these:
```bash
# Python 3.11


@ -162,6 +162,53 @@ Example configuration:
pid_file: DATADIR/homeserver.pid
```
---
### `daemonize`
Specifies whether Synapse should be started as a daemon process. If Synapse is being
managed by [systemd](../../systemd-with-workers/), this option must be omitted or set to
`false`.
This can also be set by the `--daemonize` (`-D`) argument when starting Synapse.
See `worker_daemonize` for more information on daemonizing workers.
Example configuration:
```yaml
daemonize: true
```
---
### `print_pidfile`
Print the path to the pidfile just before daemonizing. Defaults to false.
This can also be set by the `--print-pidfile` argument when starting Synapse.
Example configuration:
```yaml
print_pidfile: true
```
---
### `user_agent_suffix`
A suffix that is appended to the Synapse user-agent (e.g. `Synapse/v1.123.0`). Defaults
to None.
Example configuration:
```yaml
user_agent_suffix: " (I'm a teapot; Linux x86_64)"
```
---
### `use_frozen_dicts`
Determines whether we should freeze the internal dict object in `FrozenEvent`. Freezing
prevents bugs where we accidentally share e.g. signature dicts. However, freezing a
dict is expensive. Defaults to false.
Example configuration:
```yaml
use_frozen_dicts: true
```
---
### `web_client_location`
The absolute URL to the web client which `/` will redirect to. Defaults to none.
@ -595,6 +642,17 @@ listeners:
- names: [client, federation]
```
---
### `manhole`
Turn on the Twisted telnet manhole service on the given port. Defaults to none.
This can also be set by the `--manhole` argument when starting Synapse.
Example configuration:
```yaml
manhole: 1234
```
---
### `manhole_settings`
@ -3337,8 +3395,9 @@ This setting has the following sub-options:
The default is 'uid'.
* `attribute_requirements`: It is possible to configure Synapse to only allow logins if SAML attributes
match particular values. The requirements can be listed under
`attribute_requirements` as shown in the example. All of the listed attributes must
match for the login to be permitted.
`attribute_requirements` as shown in the example. All of the listed attributes must
match for the login to be permitted. Values can be specified in a `one_of` list to allow
multiple values for an attribute.
* `idp_entityid`: If the metadata XML contains multiple IdP entities then the `idp_entityid`
option must be set to the entity to redirect users to.
Most deployments only have a single IdP entity and so should omit this option.
@ -3419,7 +3478,9 @@ saml2_config:
- attribute: userGroup
value: "staff"
- attribute: department
value: "sales"
one_of:
- "sales"
- "admins"
idp_entityid: 'https://our_idp/entityid'
```

poetry.lock (generated, 68 changed lines)

@ -64,38 +64,36 @@ visualize = ["Twisted (>=16.1.1)", "graphviz (>0.5.1)"]
[[package]]
name = "bcrypt"
version = "4.2.0"
version = "4.2.1"
description = "Modern password hashing for your software and your servers"
optional = false
python-versions = ">=3.7"
files = [
{file = "bcrypt-4.2.0-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:096a15d26ed6ce37a14c1ac1e48119660f21b24cba457f160a4b830f3fe6b5cb"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c02d944ca89d9b1922ceb8a46460dd17df1ba37ab66feac4870f6862a1533c00"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d84cf6d877918620b687b8fd1bf7781d11e8a0998f576c7aa939776b512b98d"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:1bb429fedbe0249465cdd85a58e8376f31bb315e484f16e68ca4c786dcc04291"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:655ea221910bcac76ea08aaa76df427ef8625f92e55a8ee44fbf7753dbabb328"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:1ee38e858bf5d0287c39b7a1fc59eec64bbf880c7d504d3a06a96c16e14058e7"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:0da52759f7f30e83f1e30a888d9163a81353ef224d82dc58eb5bb52efcabc399"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:3698393a1b1f1fd5714524193849d0c6d524d33523acca37cd28f02899285060"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:762a2c5fb35f89606a9fde5e51392dad0cd1ab7ae64149a8b935fe8d79dd5ed7"},
{file = "bcrypt-4.2.0-cp37-abi3-win32.whl", hash = "sha256:5a1e8aa9b28ae28020a3ac4b053117fb51c57a010b9f969603ed885f23841458"},
{file = "bcrypt-4.2.0-cp37-abi3-win_amd64.whl", hash = "sha256:8f6ede91359e5df88d1f5c1ef47428a4420136f3ce97763e31b86dd8280fbdf5"},
{file = "bcrypt-4.2.0-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:c52aac18ea1f4a4f65963ea4f9530c306b56ccd0c6f8c8da0c06976e34a6e841"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3bbbfb2734f0e4f37c5136130405332640a1e46e6b23e000eeff2ba8d005da68"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3413bd60460f76097ee2e0a493ccebe4a7601918219c02f503984f0a7ee0aebe"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8d7bb9c42801035e61c109c345a28ed7e84426ae4865511eb82e913df18f58c2"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3d3a6d28cb2305b43feac298774b997e372e56c7c7afd90a12b3dc49b189151c"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:9c1c4ad86351339c5f320ca372dfba6cb6beb25e8efc659bedd918d921956bae"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:27fe0f57bb5573104b5a6de5e4153c60814c711b29364c10a75a54bb6d7ff48d"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:8ac68872c82f1add6a20bd489870c71b00ebacd2e9134a8aa3f98a0052ab4b0e"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:cb2a8ec2bc07d3553ccebf0746bbf3d19426d1c6d1adbd4fa48925f66af7b9e8"},
{file = "bcrypt-4.2.0-cp39-abi3-win32.whl", hash = "sha256:77800b7147c9dc905db1cba26abe31e504d8247ac73580b4aa179f98e6608f34"},
{file = "bcrypt-4.2.0-cp39-abi3-win_amd64.whl", hash = "sha256:61ed14326ee023917ecd093ee6ef422a72f3aec6f07e21ea5f10622b735538a9"},
{file = "bcrypt-4.2.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:39e1d30c7233cfc54f5c3f2c825156fe044efdd3e0b9d309512cc514a263ec2a"},
{file = "bcrypt-4.2.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:f4f4acf526fcd1c34e7ce851147deedd4e26e6402369304220250598b26448db"},
{file = "bcrypt-4.2.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:1ff39b78a52cf03fdf902635e4c81e544714861ba3f0efc56558979dd4f09170"},
{file = "bcrypt-4.2.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:373db9abe198e8e2c70d12b479464e0d5092cc122b20ec504097b5f2297ed184"},
{file = "bcrypt-4.2.0.tar.gz", hash = "sha256:cf69eaf5185fd58f268f805b505ce31f9b9fc2d64b376642164e9244540c1221"},
{file = "bcrypt-4.2.1-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:1340411a0894b7d3ef562fb233e4b6ed58add185228650942bdc885362f32c17"},
{file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b1ee315739bc8387aa36ff127afc99120ee452924e0df517a8f3e4c0187a0f5f"},
{file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dbd0747208912b1e4ce730c6725cb56c07ac734b3629b60d4398f082ea718ad"},
{file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:aaa2e285be097050dba798d537b6efd9b698aa88eef52ec98d23dcd6d7cf6fea"},
{file = "bcrypt-4.2.1-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:76d3e352b32f4eeb34703370e370997065d28a561e4a18afe4fef07249cb4396"},
{file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:b7703ede632dc945ed1172d6f24e9f30f27b1b1a067f32f68bf169c5f08d0425"},
{file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:89df2aea2c43be1e1fa066df5f86c8ce822ab70a30e4c210968669565c0f4685"},
{file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:04e56e3fe8308a88b77e0afd20bec516f74aecf391cdd6e374f15cbed32783d6"},
{file = "bcrypt-4.2.1-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:cfdf3d7530c790432046c40cda41dfee8c83e29482e6a604f8930b9930e94139"},
{file = "bcrypt-4.2.1-cp37-abi3-win32.whl", hash = "sha256:adadd36274510a01f33e6dc08f5824b97c9580583bd4487c564fc4617b328005"},
{file = "bcrypt-4.2.1-cp37-abi3-win_amd64.whl", hash = "sha256:8c458cd103e6c5d1d85cf600e546a639f234964d0228909d8f8dbeebff82d526"},
{file = "bcrypt-4.2.1-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:8ad2f4528cbf0febe80e5a3a57d7a74e6635e41af1ea5675282a33d769fba413"},
{file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:909faa1027900f2252a9ca5dfebd25fc0ef1417943824783d1c8418dd7d6df4a"},
{file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cde78d385d5e93ece5479a0a87f73cd6fa26b171c786a884f955e165032b262c"},
{file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:533e7f3bcf2f07caee7ad98124fab7499cb3333ba2274f7a36cf1daee7409d99"},
{file = "bcrypt-4.2.1-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:687cf30e6681eeda39548a93ce9bfbb300e48b4d445a43db4298d2474d2a1e54"},
{file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:041fa0155c9004eb98a232d54da05c0b41d4b8e66b6fc3cb71b4b3f6144ba837"},
{file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f85b1ffa09240c89aa2e1ae9f3b1c687104f7b2b9d2098da4e923f1b7082d331"},
{file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:c6f5fa3775966cca251848d4d5393ab016b3afed251163c1436fefdec3b02c84"},
{file = "bcrypt-4.2.1-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:807261df60a8b1ccd13e6599c779014a362ae4e795f5c59747f60208daddd96d"},
{file = "bcrypt-4.2.1-cp39-abi3-win32.whl", hash = "sha256:b588af02b89d9fad33e5f98f7838bf590d6d692df7153647724a7f20c186f6bf"},
{file = "bcrypt-4.2.1-cp39-abi3-win_amd64.whl", hash = "sha256:e84e0e6f8e40a242b11bce56c313edc2be121cec3e0ec2d76fce01f6af33c07c"},
{file = "bcrypt-4.2.1-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:76132c176a6d9953cdc83c296aeaed65e1a708485fd55abf163e0d9f8f16ce0e"},
{file = "bcrypt-4.2.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e158009a54c4c8bc91d5e0da80920d048f918c61a581f0a63e4e93bb556d362f"},
{file = "bcrypt-4.2.1.tar.gz", hash = "sha256:6765386e3ab87f569b276988742039baab087b2cdb01e809d74e74503c2faafe"},
]
[package.extras]
@ -472,20 +470,20 @@ smmap = ">=3.0.1,<6"
[[package]]
name = "gitpython"
version = "3.1.43"
version = "3.1.44"
description = "GitPython is a Python library used to interact with Git repositories"
optional = false
python-versions = ">=3.7"
files = [
{file = "GitPython-3.1.43-py3-none-any.whl", hash = "sha256:eec7ec56b92aad751f9912a73404bc02ba212a23adb2c7098ee668417051a1ff"},
{file = "GitPython-3.1.43.tar.gz", hash = "sha256:35f314a9f878467f5453cc1fee295c3e18e52f1b99f10f6cf5b1682e968a9e7c"},
{file = "GitPython-3.1.44-py3-none-any.whl", hash = "sha256:9e0e10cda9bed1ee64bc9a6de50e7e38a9c9943241cd7f585f6df3ed28011110"},
{file = "gitpython-3.1.44.tar.gz", hash = "sha256:c87e30b26253bf5418b01b0660f818967f3c503193838337fe5e573331249269"},
]
[package.dependencies]
gitdb = ">=4.0.1,<5"
[package.extras]
doc = ["sphinx (==4.3.2)", "sphinx-autodoc-typehints", "sphinx-rtd-theme", "sphinxcontrib-applehelp (>=1.0.2,<=1.0.4)", "sphinxcontrib-devhelp (==1.0.2)", "sphinxcontrib-htmlhelp (>=2.0.0,<=2.0.1)", "sphinxcontrib-qthelp (==1.0.3)", "sphinxcontrib-serializinghtml (==1.1.5)"]
doc = ["sphinx (>=7.1.2,<7.2)", "sphinx-autodoc-typehints", "sphinx_rtd_theme"]
test = ["coverage[toml]", "ddt (>=1.1.1,!=1.4.3)", "mock", "mypy", "pre-commit", "pytest (>=7.3.1)", "pytest-cov", "pytest-instafail", "pytest-mock", "pytest-sugar", "typing-extensions"]
[[package]]
@ -2829,13 +2827,13 @@ types-cffi = "*"
[[package]]
name = "types-pyyaml"
version = "6.0.12.20240917"
version = "6.0.12.20241230"
description = "Typing stubs for PyYAML"
optional = false
python-versions = ">=3.8"
files = [
{file = "types-PyYAML-6.0.12.20240917.tar.gz", hash = "sha256:d1405a86f9576682234ef83bcb4e6fff7c9305c8b1fbad5e0bcd4f7dbdc9c587"},
{file = "types_PyYAML-6.0.12.20240917-py3-none-any.whl", hash = "sha256:392b267f1c0fe6022952462bf5d6523f31e37f6cea49b14cee7ad634b6301570"},
{file = "types_PyYAML-6.0.12.20241230-py3-none-any.whl", hash = "sha256:fa4d32565219b68e6dee5f67534c722e53c00d1cfc09c435ef04d7353e1e96e6"},
{file = "types_pyyaml-6.0.12.20241230.tar.gz", hash = "sha256:7f07622dbd34bb9c8b264fe860a17e0efcad00d50b5f27e93984909d9363498c"},
]
[[package]]


@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.123.0"
version = "1.124.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"


@ -34,7 +34,7 @@ pyo3 = { version = "0.23.2", features = [
"macros",
"anyhow",
"abi3",
"abi3-py38",
"abi3-py39",
] }
pyo3-log = "0.12.0"
pythonize = "0.23.0"


@ -42,12 +42,12 @@ from typing import (
Set,
Tuple,
Type,
TypedDict,
TypeVar,
cast,
)
import yaml
from typing_extensions import TypedDict
from twisted.internet import defer, reactor as reactor_


@ -18,9 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import TYPE_CHECKING, Optional, Tuple
from typing_extensions import Protocol
from typing import TYPE_CHECKING, Optional, Protocol, Tuple
from twisted.web.server import Request


@ -19,7 +19,7 @@
#
#
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional
from urllib.parse import urlencode
from authlib.oauth2 import ClientAuth
@ -119,7 +119,7 @@ class MSC3861DelegatedAuth(BaseAuth):
self._clock = hs.get_clock()
self._http_client = hs.get_proxied_http_client()
self._hostname = hs.hostname
self._admin_token = self._config.admin_token
self._admin_token: Callable[[], Optional[str]] = self._config.admin_token
self._issuer_metadata = RetryOnExceptionCachedCall[OpenIDProviderMetadata](
self._load_metadata
@ -133,9 +133,10 @@ class MSC3861DelegatedAuth(BaseAuth):
)
else:
# Else use the client secret
assert self._config.client_secret, "No client_secret provided"
client_secret = self._config.client_secret()
assert client_secret, "No client_secret provided"
self._client_auth = ClientAuth(
self._config.client_id, self._config.client_secret, auth_method
self._config.client_id, client_secret, auth_method
)
async def _load_metadata(self) -> OpenIDProviderMetadata:
@ -283,7 +284,7 @@ class MSC3861DelegatedAuth(BaseAuth):
requester = await self.get_user_by_access_token(access_token, allow_expired)
# Do not record requests from MAS using the virtual `__oidc_admin` user.
if access_token != self._admin_token:
if access_token != self._admin_token():
await self._record_request(request, requester)
if not allow_guest and requester.is_guest:
@ -324,7 +325,8 @@ class MSC3861DelegatedAuth(BaseAuth):
token: str,
allow_expired: bool = False,
) -> Requester:
if self._admin_token is not None and token == self._admin_token:
admin_token = self._admin_token()
if admin_token is not None and token == admin_token:
# XXX: This is a temporary solution so that the admin API can be called by
# the OIDC provider. This will be removed once we have OIDC client
# credentials grant support in matrix-authentication-service.


@ -20,14 +20,15 @@
#
import enum
from typing import TYPE_CHECKING, Any, Optional
from functools import cache
from typing import TYPE_CHECKING, Any, Iterable, Optional
import attr
import attr.validators
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config import ConfigError
from synapse.config._base import Config, RootConfig
from synapse.config._base import Config, RootConfig, read_file
from synapse.types import JsonDict
# Determine whether authlib is installed.
@ -43,6 +44,12 @@ if TYPE_CHECKING:
from authlib.jose.rfc7517 import JsonWebKey
@cache
def read_secret_from_file_once(file_path: Any, config_path: Iterable[str]) -> str:
"""Returns the memoized secret read from file."""
return read_file(file_path, config_path).strip()
class ClientAuthMethod(enum.Enum):
"""List of supported client auth methods."""
@ -63,6 +70,40 @@ def _parse_jwks(jwks: Optional[JsonDict]) -> Optional["JsonWebKey"]:
return JsonWebKey.import_key(jwks)
def _check_client_secret(
instance: "MSC3861", _attribute: attr.Attribute, _value: Optional[str]
) -> None:
if instance._client_secret and instance._client_secret_path:
raise ConfigError(
(
"You have configured both "
"`experimental_features.msc3861.client_secret` and "
"`experimental_features.msc3861.client_secret_path`. "
"These are mutually incompatible."
),
("experimental", "msc3861", "client_secret"),
)
# Check client secret can be retrieved
instance.client_secret()
def _check_admin_token(
instance: "MSC3861", _attribute: attr.Attribute, _value: Optional[str]
) -> None:
if instance._admin_token and instance._admin_token_path:
raise ConfigError(
(
"You have configured both "
"`experimental_features.msc3861.admin_token` and "
"`experimental_features.msc3861.admin_token_path`. "
"These are mutually incompatible."
),
("experimental", "msc3861", "admin_token"),
)
# Check admin token can be retrieved
instance.admin_token()
@attr.s(slots=True, frozen=True)
class MSC3861:
"""Configuration for MSC3861: Matrix architecture change to delegate authentication via OIDC"""
@ -97,15 +138,30 @@ class MSC3861:
)
"""The auth method used when calling the introspection endpoint."""
client_secret: Optional[str] = attr.ib(
_client_secret: Optional[str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(str)),
validator=[
attr.validators.optional(attr.validators.instance_of(str)),
_check_client_secret,
],
)
"""
The client secret to use when calling the introspection endpoint,
when using any of the client_secret_* client auth methods.
"""
_client_secret_path: Optional[str] = attr.ib(
default=None,
validator=[
attr.validators.optional(attr.validators.instance_of(str)),
_check_client_secret,
],
)
"""
Alternative to `client_secret`: allows the secret to be specified in an
external file.
"""
jwk: Optional["JsonWebKey"] = attr.ib(default=None, converter=_parse_jwks)
"""
The JWKS to use when calling the introspection endpoint,
@ -133,7 +189,7 @@ class MSC3861:
ClientAuthMethod.CLIENT_SECRET_BASIC,
ClientAuthMethod.CLIENT_SECRET_JWT,
)
and self.client_secret is None
and self.client_secret() is None
):
raise ConfigError(
f"A client secret must be provided when using the {value} client auth method",
@ -152,15 +208,48 @@ class MSC3861:
)
"""The URL of the My Account page on the OIDC Provider as per MSC2965."""
admin_token: Optional[str] = attr.ib(
_admin_token: Optional[str] = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(str)),
validator=[
attr.validators.optional(attr.validators.instance_of(str)),
_check_admin_token,
],
)
"""
A token that should be considered as an admin token.
This is used by the OIDC provider, to make admin calls to Synapse.
"""
_admin_token_path: Optional[str] = attr.ib(
default=None,
validator=[
attr.validators.optional(attr.validators.instance_of(str)),
_check_admin_token,
],
)
"""
Alternative to `admin_token`: allows the secret to be specified in an
external file.
"""
def client_secret(self) -> Optional[str]:
"""Returns the secret given via `client_secret` or `client_secret_path`."""
if self._client_secret_path:
return read_secret_from_file_once(
self._client_secret_path,
("experimental_features", "msc3861", "client_secret_path"),
)
return self._client_secret
def admin_token(self) -> Optional[str]:
"""Returns the admin token given via `admin_token` or `admin_token_path`."""
if self._admin_token_path:
return read_secret_from_file_once(
self._admin_token_path,
("experimental_features", "msc3861", "admin_token_path"),
)
return self._admin_token
def check_config_conflicts(self, root: RootConfig) -> None:
"""Checks for any configuration conflicts with other parts of Synapse.

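The `client_secret_path` / `admin_token_path` options added above read the secret from disk once and memoize it, and refuse configs that set both the inline and the file-based variant. A stripped-down sketch of that pattern (standalone helpers, not Synapse's config plumbing):

```python
from functools import cache
from pathlib import Path
from typing import Optional

@cache
def read_secret_once(path: str) -> str:
    # Memoized so repeated config lookups do not re-read the file, mirroring
    # read_secret_from_file_once in the diff above.
    return Path(path).read_text().strip()

def client_secret(inline: Optional[str], path: Optional[str]) -> Optional[str]:
    # Inline value and file path are mutually incompatible; if a path is
    # given, the file's contents win.
    if inline and path:
        raise ValueError("client_secret and client_secret_path are mutually incompatible")
    if path:
        return read_secret_once(path)
    return inline
```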

@ -19,7 +19,7 @@
#
#
import logging
from typing import Any, Dict, Optional
from typing import Any, Dict, List, Optional
import attr
@ -43,13 +43,23 @@ class SsoAttributeRequirement:
"""Object describing a single requirement for SSO attributes."""
attribute: str
# If a value is not given, than the attribute must simply exist.
value: Optional[str]
# If neither value nor one_of is given, the attribute must simply exist. This is
# only true for CAS configs which use a different JSON schema than the one below.
value: Optional[str] = None
one_of: Optional[List[str]] = None
JSON_SCHEMA = {
"type": "object",
"properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
"required": ["attribute", "value"],
"properties": {
"attribute": {"type": "string"},
"value": {"type": "string"},
"one_of": {"type": "array", "items": {"type": "string"}},
},
"required": ["attribute"],
"oneOf": [
{"required": ["value"]},
{"required": ["one_of"]},
],
}
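With the schema above, a requirement must name an `attribute` and carry either a single `value` or a `one_of` list. A quick check of both shapes using `jsonschema` (example instances only):

```python
import jsonschema

REQUIREMENT_SCHEMA = {
    "type": "object",
    "properties": {
        "attribute": {"type": "string"},
        "value": {"type": "string"},
        "one_of": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["attribute"],
    "oneOf": [{"required": ["value"]}, {"required": ["one_of"]}],
}

# Both forms validate: a single expected value, or a list of acceptable values.
jsonschema.validate({"attribute": "userGroup", "value": "staff"}, REQUIREMENT_SCHEMA)
jsonschema.validate({"attribute": "department", "one_of": ["sales", "admins"]}, REQUIREMENT_SCHEMA)
```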


@ -32,6 +32,7 @@ from typing import (
Mapping,
MutableMapping,
Optional,
Protocol,
Set,
Tuple,
Union,
@ -41,7 +42,6 @@ from typing import (
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import SignatureVerifyException, verify_signed_json
from typing_extensions import Protocol
from unpaddedbase64 import decode_base64
from synapse.api.constants import (


@ -30,6 +30,7 @@ from typing import (
Generic,
Iterable,
List,
Literal,
Optional,
Tuple,
Type,
@ -39,7 +40,6 @@ from typing import (
)
import attr
from typing_extensions import Literal
from unpaddedbase64 import encode_base64
from synapse.api.constants import EventTypes, RelationTypes


@ -139,13 +139,13 @@ from typing import (
Hashable,
Iterable,
List,
Literal,
Optional,
Tuple,
)
import attr
from prometheus_client import Counter
from typing_extensions import Literal
from twisted.internet import defer


@ -20,9 +20,7 @@
#
#
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, Type
from typing_extensions import Literal
from typing import TYPE_CHECKING, Dict, Iterable, List, Literal, Optional, Tuple, Type
from synapse.api.errors import FederationDeniedError, SynapseError
from synapse.federation.transport.server._base import (


@ -24,6 +24,7 @@ from typing import (
TYPE_CHECKING,
Dict,
List,
Literal,
Mapping,
Optional,
Sequence,
@ -32,8 +33,6 @@ from typing import (
Union,
)
from typing_extensions import Literal
from synapse.api.constants import Direction, EduTypes
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersions


@ -1579,7 +1579,10 @@ class AuthHandler:
# for the presence of an email address during password reset was
# case sensitive).
if medium == "email":
address = canonicalise_email(address)
try:
address = canonicalise_email(address)
except ValueError as e:
raise SynapseError(400, str(e))
await self.store.user_add_threepid(
user_id, medium, address, validated_at, self.hs.get_clock().time_msec()
@ -1610,7 +1613,10 @@ class AuthHandler:
"""
# 'Canonicalise' email addresses as per above
if medium == "email":
address = canonicalise_email(address)
try:
address = canonicalise_email(address)
except ValueError as e:
raise SynapseError(400, str(e))
await self.store.user_delete_threepid(user_id, medium, address)


@ -21,9 +21,7 @@
import logging
import string
from typing import TYPE_CHECKING, Iterable, List, Optional, Sequence
from typing_extensions import Literal
from typing import TYPE_CHECKING, Iterable, List, Literal, Optional, Sequence
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes
from synapse.api.errors import (


@ -20,9 +20,7 @@
#
import logging
from typing import TYPE_CHECKING, Dict, Optional, cast
from typing_extensions import Literal
from typing import TYPE_CHECKING, Dict, Literal, Optional, cast
from synapse.api.errors import (
Codes,


@ -151,6 +151,8 @@ class FederationEventHandler:
def __init__(self, hs: "HomeServer"):
self._clock = hs.get_clock()
self._store = hs.get_datastores().main
self._state_store = hs.get_datastores().state
self._state_deletion_store = hs.get_datastores().state_deletion
self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state
@ -580,7 +582,9 @@ class FederationEventHandler:
room_version.identifier,
state_maps_to_resolve,
event_map=None,
state_res_store=StateResolutionStore(self._store),
state_res_store=StateResolutionStore(
self._store, self._state_deletion_store
),
)
)
else:
@ -1179,7 +1183,9 @@ class FederationEventHandler:
room_version,
state_maps,
event_map={event_id: event},
state_res_store=StateResolutionStore(self._store),
state_res_store=StateResolutionStore(
self._store, self._state_deletion_store
),
)
except Exception as e:
@ -1874,7 +1880,9 @@ class FederationEventHandler:
room_version,
[local_state_id_map, claimed_auth_events_id_map],
event_map=None,
state_res_store=StateResolutionStore(self._store),
state_res_store=StateResolutionStore(
self._store, self._state_deletion_store
),
)
)
else:
@ -2014,7 +2022,9 @@ class FederationEventHandler:
room_version,
state_sets,
event_map=None,
state_res_store=StateResolutionStore(self._store),
state_res_store=StateResolutionStore(
self._store, self._state_deletion_store
),
)
)
else:


@ -31,6 +31,7 @@ from typing import (
List,
Optional,
Type,
TypedDict,
TypeVar,
Union,
)
@ -52,7 +53,6 @@ from pymacaroons.exceptions import (
MacaroonInitException,
MacaroonInvalidSignatureException,
)
from typing_extensions import TypedDict
from twisted.web.client import readBody
from twisted.web.http_headers import Headers


@ -23,10 +23,9 @@
"""Contains functions for registering clients."""
import logging
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple, TypedDict
from prometheus_client import Counter
from typing_extensions import TypedDict
from synapse import types
from synapse.api.constants import (


@ -33,12 +33,12 @@ from typing import (
Mapping,
NoReturn,
Optional,
Protocol,
Set,
)
from urllib.parse import urlencode
import attr
from typing_extensions import Protocol
from twisted.web.iweb import IRequest
from twisted.web.server import Request
@ -1277,12 +1277,16 @@ def _check_attribute_requirement(
return False
# If the requirement is None, the attribute existing is enough.
if req.value is None:
if req.value is None and req.one_of is None:
return True
values = attributes[req.attribute]
if req.value in values:
return True
if req.one_of:
for value in req.one_of:
if value in values:
return True
logger.info(
"SSO attribute %s did not match required value '%s' (was '%s')",

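A self-contained sketch of the matching rules the change above implements: the attribute must exist; with neither `value` nor `one_of` set, existence alone passes; otherwise the required `value`, or any entry of `one_of`, must appear among the attribute's values. Names here are simplified stand-ins for the Synapse code:

```python
from typing import List, Mapping, Optional

def requirement_met(
    attributes: Mapping[str, List[str]],
    attribute: str,
    value: Optional[str] = None,
    one_of: Optional[List[str]] = None,
) -> bool:
    if attribute not in attributes:
        return False
    if value is None and one_of is None:
        return True  # mere existence of the attribute is enough
    values = attributes[attribute]
    if value is not None and value in values:
        return True
    if one_of and any(candidate in values for candidate in one_of):
        return True
    return False

attrs = {"userGroup": ["staff"], "department": ["sales"]}
assert requirement_met(attrs, "userGroup", value="staff")
assert requirement_met(attrs, "department", one_of=["sales", "admins"])
assert not requirement_met(attrs, "department", value="engineering")
```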

@ -19,6 +19,7 @@
#
#
import logging
import random
from types import TracebackType
from typing import (
@ -269,6 +270,10 @@ class WaitingLock:
def _get_next_retry_interval(self) -> float:
next = self._retry_interval
self._retry_interval = max(5, next * 2)
if self._retry_interval > 5 * 2**7: # ~10 minutes
logging.warning(
f"Lock timeout is getting excessive: {self._retry_interval}s. There may be a deadlock."
)
return next * random.uniform(0.9, 1.1)
@ -344,4 +349,8 @@ class WaitingMultiLock:
def _get_next_retry_interval(self) -> float:
next = self._retry_interval
self._retry_interval = max(5, next * 2)
if self._retry_interval > 5 * 2**7: # ~10 minutes
logging.warning(
f"Lock timeout is getting excessive: {self._retry_interval}s. There may be a deadlock."
)
return next * random.uniform(0.9, 1.1)
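For context, the retry interval these warnings watch doubles on each attempt with +/-10% jitter, so it only crosses the roughly-ten-minute threshold after a dozen or so retries. A standalone sketch of that schedule (the 0.1s starting interval is an illustrative assumption, not taken from the diff):

```python
import random

def retry_schedule(initial: float = 0.1, attempts: int = 14) -> list:
    # Mirrors the doubling-with-jitter behaviour above, minus Twisted and the
    # database: double each time (with a 5s floor), jitter by +/-10%, and warn
    # once the next interval exceeds 5 * 2**7 seconds (~10 minutes).
    intervals = []
    retry_interval = initial
    for _ in range(attempts):
        current = retry_interval
        retry_interval = max(5, current * 2)
        if retry_interval > 5 * 2**7:
            print(f"Lock timeout is getting excessive: {retry_interval}s")
        intervals.append(current * random.uniform(0.9, 1.1))
    return intervals

print([round(i, 1) for i in retry_schedule()])
```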


@ -31,6 +31,7 @@ from typing import (
List,
Mapping,
Optional,
Protocol,
Tuple,
Union,
)
@ -40,7 +41,6 @@ import treq
from canonicaljson import encode_canonical_json
from netaddr import AddrFormatError, IPAddress, IPSet
from prometheus_client import Counter
from typing_extensions import Protocol
from zope.interface import implementer
from OpenSSL import SSL


@ -34,6 +34,7 @@ from typing import (
Dict,
Generic,
List,
Literal,
Optional,
TextIO,
Tuple,
@ -48,7 +49,6 @@ import treq
from canonicaljson import encode_canonical_json
from prometheus_client import Counter
from signedjson.sign import sign_json
from typing_extensions import Literal
from twisted.internet import defer
from twisted.internet.error import DNSLookupError


@ -39,6 +39,7 @@ from typing import (
List,
Optional,
Pattern,
Protocol,
Tuple,
Union,
)
@ -46,7 +47,6 @@ from typing import (
import attr
import jinja2
from canonicaljson import encode_canonical_json
from typing_extensions import Protocol
from zope.interface import implementer
from twisted.internet import defer, interfaces


@ -28,6 +28,7 @@ from http import HTTPStatus
from typing import (
TYPE_CHECKING,
List,
Literal,
Mapping,
Optional,
Sequence,
@ -37,8 +38,6 @@ from typing import (
overload,
)
from typing_extensions import Literal
from twisted.web.server import Request
from synapse._pydantic_compat import (


@ -40,6 +40,7 @@ from typing import (
Any,
Awaitable,
Callable,
Literal,
Optional,
Tuple,
Type,
@ -49,7 +50,7 @@ from typing import (
)
import attr
from typing_extensions import Literal, ParamSpec
from typing_extensions import ParamSpec
from twisted.internet import defer, threads
from twisted.python.threadpool import ThreadPool


@ -19,8 +19,7 @@
#
#
import logging
from typing_extensions import Literal
from typing import Literal
class MetadataFilter(logging.Filter):


@ -23,11 +23,10 @@ import ctypes
import logging
import os
import re
from typing import Iterable, Optional, overload
from typing import Iterable, Literal, Optional, overload
import attr
from prometheus_client import REGISTRY, Metric
from typing_extensions import Literal
from synapse.metrics import GaugeMetricFamily
from synapse.metrics._types import Collector


@ -19,6 +19,7 @@
#
#
import functools
import inspect
import logging
from typing import (
@ -28,15 +29,13 @@ from typing import (
Callable,
Collection,
List,
Literal,
Optional,
Tuple,
Union,
cast,
)
# `Literal` appears with Python 3.8.
from typing_extensions import Literal
import synapse
from synapse.api.errors import Codes
from synapse.logging.opentracing import trace
@ -297,6 +296,7 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer") -> None:
"Bad signature for callback check_registration_for_spam",
)
@functools.wraps(wrapped_func)
def run(*args: Any, **kwargs: Any) -> Awaitable:
# Assertion required because mypy can't prove we won't change `f`
# back to `None`. See


@ -18,9 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
from typing import List, Optional
from typing_extensions import TypedDict
from typing import List, Optional, TypedDict
class EmailReason(TypedDict, total=False):


@ -21,11 +21,10 @@
#
import logging
import random
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing import TYPE_CHECKING, List, Literal, Optional, Tuple
from urllib.parse import urlparse
import attr
from typing_extensions import Literal
from twisted.web.server import Request


@ -20,9 +20,7 @@
#
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing_extensions import Literal
from typing import TYPE_CHECKING, List, Literal, Optional, Tuple
from twisted.web.server import Request


@ -30,11 +30,10 @@ from typing import (
List,
Optional,
Tuple,
TypedDict,
Union,
)
from typing_extensions import TypedDict
from synapse.api.constants import ApprovalNoticeMedium
from synapse.api.errors import (
Codes,


@ -59,11 +59,13 @@ from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.metrics import Measure, measure_func
from synapse.util.stringutils import shortstr
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.controllers import StateStorageController
from synapse.storage.databases.main import DataStore
from synapse.storage.databases.state.deletion import StateDeletionDataStore
logger = logging.getLogger(__name__)
metrics_logger = logging.getLogger("synapse.state.metrics")
@ -194,6 +196,8 @@ class StateHandler:
self._storage_controllers = hs.get_storage_controllers()
self._events_shard_config = hs.config.worker.events_shard_config
self._instance_name = hs.get_instance_name()
self._state_store = hs.get_datastores().state
self._state_deletion_store = hs.get_datastores().state_deletion
self._update_current_state_client = (
ReplicationUpdateCurrentStateRestServlet.make_client(hs)
@ -355,6 +359,28 @@ class StateHandler:
await_full_state=False,
)
# Ensure we still have the state groups we're relying on, and bump
# their usage time to avoid them being deleted from under us.
if entry.state_group:
missing_state_group = await self._state_deletion_store.check_state_groups_and_bump_deletion(
{entry.state_group}
)
if missing_state_group:
raise Exception(f"Missing state group: {entry.state_group}")
elif entry.prev_group:
# We only rely on the prev group when persisting the event if we
# don't have an `entry.state_group`.
missing_state_group = await self._state_deletion_store.check_state_groups_and_bump_deletion(
{entry.prev_group}
)
if missing_state_group:
# If we're missing the prev group then we can just clear the
# entries, and rely on `entry._state` (which must exist if
# `entry.state_group` is None)
entry.prev_group = None
entry.delta_ids = None
state_group_before_event_prev_group = entry.prev_group
deltas_to_state_group_before_event = entry.delta_ids
state_ids_before_event = None
@ -475,7 +501,10 @@ class StateHandler:
@trace
@measure_func()
async def resolve_state_groups_for_events(
self, room_id: str, event_ids: StrCollection, await_full_state: bool = True
self,
room_id: str,
event_ids: StrCollection,
await_full_state: bool = True,
) -> _StateCacheEntry:
"""Given a list of event_ids this method fetches the state at each
event, resolves conflicts between them and returns them.
@ -511,6 +540,7 @@ class StateHandler:
) = await self._state_storage_controller.get_state_group_delta(
state_group_id
)
return _StateCacheEntry(
state=None,
state_group=state_group_id,
@ -531,7 +561,9 @@ class StateHandler:
room_version,
state_to_resolve,
None,
state_res_store=StateResolutionStore(self.store),
state_res_store=StateResolutionStore(
self.store, self._state_deletion_store
),
)
return result
@ -663,7 +695,25 @@ class StateResolutionHandler:
async with self.resolve_linearizer.queue(group_names):
cache = self._state_cache.get(group_names, None)
if cache:
return cache
# Check that the returned cache entry doesn't point to deleted
# state groups.
state_groups_to_check = set()
if cache.state_group is not None:
state_groups_to_check.add(cache.state_group)
if cache.prev_group is not None:
state_groups_to_check.add(cache.prev_group)
missing_state_groups = await state_res_store.state_deletion_store.check_state_groups_and_bump_deletion(
state_groups_to_check
)
if not missing_state_groups:
return cache
else:
# There are missing state groups, so let's remove the stale
# entry and continue as if it was a cache miss.
self._state_cache.pop(group_names, None)
logger.info(
"Resolving state for %s with groups %s",
@ -671,6 +721,16 @@ class StateResolutionHandler:
list(group_names),
)
# We double check that none of the state groups have been deleted.
# They shouldn't be as all these state groups should be referenced.
missing_state_groups = await state_res_store.state_deletion_store.check_state_groups_and_bump_deletion(
group_names
)
if missing_state_groups:
raise Exception(
f"State groups have been deleted: {shortstr(missing_state_groups)}"
)
state_groups_histogram.observe(len(state_groups_ids))
new_state = await self.resolve_events_with_store(
@ -884,7 +944,8 @@ class StateResolutionStore:
in well defined way.
"""
store: "DataStore"
main_store: "DataStore"
state_deletion_store: "StateDeletionDataStore"
def get_events(
self, event_ids: StrCollection, allow_rejected: bool = False
@ -899,7 +960,7 @@ class StateResolutionStore:
An awaitable which resolves to a dict from event_id to event.
"""
return self.store.get_events(
return self.main_store.get_events(
event_ids,
redact_behaviour=EventRedactBehaviour.as_is,
get_prev_content=False,
@ -920,4 +981,4 @@ class StateResolutionStore:
An awaitable that resolves to a set of event IDs.
"""
return self.store.get_auth_chain_difference(room_id, state_sets)
return self.main_store.get_auth_chain_difference(room_id, state_sets)
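A rough sketch of the guard the resolver now applies before trusting a cached entry: gather the entry's state group and previous group, ask the deletion store which of them have gone missing (bumping the liveness of the rest), and fall back to a fresh resolution if anything is gone. `check_and_bump` below is a stand-in for `check_state_groups_and_bump_deletion`, not Synapse's actual class:

```python
import asyncio
from typing import Awaitable, Callable, Optional, Set

async def cached_entry_still_valid(
    state_group: Optional[int],
    prev_group: Optional[int],
    check_and_bump: Callable[[Set[int]], Awaitable[Set[int]]],
) -> bool:
    # check_and_bump returns the subset of groups that no longer exist and
    # bumps the deletion timer on the ones that do.
    to_check = {group for group in (state_group, prev_group) if group is not None}
    if not to_check:
        return True
    missing = await check_and_bump(to_check)
    return not missing

async def fake_check(groups: Set[int]) -> Set[int]:
    return groups & {42}  # pretend state group 42 has been deleted

print(asyncio.run(cached_entry_still_valid(7, 42, fake_check)))  # False -> resolve afresh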


@ -29,15 +29,15 @@ from typing import (
Generator,
Iterable,
List,
Literal,
Optional,
Protocol,
Sequence,
Set,
Tuple,
overload,
)
from typing_extensions import Literal, Protocol
from synapse import event_auth
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError


@ -332,6 +332,7 @@ class EventsPersistenceStorageController:
# store for now.
self.main_store = stores.main
self.state_store = stores.state
self._state_deletion_store = stores.state_deletion
assert stores.persist_events
self.persist_events_store = stores.persist_events
@ -549,7 +550,9 @@ class EventsPersistenceStorageController:
room_version,
state_maps_by_state_group,
event_map=None,
state_res_store=StateResolutionStore(self.main_store),
state_res_store=StateResolutionStore(
self.main_store, self._state_deletion_store
),
)
return await res.get_state(self._state_controller, StateFilter.all())
@ -635,15 +638,20 @@ class EventsPersistenceStorageController:
room_id, [e for e, _ in chunk]
)
await self.persist_events_store._persist_events_and_state_updates(
room_id,
chunk,
state_delta_for_room=state_delta_for_room,
new_forward_extremities=new_forward_extremities,
use_negative_stream_ordering=backfilled,
inhibit_local_membership_updates=backfilled,
new_event_links=new_event_links,
)
# Stop the state groups from being deleted while we're persisting
# them.
async with self._state_deletion_store.persisting_state_group_references(
events_and_contexts
):
await self.persist_events_store._persist_events_and_state_updates(
room_id,
chunk,
state_delta_for_room=state_delta_for_room,
new_forward_extremities=new_forward_extremities,
use_negative_stream_ordering=backfilled,
inhibit_local_membership_updates=backfilled,
new_event_links=new_event_links,
)
return replaced_events
@ -965,7 +973,9 @@ class EventsPersistenceStorageController:
room_version,
state_groups,
events_map,
state_res_store=StateResolutionStore(self.main_store),
state_res_store=StateResolutionStore(
self.main_store, self._state_deletion_store
),
)
state_resolutions_during_persistence.inc()

View file

@ -21,9 +21,10 @@
import itertools
import logging
from typing import TYPE_CHECKING, Set
from typing import TYPE_CHECKING, Collection, Mapping, Set
from synapse.logging.context import nested_logging_context
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage.databases import Databases
if TYPE_CHECKING:
@ -38,6 +39,11 @@ class PurgeEventsStorageController:
def __init__(self, hs: "HomeServer", stores: Databases):
self.stores = stores
if hs.config.worker.run_background_tasks:
self._delete_state_loop_call = hs.get_clock().looping_call(
self._delete_state_groups_loop, 60 * 1000
)
async def purge_room(self, room_id: str) -> None:
"""Deletes all record of a room"""
@ -68,11 +74,15 @@ class PurgeEventsStorageController:
logger.info("[purge] finding state groups that can be deleted")
sg_to_delete = await self._find_unreferenced_groups(state_groups)
await self.stores.state.purge_unreferenced_state_groups(
room_id, sg_to_delete
# Mark these state groups as pending deletion, they will actually
# get deleted automatically later.
await self.stores.state_deletion.mark_state_groups_as_pending_deletion(
sg_to_delete
)
async def _find_unreferenced_groups(self, state_groups: Set[int]) -> Set[int]:
async def _find_unreferenced_groups(
self, state_groups: Collection[int]
) -> Set[int]:
"""Used when purging history to figure out which state groups can be
deleted.
@ -118,6 +128,78 @@ class PurgeEventsStorageController:
next_to_search |= prevs
state_groups_seen |= prevs
# We also check to see if anything referencing the state groups is
# also unreferenced. This helps ensure that we delete unreferenced
# state groups; if we don't, then we will de-delta them when we
# delete the other state groups, leading to increased DB usage.
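# (Illustrative example with made-up group IDs: if SG2 is stored as a
# delta against SG1 and only SG1 were deleted, SG2 would have to be
# de-deltaed into a full state row; if SG2 is itself unreferenced we can
# instead delete both and avoid that expansion.)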
next_edges = await self.stores.state.get_next_state_groups(current_search)
nexts = set(next_edges.keys())
nexts -= state_groups_seen
next_to_search |= nexts
state_groups_seen |= nexts
to_delete = state_groups_seen - referenced_groups
return to_delete
@wrap_as_background_process("_delete_state_groups_loop")
async def _delete_state_groups_loop(self) -> None:
"""Background task that deletes any state groups that may be pending
deletion."""
while True:
next_to_delete = await self.stores.state_deletion.get_next_state_group_collection_to_delete()
if next_to_delete is None:
break
(room_id, groups_to_sequences) = next_to_delete
made_progress = await self._delete_state_groups(
room_id, groups_to_sequences
)
# If no progress was made in deleting the state groups, then we
# break to allow a pause before trying again next time we get
# called.
if not made_progress:
break
async def _delete_state_groups(
self, room_id: str, groups_to_sequences: Mapping[int, int]
) -> bool:
"""Tries to delete the given state groups.
Returns:
Whether we made progress in deleting the state groups (or marking
them as referenced).
"""
# We double check if any of the state groups have become referenced.
# This shouldn't happen, as any usage should cause the state group to
# be removed from the pending-deletion table.
referenced_state_groups = await self.stores.main.get_referenced_state_groups(
groups_to_sequences
)
if referenced_state_groups:
# We mark any state groups that have become referenced as being
# used.
await self.stores.state_deletion.mark_state_groups_as_used(
referenced_state_groups
)
# Update list of state groups to remove referenced ones
groups_to_sequences = {
state_group: sequence_number
for state_group, sequence_number in groups_to_sequences.items()
if state_group not in referenced_state_groups
}
if not groups_to_sequences:
# We made progress here as long as we marked some state groups as
# now referenced.
return len(referenced_state_groups) > 0
return await self.stores.state.purge_unreferenced_state_groups(
room_id,
groups_to_sequences,
)

View file

@ -35,6 +35,7 @@ from typing import (
Iterable,
Iterator,
List,
Literal,
Mapping,
Optional,
Sequence,
@ -47,7 +48,7 @@ from typing import (
import attr
from prometheus_client import Counter, Histogram
from typing_extensions import Concatenate, Literal, ParamSpec
from typing_extensions import Concatenate, ParamSpec
from twisted.enterprise import adbapi
from twisted.internet.interfaces import IReactorCore
@ -2159,10 +2160,26 @@ class DatabasePool:
if rowcount > 1:
raise StoreError(500, "More than one row matched (%s)" % (table,))
# Ideally we could use the overload decorator here to specify that the
# return type is only optional if allow_none is True, but this does not work
# when you call a static method from an instance.
# See https://github.com/python/mypy/issues/7781
@overload
@staticmethod
def simple_select_one_txn(
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
retcols: Collection[str],
allow_none: Literal[False] = False,
) -> Tuple[Any, ...]: ...
@overload
@staticmethod
def simple_select_one_txn(
txn: LoggingTransaction,
table: str,
keyvalues: Dict[str, Any],
retcols: Collection[str],
allow_none: Literal[True] = True,
) -> Optional[Tuple[Any, ...]]: ...
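# A minimal standalone sketch of this overload-plus-Literal pattern (the names
# below are illustrative only, not part of this change): the `Literal` defaults
# let the type checker tie the `allow_none` flag to the return type.
#
#     from typing import Literal, Optional, overload
#
#     @overload
#     def lookup(key: str, allow_none: Literal[False] = False) -> int: ...
#     @overload
#     def lookup(key: str, allow_none: Literal[True] = True) -> Optional[int]: ...
#     def lookup(key: str, allow_none: bool = False) -> Optional[int]:
#         value = {"a": 1}.get(key)
#         if value is None and not allow_none:
#             raise KeyError(key)
#         return value
#
# With this, `lookup("a")` type-checks as `int`, while
# `lookup("a", allow_none=True)` type-checks as `Optional[int]`.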
@staticmethod
def simple_select_one_txn(
txn: LoggingTransaction,

View file

@ -26,6 +26,7 @@ from synapse.storage._base import SQLBaseStore
from synapse.storage.database import DatabasePool, make_conn
from synapse.storage.databases.main.events import PersistEventsStore
from synapse.storage.databases.state import StateGroupDataStore
from synapse.storage.databases.state.deletion import StateDeletionDataStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
@ -49,12 +50,14 @@ class Databases(Generic[DataStoreT]):
main
state
persist_events
state_deletion
"""
databases: List[DatabasePool]
main: "DataStore" # FIXME: https://github.com/matrix-org/synapse/issues/11165: actually an instance of `main_store_class`
state: StateGroupDataStore
persist_events: Optional[PersistEventsStore]
state_deletion: StateDeletionDataStore
def __init__(self, main_store_class: Type[DataStoreT], hs: "HomeServer"):
# Note we pass in the main store class here as workers use a different main
@ -63,6 +66,7 @@ class Databases(Generic[DataStoreT]):
self.databases = []
main: Optional[DataStoreT] = None
state: Optional[StateGroupDataStore] = None
state_deletion: Optional[StateDeletionDataStore] = None
persist_events: Optional[PersistEventsStore] = None
for database_config in hs.config.database.databases:
@ -114,7 +118,8 @@ class Databases(Generic[DataStoreT]):
if state:
raise Exception("'state' data store already configured")
state = StateGroupDataStore(database, db_conn, hs)
state_deletion = StateDeletionDataStore(database, db_conn, hs)
state = StateGroupDataStore(database, db_conn, hs, state_deletion)
db_conn.commit()
@ -135,7 +140,7 @@ class Databases(Generic[DataStoreT]):
if not main:
raise Exception("No 'main' database configured")
if not state:
if not state or not state_deletion:
raise Exception("No 'state' database configured")
# We use local variables here to ensure that the databases do not have
@ -143,3 +148,4 @@ class Databases(Generic[DataStoreT]):
self.main = main # type: ignore[assignment]
self.state = state
self.persist_events = persist_events
self.state_deletion = state_deletion

View file

@ -20,10 +20,19 @@
#
import logging
from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Tuple, Union, cast
from typing import (
TYPE_CHECKING,
Dict,
List,
Mapping,
Optional,
Tuple,
TypedDict,
Union,
cast,
)
import attr
from typing_extensions import TypedDict
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage._base import SQLBaseStore

View file

@ -27,6 +27,7 @@ from typing import (
Dict,
Iterable,
List,
Literal,
Mapping,
Optional,
Set,
@ -35,7 +36,6 @@ from typing import (
)
from canonicaljson import encode_canonical_json
from typing_extensions import Literal
from synapse.api.constants import EduTypes
from synapse.api.errors import Codes, StoreError

View file

@ -19,9 +19,18 @@
#
#
from typing import TYPE_CHECKING, Dict, Iterable, List, Mapping, Optional, Tuple, cast
from typing_extensions import Literal, TypedDict
from typing import (
TYPE_CHECKING,
Dict,
Iterable,
List,
Literal,
Mapping,
Optional,
Tuple,
TypedDict,
cast,
)
from synapse.api.errors import StoreError
from synapse.logging.opentracing import log_kv, trace
@ -510,19 +519,16 @@ class EndToEndRoomKeyStore(EndToEndRoomKeyBackgroundStore):
# it isn't there.
raise StoreError(404, "No backup with that version exists")
row = cast(
Tuple[int, str, str, Optional[int]],
self.db_pool.simple_select_one_txn(
txn,
table="e2e_room_keys_versions",
keyvalues={
"user_id": user_id,
"version": this_version,
"deleted": 0,
},
retcols=("version", "algorithm", "auth_data", "etag"),
allow_none=False,
),
row = self.db_pool.simple_select_one_txn(
txn,
table="e2e_room_keys_versions",
keyvalues={
"user_id": user_id,
"version": this_version,
"deleted": 0,
},
retcols=("version", "algorithm", "auth_data", "etag"),
allow_none=False,
)
return {
"auth_data": db_to_json(row[2]),

View file

@ -27,6 +27,7 @@ from typing import (
Dict,
Iterable,
List,
Literal,
Mapping,
Optional,
Sequence,
@ -39,7 +40,6 @@ from typing import (
import attr
from canonicaljson import encode_canonical_json
from typing_extensions import Literal
from synapse.api.constants import DeviceKeyAlgorithms
from synapse.appservice import (

View file

@ -35,12 +35,12 @@ from typing import (
Sequence,
Set,
Tuple,
TypedDict,
cast,
)
import attr
from prometheus_client import Counter
from typing_extensions import TypedDict
import synapse.metrics
from synapse.api.constants import (

View file

@ -30,6 +30,7 @@ from typing import (
Dict,
Iterable,
List,
Literal,
Mapping,
MutableMapping,
Optional,
@ -41,7 +42,6 @@ from typing import (
import attr
from prometheus_client import Gauge
from typing_extensions import Literal
from twisted.internet import defer

View file

@ -1510,15 +1510,14 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
# Override type because the return type is only optional if
# allow_none is True, and we don't want mypy throwing errors
# about None not being indexable.
pending, completed = cast(
Tuple[int, int],
self.db_pool.simple_select_one_txn(
txn,
"registration_tokens",
keyvalues={"token": token},
retcols=["pending", "completed"],
),
row = self.db_pool.simple_select_one_txn(
txn,
"registration_tokens",
keyvalues={"token": token},
retcols=("pending", "completed"),
)
pending = int(row[0])
completed = int(row[1])
# Decrement pending and increment completed
self.db_pool.simple_update_one_txn(

View file

@ -50,6 +50,7 @@ from typing import (
Dict,
Iterable,
List,
Literal,
Mapping,
Optional,
Protocol,
@ -61,7 +62,7 @@ from typing import (
import attr
from immutabledict import immutabledict
from typing_extensions import Literal, assert_never
from typing_extensions import assert_never
from twisted.internet import defer
@ -1837,15 +1838,14 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
dict
"""
stream_ordering, topological_ordering = cast(
Tuple[int, int],
self.db_pool.simple_select_one_txn(
txn,
"events",
keyvalues={"event_id": event_id, "room_id": room_id},
retcols=["stream_ordering", "topological_ordering"],
),
row = self.db_pool.simple_select_one_txn(
txn,
"events",
keyvalues={"event_id": event_id, "room_id": room_id},
retcols=("stream_ordering", "topological_ordering"),
)
stream_ordering = int(row[0])
topological_ordering = int(row[1])
# Paginating backwards includes the event at the token, but paginating
# forward doesn't.

View file

@ -31,6 +31,7 @@ from typing import (
Sequence,
Set,
Tuple,
TypedDict,
cast,
)
@ -44,8 +45,6 @@ try:
except ModuleNotFoundError:
USE_ICU = False
from typing_extensions import TypedDict
from synapse.api.errors import StoreError
from synapse.util.stringutils import non_null_str_or_none

View file

@ -0,0 +1,537 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
import contextlib
from typing import (
TYPE_CHECKING,
AbstractSet,
AsyncIterator,
Collection,
Mapping,
Optional,
Set,
Tuple,
)
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
make_in_list_sql_clause,
)
from synapse.storage.engines import PostgresEngine
from synapse.util.stringutils import shortstr
if TYPE_CHECKING:
from synapse.server import HomeServer
class StateDeletionDataStore:
"""Manages deletion of state groups in a safe manner.
Deleting state groups is challenging as before we actually delete them we
need to ensure that there are no in-flight events that refer to the state
groups that we want to delete.
To handle this, we take two approaches. First, before we persist any event
we ensure that the state group still exists and mark in the
`state_groups_persisting` table that the state group is about to be used.
(Note that we have to have the extra table here as state groups and events
can be in different databases, and thus we can't check for the existence of
state groups in the persist event transaction). Once the event has been
persisted, we can remove the row from `state_groups_persisting`. So long as
we check that table before deleting state groups, we can ensure that we
never persist events that reference deleted state groups, maintaining
database integrity.
However, we want to avoid throwing exceptions so deep in the process of
persisting events. So instead of deleting state groups immediately, we mark
them as pending/proposed for deletion and wait for a certain amount of time
before performing the deletion. When we come to handle new events that
reference state groups, we check if they are pending deletion and bump the
time for when they'll be deleted (to give a chance for the event to be
persisted, or not).
When deleting, we need to check that state groups remain unreferenced. There
is a race here where we a) fetch state groups that are ready for deletion,
b) check they're unreferenced, c) the state group becomes referenced but
then gets marked as pending deletion again, and d) during the deletion
transaction we recheck the `state_groups_pending_deletion` table, see that
the row still exists, and so continue with the deletion. To prevent this from
happening we add a `sequence_number` column to
`state_groups_pending_deletion`, and during deletion we ensure that the
sequence number of a state group we're about to delete doesn't change
between steps (a) and (d). So long as we always bump the sequence number
whenever a state group may become used, the race can never happen.
"""
# How long to wait before we delete state groups. This should be long enough
# for any in-flight events to be persisted. If events take longer to persist
# and any of the state groups they reference have been deleted, then the
# event will fail to persist (as well as any event in the same batch).
DELAY_BEFORE_DELETION_MS = 10 * 60 * 1000
def __init__(
self,
database: DatabasePool,
db_conn: LoggingDatabaseConnection,
hs: "HomeServer",
):
self._clock = hs.get_clock()
self.db_pool = database
self._instance_name = hs.get_instance_name()
with db_conn.cursor(txn_name="_clear_existing_persisting") as txn:
self._clear_existing_persisting(txn)
def _clear_existing_persisting(self, txn: LoggingTransaction) -> None:
"""On startup we clear any entries in `state_groups_persisting` that
match our instance name, in case of a previous unclean shutdown"""
self.db_pool.simple_delete_txn(
txn,
table="state_groups_persisting",
keyvalues={"instance_name": self._instance_name},
)
async def check_state_groups_and_bump_deletion(
self, state_groups: AbstractSet[int]
) -> Collection[int]:
"""Checks to make sure that the state groups haven't been deleted, and
if they're pending deletion we delay it (allowing time for any event
that will use them to finish persisting).
Returns:
The state groups that are missing, if any.
"""
return await self.db_pool.runInteraction(
"check_state_groups_and_bump_deletion",
self._check_state_groups_and_bump_deletion_txn,
state_groups,
# We don't need to lock if we're just doing a quick check, as the
# lock doesn't prevent any races here.
lock=False,
)
def _check_state_groups_and_bump_deletion_txn(
self, txn: LoggingTransaction, state_groups: AbstractSet[int], lock: bool = True
) -> Collection[int]:
"""Checks to make sure that the state groups haven't been deleted, and
if they're pending deletion we delay it (allowing time for any event
that will use them to finish persisting).
The `lock` flag controls whether we should lock the `state_group` rows we're
checking, which we should do when storing new groups.
Returns:
The state groups that are missing, if any.
"""
existing_state_groups = self._get_existing_groups_with_lock(
txn, state_groups, lock=lock
)
self._bump_deletion_txn(txn, existing_state_groups)
missing_state_groups = state_groups - existing_state_groups
if missing_state_groups:
return missing_state_groups
return ()
def _bump_deletion_txn(
self, txn: LoggingTransaction, state_groups: Collection[int]
) -> None:
"""Update any pending deletions of the state group that they may now be
referenced."""
if not state_groups:
return
now = self._clock.time_msec()
if isinstance(self.db_pool.engine, PostgresEngine):
clause, args = make_in_list_sql_clause(
self.db_pool.engine, "state_group", state_groups
)
sql = f"""
UPDATE state_groups_pending_deletion
SET sequence_number = DEFAULT, insertion_ts = ?
WHERE {clause}
"""
args.insert(0, now)
txn.execute(sql, args)
else:
rows = self.db_pool.simple_select_many_txn(
txn,
table="state_groups_pending_deletion",
column="state_group",
iterable=state_groups,
keyvalues={},
retcols=("state_group",),
)
if not rows:
return
state_groups_to_update = [state_group for (state_group,) in rows]
self.db_pool.simple_delete_many_txn(
txn,
table="state_groups_pending_deletion",
column="state_group",
values=state_groups_to_update,
keyvalues={},
)
self.db_pool.simple_insert_many_txn(
txn,
table="state_groups_pending_deletion",
keys=("state_group", "insertion_ts"),
values=[(state_group, now) for state_group in state_groups_to_update],
)
def _get_existing_groups_with_lock(
self, txn: LoggingTransaction, state_groups: Collection[int], lock: bool = True
) -> AbstractSet[int]:
"""Return which of the given state groups are in the database, and locks
those rows with `KEY SHARE` to ensure they don't get concurrently
deleted (if `lock` is true)."""
clause, args = make_in_list_sql_clause(self.db_pool.engine, "id", state_groups)
sql = f"""
SELECT id FROM state_groups
WHERE {clause}
"""
if lock and isinstance(self.db_pool.engine, PostgresEngine):
# On postgres we add a row level lock to the rows to ensure that we
# conflict with any concurrent DELETEs. A `FOR KEY SHARE` lock will
# not conflict with other reads.
sql += """
FOR KEY SHARE
"""
txn.execute(sql, args)
return {state_group for (state_group,) in txn}
@contextlib.asynccontextmanager
async def persisting_state_group_references(
self, event_and_contexts: Collection[Tuple[EventBase, EventContext]]
) -> AsyncIterator[None]:
"""Wraps the persistence of the given events and contexts, ensuring that
any state groups referenced still exist and that they don't get deleted
during this."""
referenced_state_groups: Set[int] = set()
for event, ctx in event_and_contexts:
if ctx.rejected or event.internal_metadata.is_outlier():
continue
assert ctx.state_group is not None
referenced_state_groups.add(ctx.state_group)
if ctx.state_group_before_event:
referenced_state_groups.add(ctx.state_group_before_event)
if not referenced_state_groups:
# We don't reference any state groups, so nothing to do
yield
return
await self.db_pool.runInteraction(
"mark_state_groups_as_persisting",
self._mark_state_groups_as_persisting_txn,
referenced_state_groups,
)
error = True
try:
yield None
error = False
finally:
await self.db_pool.runInteraction(
"finish_persisting",
self._finish_persisting_txn,
referenced_state_groups,
error=error,
)
def _mark_state_groups_as_persisting_txn(
self, txn: LoggingTransaction, state_groups: Set[int]
) -> None:
"""Marks the given state groups as being persisted."""
existing_state_groups = self._get_existing_groups_with_lock(txn, state_groups)
missing_state_groups = state_groups - existing_state_groups
if missing_state_groups:
raise Exception(
f"state groups have been deleted: {shortstr(missing_state_groups)}"
)
self.db_pool.simple_insert_many_txn(
txn,
table="state_groups_persisting",
keys=("state_group", "instance_name"),
values=[(state_group, self._instance_name) for state_group in state_groups],
)
def _finish_persisting_txn(
self, txn: LoggingTransaction, state_groups: Collection[int], error: bool
) -> None:
"""Mark the state groups as having finished persistence.
If `error` is true then we assume the state groups were not persisted,
and so we do not clear them from the pending deletion table.
"""
self.db_pool.simple_delete_many_txn(
txn,
table="state_groups_persisting",
column="state_group",
values=state_groups,
keyvalues={"instance_name": self._instance_name},
)
if error:
# The state groups may or may not have been persisted, so we need to
# bump the deletion to ensure we recheck if they have become
# referenced.
self._bump_deletion_txn(txn, state_groups)
return
self.db_pool.simple_delete_many_batch_txn(
txn,
table="state_groups_pending_deletion",
keys=("state_group",),
values=[(state_group,) for state_group in state_groups],
)
async def mark_state_groups_as_pending_deletion(
self, state_groups: Collection[int]
) -> None:
"""Mark the given state groups as pending deletion"""
now = self._clock.time_msec()
await self.db_pool.simple_upsert_many(
table="state_groups_pending_deletion",
key_names=("state_group",),
key_values=[(state_group,) for state_group in state_groups],
value_names=("insertion_ts",),
value_values=[(now,) for _ in state_groups],
desc="mark_state_groups_as_pending_deletion",
)
async def mark_state_groups_as_used(self, state_groups: Collection[int]) -> None:
"""Mark the given state groups as now being referenced"""
await self.db_pool.simple_delete_many(
table="state_groups_pending_deletion",
column="state_group",
iterable=state_groups,
keyvalues={},
desc="mark_state_groups_as_used",
)
async def get_pending_deletions(
self, state_groups: Collection[int]
) -> Mapping[int, int]:
"""Get which state groups are pending deletion.
Returns:
a mapping from state groups that are pending deletion to their
sequence number
"""
rows = await self.db_pool.simple_select_many_batch(
table="state_groups_pending_deletion",
column="state_group",
iterable=state_groups,
retcols=("state_group", "sequence_number"),
keyvalues={},
desc="get_pending_deletions",
)
return dict(rows)
def get_state_groups_ready_for_potential_deletion_txn(
self,
txn: LoggingTransaction,
state_groups_to_sequence_numbers: Mapping[int, int],
) -> Collection[int]:
"""Given a set of state groups, return which state groups can
potentially be deleted.
The state groups must have been checked to see if they remain
unreferenced before calling this function.
Note: This must be called within the same transaction in which the state
groups are deleted.
Args:
state_groups_to_sequence_numbers: The state groups, and the sequence
numbers from before the state groups were checked to see if they
were unreferenced.
Returns:
The subset of state groups that can safely be deleted
"""
if not state_groups_to_sequence_numbers:
return state_groups_to_sequence_numbers
if isinstance(self.db_pool.engine, PostgresEngine):
# On postgres we want to lock the rows FOR UPDATE as early as
# possible to help avoid conflicts.
clause, args = make_in_list_sql_clause(
self.db_pool.engine, "id", state_groups_to_sequence_numbers
)
sql = f"""
SELECT id FROM state_groups
WHERE {clause}
FOR UPDATE
"""
txn.execute(sql, args)
# Check the deletion status in the DB of the given state groups
clause, args = make_in_list_sql_clause(
self.db_pool.engine,
column="state_group",
iterable=state_groups_to_sequence_numbers,
)
sql = f"""
SELECT state_group, insertion_ts, sequence_number FROM (
SELECT state_group, insertion_ts, sequence_number FROM state_groups_pending_deletion
UNION
SELECT state_group, null, null FROM state_groups_persisting
) AS s
WHERE {clause}
"""
txn.execute(sql, args)
# The above query will return potentially two rows per state group (one
# for each table), so we track which state groups have had enough time
# elapse and which are not ready to be deleted.
ready_to_be_deleted = set()
not_ready_to_be_deleted = set()
now = self._clock.time_msec()
for state_group, insertion_ts, sequence_number in txn:
if insertion_ts is None:
# A null insertion_ts means that we are currently persisting
# events that reference the state group, so we don't delete
# them.
not_ready_to_be_deleted.add(state_group)
continue
# We know this can't be None if insertion_ts is not None
assert sequence_number is not None
# Check if the sequence number has changed, if it has then it
# indicates that the state group may have become referenced since we
# checked.
if state_groups_to_sequence_numbers[state_group] != sequence_number:
not_ready_to_be_deleted.add(state_group)
continue
if now - insertion_ts < self.DELAY_BEFORE_DELETION_MS:
# Not enough time has elapsed to allow us to delete.
not_ready_to_be_deleted.add(state_group)
continue
ready_to_be_deleted.add(state_group)
can_be_deleted = ready_to_be_deleted - not_ready_to_be_deleted
if not_ready_to_be_deleted:
# If there are any state groups that aren't ready to be deleted,
# then we also need to remove any state groups that are referenced
# by them.
clause, args = make_in_list_sql_clause(
self.db_pool.engine,
column="state_group",
iterable=state_groups_to_sequence_numbers,
)
sql = f"""
WITH RECURSIVE ancestors(state_group) AS (
SELECT DISTINCT prev_state_group
FROM state_group_edges WHERE {clause}
UNION
SELECT prev_state_group
FROM state_group_edges
INNER JOIN ancestors USING (state_group)
)
SELECT state_group FROM ancestors
"""
txn.execute(sql, args)
can_be_deleted.difference_update(state_group for (state_group,) in txn)
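# (Illustrative example with made-up IDs: suppose SG3's prev_state_group is
# SG2 and SG2's prev_state_group is SG1. If SG3 is in
# `not_ready_to_be_deleted`, then SG2 and SG1 are ancestors of the candidate
# set and get removed from `can_be_deleted`, even if their own delay has
# elapsed.)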
return can_be_deleted
async def get_next_state_group_collection_to_delete(
self,
) -> Optional[Tuple[str, Mapping[int, int]]]:
"""Get the next set of state groups to try and delete
Returns:
2-tuple of room_id and mapping of state groups to sequence number.
"""
return await self.db_pool.runInteraction(
"get_next_state_group_collection_to_delete",
self._get_next_state_group_collection_to_delete_txn,
)
def _get_next_state_group_collection_to_delete_txn(
self,
txn: LoggingTransaction,
) -> Optional[Tuple[str, Mapping[int, int]]]:
"""Implementation of `get_next_state_group_collection_to_delete`"""
# We want to return chunks of state groups that were marked for deletion
# at the same time (this isn't necessary, just more efficient). We do
# this by looking for the oldest insertion_ts, and then pulling out all
# rows that have the same insertion_ts (and room ID).
now = self._clock.time_msec()
sql = """
SELECT room_id, insertion_ts
FROM state_groups_pending_deletion AS sd
INNER JOIN state_groups AS sg ON (id = sd.state_group)
LEFT JOIN state_groups_persisting AS sp USING (state_group)
WHERE insertion_ts < ? AND sp.state_group IS NULL
ORDER BY insertion_ts
LIMIT 1
"""
txn.execute(sql, (now - self.DELAY_BEFORE_DELETION_MS,))
row = txn.fetchone()
if not row:
return None
(room_id, insertion_ts) = row
sql = """
SELECT state_group, sequence_number
FROM state_groups_pending_deletion AS sd
INNER JOIN state_groups AS sg ON (id = sd.state_group)
LEFT JOIN state_groups_persisting AS sp USING (state_group)
WHERE room_id = ? AND insertion_ts = ? AND sp.state_group IS NULL
ORDER BY insertion_ts
"""
txn.execute(sql, (room_id, insertion_ts))
return room_id, dict(txn)

View file

@ -22,10 +22,10 @@
import logging
from typing import (
TYPE_CHECKING,
Collection,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
@ -36,7 +36,10 @@ import attr
from synapse.api.constants import EventTypes
from synapse.events import EventBase
from synapse.events.snapshot import UnpersistedEventContext, UnpersistedEventContextBase
from synapse.events.snapshot import (
UnpersistedEventContext,
UnpersistedEventContextBase,
)
from synapse.logging.opentracing import tag_args, trace
from synapse.storage._base import SQLBaseStore
from synapse.storage.database import (
@ -55,6 +58,7 @@ from synapse.util.cancellation import cancellable
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.state.deletion import StateDeletionDataStore
logger = logging.getLogger(__name__)
@ -83,8 +87,10 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
database: DatabasePool,
db_conn: LoggingDatabaseConnection,
hs: "HomeServer",
state_deletion_store: "StateDeletionDataStore",
):
super().__init__(database, db_conn, hs)
self._state_deletion_store = state_deletion_store
# Originally the state store used a single DictionaryCache to cache the
# event IDs for the state types in a given state group to avoid hammering
@ -467,14 +473,15 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
Returns:
A list of state groups
"""
is_in_db = self.db_pool.simple_select_one_onecol_txn(
txn,
table="state_groups",
keyvalues={"id": prev_group},
retcol="id",
allow_none=True,
# We need to check that the prev group isn't about to be deleted
is_missing = (
self._state_deletion_store._check_state_groups_and_bump_deletion_txn(
txn,
{prev_group},
)
)
if not is_in_db:
if is_missing:
raise Exception(
"Trying to persist state with unpersisted prev_group: %r"
% (prev_group,)
@ -546,6 +553,7 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
for key, state_id in context.state_delta_due_to_event.items()
],
)
return events_and_context
return await self.db_pool.runInteraction(
@ -601,14 +609,15 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
The state group if successfully created, or None if the state
needs to be persisted as a full state.
"""
is_in_db = self.db_pool.simple_select_one_onecol_txn(
txn,
table="state_groups",
keyvalues={"id": prev_group},
retcol="id",
allow_none=True,
# We need to check that the prev group isn't about to be deleted
is_missing = (
self._state_deletion_store._check_state_groups_and_bump_deletion_txn(
txn,
{prev_group},
)
)
if not is_in_db:
if is_missing:
raise Exception(
"Trying to persist state with unpersisted prev_group: %r"
% (prev_group,)
@ -726,8 +735,10 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
)
async def purge_unreferenced_state_groups(
self, room_id: str, state_groups_to_delete: Collection[int]
) -> None:
self,
room_id: str,
state_groups_to_sequence_numbers: Mapping[int, int],
) -> bool:
"""Deletes no longer referenced state groups and de-deltas any state
groups that reference them.
@ -735,21 +746,31 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
room_id: The room the state groups belong to (must all be in the
same room).
state_groups_to_sequence_numbers: The state groups to delete, along with
the sequence numbers from when they were checked to be unreferenced.
Returns:
Whether any state groups were actually deleted.
"""
await self.db_pool.runInteraction(
return await self.db_pool.runInteraction(
"purge_unreferenced_state_groups",
self._purge_unreferenced_state_groups,
room_id,
state_groups_to_delete,
state_groups_to_sequence_numbers,
)
def _purge_unreferenced_state_groups(
self,
txn: LoggingTransaction,
room_id: str,
state_groups_to_delete: Collection[int],
) -> None:
state_groups_to_sequence_numbers: Mapping[int, int],
) -> bool:
state_groups_to_delete = self._state_deletion_store.get_state_groups_ready_for_potential_deletion_txn(
txn, state_groups_to_sequence_numbers
)
if not state_groups_to_delete:
return False
logger.info(
"[purge] found %i state groups to delete", len(state_groups_to_delete)
)
@ -812,6 +833,8 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
[(sg,) for sg in state_groups_to_delete],
)
return True
@trace
@tag_args
async def get_previous_state_groups(
@ -830,7 +853,7 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
List[Tuple[int, int]],
await self.db_pool.simple_select_many_batch(
table="state_group_edges",
column="prev_state_group",
column="state_group",
iterable=state_groups,
keyvalues={},
retcols=("state_group", "prev_state_group"),
@ -840,6 +863,35 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore):
return dict(rows)
@trace
@tag_args
async def get_next_state_groups(
self, state_groups: Iterable[int]
) -> Dict[int, int]:
"""Fetch the groups that have the given state groups as their previous
state groups.
Args:
state_groups
Returns:
A mapping from state group to previous state group.
"""
rows = cast(
List[Tuple[int, int]],
await self.db_pool.simple_select_many_batch(
table="state_group_edges",
column="prev_state_group",
iterable=state_groups,
keyvalues={},
retcols=("state_group", "prev_state_group"),
desc="get_next_state_groups",
),
)
return dict(rows)
async def purge_room_state(self, room_id: str) -> None:
return await self.db_pool.runInteraction(
"purge_room_state",

View file

@ -19,7 +19,7 @@
#
#
SCHEMA_VERSION = 88 # remember to update the list below when updating
SCHEMA_VERSION = 89 # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema
This should be incremented whenever the codebase changes its requirements on the
@ -155,6 +155,9 @@ Changes in SCHEMA_VERSION = 88
be posted in response to a resettable timeout or an on-demand action.
- Add background update to fix data integrity issue in the
`sliding_sync_membership_snapshots` -> `forgotten` column
Changes in SCHEMA_VERSION = 89
- Add `state_groups_pending_deletion` and `state_groups_persisting` tables.
"""

View file

@ -0,0 +1,39 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- See the `StateDeletionDataStore` for details of these tables.
-- We add state groups to this table when we want to later delete them. The
-- `insertion_ts` column indicates when the state group was proposed for
-- deletion (rather than when it should be deleted).
CREATE TABLE IF NOT EXISTS state_groups_pending_deletion (
sequence_number $%AUTO_INCREMENT_PRIMARY_KEY%$,
state_group BIGINT NOT NULL,
insertion_ts BIGINT NOT NULL
);
CREATE UNIQUE INDEX state_groups_pending_deletion_state_group ON state_groups_pending_deletion(state_group);
CREATE INDEX state_groups_pending_deletion_insertion_ts ON state_groups_pending_deletion(insertion_ts);
-- Holds the state groups the worker is currently persisting.
--
-- The `sequence_number` column of the `state_groups_pending_deletion` table
-- *must* be updated whenever a state group may have become referenced.
CREATE TABLE IF NOT EXISTS state_groups_persisting (
state_group BIGINT NOT NULL,
instance_name TEXT NOT NULL,
PRIMARY KEY (state_group, instance_name)
);
CREATE INDEX state_groups_persisting_instance_name ON state_groups_persisting(instance_name);

View file

@ -26,14 +26,13 @@ from typing import (
List,
Mapping,
Optional,
Protocol,
Sequence,
Tuple,
Type,
Union,
)
from typing_extensions import Protocol
"""
Some very basic protocol definitions for the DB-API2 classes specified in PEP-249
"""

View file

@ -40,6 +40,7 @@ from typing import (
Set,
Tuple,
Type,
TypedDict,
TypeVar,
Union,
overload,
@ -49,7 +50,7 @@ import attr
from immutabledict import immutabledict
from signedjson.key import decode_verify_key_bytes
from signedjson.types import VerifyKey
from typing_extensions import Self, TypedDict
from typing_extensions import Self
from unpaddedbase64 import decode_base64
from zope.interface import Interface
@ -664,6 +665,11 @@ class RoomStreamToken(AbstractMultiWriterStreamToken):
@classmethod
async def parse(cls, store: "PurgeEventsStore", string: str) -> "RoomStreamToken":
# Check that it looks like a Synapse token first. We do this so that
# we don't log at the exception-level for obviously incorrect tokens.
if not string or string[0] not in ("s", "t", "m"):
raise SynapseError(400, f"Invalid room stream token {string!r}")
try:
if string[0] == "s":
return cls(topological=None, stream=int(string[1:]))
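# For illustration (based on the branch above; the value is made up): a
# plain stream token such as "s26" parses to topological=None, stream=26,
# while a string not starting with "s", "t" or "m" is rejected with a 400
# before we attempt to parse it.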

View file

@ -41,6 +41,7 @@ from typing import (
Hashable,
Iterable,
List,
Literal,
Optional,
Set,
Tuple,
@ -51,7 +52,7 @@ from typing import (
)
import attr
from typing_extensions import Concatenate, Literal, ParamSpec, Unpack
from typing_extensions import Concatenate, ParamSpec, Unpack
from twisted.internet import defer
from twisted.internet.defer import CancelledError

View file

@ -21,10 +21,19 @@
import enum
import logging
import threading
from typing import Dict, Generic, Iterable, Optional, Set, Tuple, TypeVar, Union
from typing import (
Dict,
Generic,
Iterable,
Literal,
Optional,
Set,
Tuple,
TypeVar,
Union,
)
import attr
from typing_extensions import Literal
from synapse.util.caches.lrucache import LruCache
from synapse.util.caches.treecache import TreeCache

View file

@ -21,10 +21,9 @@
import logging
from collections import OrderedDict
from typing import Any, Generic, Iterable, Optional, TypeVar, Union, overload
from typing import Any, Generic, Iterable, Literal, Optional, TypeVar, Union, overload
import attr
from typing_extensions import Literal
from twisted.internet import defer

View file

@ -34,6 +34,7 @@ from typing import (
Generic,
Iterable,
List,
Literal,
Optional,
Set,
Tuple,
@ -44,8 +45,6 @@ from typing import (
overload,
)
from typing_extensions import Literal
from twisted.internet import reactor
from twisted.internet.interfaces import IReactorTime

View file

@ -30,14 +30,13 @@ from typing import (
Iterator,
List,
Mapping,
Protocol,
Set,
Sized,
Tuple,
TypeVar,
)
from typing_extensions import Protocol
T = TypeVar("T")
S = TypeVar("S", bound="_SelfSlice")

View file

@ -22,12 +22,11 @@
"""Utilities for manipulating macaroons"""
from typing import Callable, Optional
from typing import Callable, Literal, Optional
import attr
import pymacaroons
from pymacaroons.exceptions import MacaroonVerificationFailedException
from typing_extensions import Literal
from synapse.util import Clock, stringutils

View file

@ -22,10 +22,19 @@
import logging
from functools import wraps
from types import TracebackType
from typing import Awaitable, Callable, Dict, Generator, Optional, Type, TypeVar
from typing import (
Awaitable,
Callable,
Dict,
Generator,
Optional,
Protocol,
Type,
TypeVar,
)
from prometheus_client import CollectorRegistry, Counter, Metric
from typing_extensions import Concatenate, ParamSpec, Protocol
from typing_extensions import Concatenate, ParamSpec
from synapse.logging.context import (
ContextResourceUsage,

View file

@ -132,6 +132,8 @@ class ConfigLoadingFileTestCase(ConfigFileTestCase):
"turn_shared_secret_path: /does/not/exist",
"registration_shared_secret_path: /does/not/exist",
"macaroon_secret_key_path: /does/not/exist",
"experimental_features:\n msc3861:\n client_secret_path: /does/not/exist",
"experimental_features:\n msc3861:\n admin_token_path: /does/not/exist",
*["redis:\n enabled: true\n password_path: /does/not/exist"]
* (hiredis is not None),
]
@ -157,6 +159,14 @@ class ConfigLoadingFileTestCase(ConfigFileTestCase):
"macaroon_secret_key_path: {}",
lambda c: c.key.macaroon_secret_key,
),
(
"experimental_features:\n msc3861:\n client_secret_path: {}",
lambda c: c.experimental.msc3861.client_secret().encode("utf-8"),
),
(
"experimental_features:\n msc3861:\n admin_token_path: {}",
lambda c: c.experimental.msc3861.admin_token().encode("utf-8"),
),
*[
(
"redis:\n enabled: true\n password_path: {}",

View file

@ -807,6 +807,7 @@ class FederationEventHandlerTests(unittest.FederatingHomeserverTestCase):
OTHER_USER = f"@user:{self.OTHER_SERVER_NAME}"
main_store = self.hs.get_datastores().main
state_deletion_store = self.hs.get_datastores().state_deletion
# Create the room.
kermit_user_id = self.register_user("kermit", "test")
@ -958,7 +959,9 @@ class FederationEventHandlerTests(unittest.FederatingHomeserverTestCase):
bert_member_event.event_id: bert_member_event,
rejected_kick_event.event_id: rejected_kick_event,
},
state_res_store=StateResolutionStore(main_store),
state_res_store=StateResolutionStore(
main_store, state_deletion_store
),
)
),
[bert_member_event.event_id, rejected_kick_event.event_id],
@ -1003,7 +1006,9 @@ class FederationEventHandlerTests(unittest.FederatingHomeserverTestCase):
rejected_power_levels_event.event_id,
],
event_map={},
state_res_store=StateResolutionStore(main_store),
state_res_store=StateResolutionStore(
main_store, state_deletion_store
),
full_conflicted_set=set(),
)
),

View file

@ -795,7 +795,7 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
req = SynapseRequest(channel, self.site) # type: ignore[arg-type]
req.client.host = MAS_IPV4_ADDR
req.requestHeaders.addRawHeader(
"Authorization", f"Bearer {self.auth._admin_token}"
"Authorization", f"Bearer {self.auth._admin_token()}"
)
req.requestHeaders.addRawHeader("User-Agent", MAS_USER_AGENT)
req.content = BytesIO(b"")

View file

@ -1267,6 +1267,38 @@ class OidcHandlerTestCase(HomeserverTestCase):
auth_provider_session_id=None,
)
@override_config(
{
"oidc_config": {
**DEFAULT_CONFIG,
"attribute_requirements": [
{"attribute": "test", "one_of": ["foo", "bar"]}
],
}
}
)
def test_attribute_requirements_one_of(self) -> None:
"""Test that auth succeeds if userinfo attribute has multiple values and CONTAINS required value"""
# userinfo with "test": ["bar"] attribute should succeed.
userinfo = {
"sub": "tester",
"username": "tester",
"test": ["bar"],
}
request, _ = self.start_authorization(userinfo)
self.get_success(self.handler.handle_oidc_callback(request))
# check that the auth handler got called as expected
self.complete_sso_login.assert_called_once_with(
"@tester:test",
self.provider.idp_id,
request,
ANY,
None,
new_user=True,
auth_provider_session_id=None,
)
@override_config(
{
"oidc_config": {

View file

@ -363,6 +363,52 @@ class SamlHandlerTestCase(HomeserverTestCase):
auth_provider_session_id=None,
)
@override_config(
{
"saml2_config": {
"attribute_requirements": [
{"attribute": "userGroup", "one_of": ["staff", "admin"]},
],
},
}
)
def test_attribute_requirements_one_of(self) -> None:
"""The required attributes can be comma-separated."""
# stub out the auth handler
auth_handler = self.hs.get_auth_handler()
auth_handler.complete_sso_login = AsyncMock() # type: ignore[method-assign]
# The response doesn't have the proper group.
saml_response = FakeAuthnResponse(
{"uid": "test_user", "username": "test_user", "userGroup": ["nogroup"]}
)
request = _mock_request()
self.get_success(
self.handler._handle_authn_response(request, saml_response, "redirect_uri")
)
auth_handler.complete_sso_login.assert_not_called()
# Add the proper attributes and it should succeed.
saml_response = FakeAuthnResponse(
{"uid": "test_user", "username": "test_user", "userGroup": ["admin"]}
)
request.reset_mock()
self.get_success(
self.handler._handle_authn_response(request, saml_response, "redirect_uri")
)
# check that the auth handler got called as expected
auth_handler.complete_sso_login.assert_called_once_with(
"@test_user:test",
"saml",
request,
"redirect_uri",
None,
new_user=True,
auth_provider_session_id=None,
)
def _mock_request() -> Mock:
"""Returns a mock which will stand in as a SynapseRequest"""

View file

@ -23,14 +23,13 @@ import shutil
import tempfile
from binascii import unhexlify
from io import BytesIO
from typing import Any, BinaryIO, ClassVar, Dict, List, Optional, Tuple, Union
from typing import Any, BinaryIO, ClassVar, Dict, List, Literal, Optional, Tuple, Union
from unittest.mock import MagicMock, Mock, patch
from urllib import parse
import attr
from parameterized import parameterized, parameterized_class
from PIL import Image as Image
from typing_extensions import Literal
from twisted.internet import defer
from twisted.internet.defer import Deferred

View file

@ -19,12 +19,11 @@
#
#
from importlib import metadata
from typing import Dict, Tuple
from typing import Dict, Protocol, Tuple
from unittest.mock import patch
from pkg_resources import parse_version
from prometheus_client.core import Sample
from typing_extensions import Protocol
from synapse.app._base import _set_prometheus_client_use_created_metrics
from synapse.metrics import REGISTRY, InFlightGauge, generate_latest

View file

@ -27,6 +27,7 @@ from typing import (
Collection,
Dict,
List,
Literal,
Optional,
Tuple,
Union,
@ -35,7 +36,6 @@ from unittest.mock import Mock
from urllib.parse import urlencode
import pymacaroons
from typing_extensions import Literal
from twisted.test.proto_helpers import MemoryReactor
from twisted.web.resource import Resource

View file

@ -24,14 +24,13 @@ import json
import os
import re
import shutil
from typing import Any, BinaryIO, Dict, List, Optional, Sequence, Tuple, Type
from typing import Any, BinaryIO, ClassVar, Dict, List, Optional, Sequence, Tuple, Type
from unittest.mock import MagicMock, Mock, patch
from urllib import parse
from urllib.parse import quote, urlencode
from parameterized import parameterized, parameterized_class
from PIL import Image as Image
from typing_extensions import ClassVar
from twisted.internet import defer
from twisted.internet._resolver import HostResolution

Some files were not shown because too many files have changed in this diff.